Compare commits

...

169 Commits

Author SHA1 Message Date
Lance Martin
04fa5bd65f Merge branch 'master' into rlm/sql-pgvector-template 2023-11-13 15:31:16 -08:00
Lance Martin
e549397001 fmt 2023-11-13 15:30:48 -08:00
wemysschen
a591cdb67d add cookbook for RAG with baidu QIANFAN and elasticsearch (#13287)
**Description:** 
Add a cookbook for RAG with Baidu QIANFAN and Elasticsearch.

Co-authored-by: wemysschen <root@icoding-cwx.bcc-szzj.baidu.com>
2023-11-13 14:45:24 -08:00
mertkayhan
9b4974871d IMPROVEMENT Increase flexibility of ElasticVectorSearch (#6863)
Hey @rlancemartin, @eyurtsev ,

I made some minimal changes to the `ElasticVectorSearch` client so that
it plays better with existing ES indices (see the usage sketch after the list below).

Main changes are as follows:

1. You can pass the dense vector field name into `_default_script_query`
2. You can pass a custom script query implementation and the respective
parameters to `similarity_search_with_score`
3. You can pass functions for building page content and metadata for the
resulting `Document`
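
A rough sketch of what this flexibility enables. The helper names and the query shape here are illustrative assumptions, not the exact signatures from the PR:

```python
# Illustrative only: names are assumptions, not the exact API of this PR.
def build_page_content(hit: dict) -> str:
    # Build Document.page_content from a custom source field.
    return hit["_source"]["body"]

def build_metadata(hit: dict) -> dict:
    # Build Document.metadata from whatever fields the index carries.
    return {"url": hit["_source"].get("url")}

def custom_script_query(query_vector: list, vector_field: str) -> dict:
    # Plain cosine similarity against an existing dense_vector field.
    return {
        "script_score": {
            "query": {"match_all": {}},
            "script": {
                "source": f"cosineSimilarity(params.query_vector, '{vector_field}') + 1.0",
                "params": {"query_vector": query_vector},
            },
        }
    }
```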

2023-11-13 14:36:03 -08:00
Lance Martin
39852dffd2 Cookbook for multi-modal RAG eval (#13272) 2023-11-13 14:26:02 -08:00
Erick Friis
50a5c919f0 IMPROVEMENT self-query template (#13305)
- [ ]
https://github.com/langchain-ai/langchain/pull/12694#discussion_r1391334719
-> keep date
- [x]
https://github.com/langchain-ai/langchain/pull/12694#discussion_r1391336586
2023-11-13 14:03:15 -08:00
Lance Martin
0686096728 fmt 2023-11-13 13:47:05 -08:00
Lance Martin
adfe14001a fmt 2023-11-13 13:28:09 -08:00
Yasin
b46f88d364 IMPROVEMENT add license file to subproject (#8403)

hi!
This is pretty straightforward: the sdist package does not contain the
license file (which is needed by e.g. conda) because the package is
built from the subdirectory and can't see the license.
I _copied_ the license, but since I'm unfamiliar with the project's
direction, I'm not sure that's correct.
thanks!

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2023-11-13 11:48:21 -08:00
Rui Ramos
ff19a62afc Fix Pinecone cosine relevance score (#8920)
Fixes: #8207

Description:
Pinecone returns scores (not distances) with cosine similarity. The
values according to the docs are [-1, 1], although I could never
reproduce negative values.

This PR ensures that the score returned from Pinecone is preserved,
rather than inverted, so the most relevant documents can be filtered
(e.g. when using similarity thresholds)
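
For intuition, preserving the score amounts to a relevance-score function that is effectively the identity for cosine; a minimal sketch of the behavior, not the exact code:

```python
# Pinecone returns a cosine *similarity* (higher is better), not a distance.
def relevance_score_fn(score: float) -> float:
    return score  # preserve the score, don't invert it

# With scores preserved, similarity thresholds filter as expected:
docs_and_scores = [("doc_a", 0.92), ("doc_b", 0.31)]
relevant = [doc for doc, s in docs_and_scores if relevance_score_fn(s) >= 0.8]
assert relevant == ["doc_a"]
```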

I'll leave this as a draft PR as I couldn't run the tests (my pinecone
account might not be enough - some errors were being thrown around
namespaces) so hopefully someone who _can_ will pick this up.

Maintainers:
@rlancemartin, @eyurtsev

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2023-11-13 11:47:38 -08:00
Bagatur
2e42ed5de6 Self-query template (#12694)
Co-authored-by: Erick Friis <erick@langchain.dev>
2023-11-13 11:44:19 -08:00
Konstantin Spieß
1e43025bf5 Fix serialization issue in Matching Engine Vector Store (#13266)
- **Description:** Fixed a serialization issue in the add_texts method
of the Matching Engine Vector Store caused by a typo, leading to an
attempt to serialize the json module itself.
  - **Issue:** #12154 
  - **Dependencies:** ./.
  - **Tag maintainer:**
2023-11-13 11:04:11 -08:00
William FH
9169d77cf6 Update error message in evaluation runner (#13296) 2023-11-13 11:03:20 -08:00
Leonie
32c493e3df Refine Weaviate docs and add RAG example (#13057)
- **Description:** Refine Weaviate tutorial and add an example for
Retrieval-Augmented Generation (RAG)
  - **Issue:** (not applicable),
  - **Dependencies:** none
  - **Tag maintainer:** @baskaryan
  - **Twitter handle:** @helloiamleonie

Co-authored-by: Leonie <leonie@Leonies-MBP-2.fritz.box>
2023-11-13 10:59:19 -08:00
takatost
f22f273f93 FIX: 'from_texts' method in Weaviate with non-existent kwargs param (#11604)
Because external inputs may include UUIDs, there can be additional
values in `**kwargs`, while Weaviate's `__init__` method does not
support passing extra keyword parameters.
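
The underlying pattern is to consume the extra keys before constructing the client rather than forwarding all of `**kwargs`; a simplified sketch (key names are illustrative):

```python
def consume_extra_kwargs(kwargs: dict) -> dict:
    """Pop keys the wrapper handles itself (e.g. externally supplied UUIDs)
    so they never reach an __init__ that rejects unknown keyword arguments."""
    extras = {}
    for key in ("uuids", "batch_size"):
        if key in kwargs:
            extras[key] = kwargs.pop(key)
    return extras

kwargs = {"uuids": ["0b7e1a55"], "some_init_arg": 1}
extras = consume_extra_kwargs(kwargs)
# extras -> {'uuids': ['0b7e1a55']}; kwargs now only holds __init__-safe args.
```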

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2023-11-13 10:32:20 -08:00
Frank995
971d2b2e34 Add missing filter to max_marginal_relevance_search inner call to max_marginal_relevance_search_by_vector (#13260)
When calling `max_marginal_relevance_search` from PGVector, the `filter`
param is not carried over to `max_marginal_relevance_search_by_vector`.
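
The fix is essentially a one-argument pass-through; schematically (signatures simplified, not the exact code):

```python
# Schematic of the fix: `filter` is now forwarded to the inner call.
def max_marginal_relevance_search(self, query, k=4, fetch_k=20, filter=None, **kwargs):
    embedding = self.embedding_function.embed_query(query)
    return self.max_marginal_relevance_search_by_vector(
        embedding, k=k, fetch_k=fetch_k, filter=filter, **kwargs
    )
```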

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-11-13 10:31:34 -08:00
chevalmuscle
3ad78e48e2 Use endpoint_url if provided with boto3 session for dynamodb (#11622)
- **Description:** Uses `endpoint_url` if provided with a boto3 session.
When running dynamodb locally, credentials are required even if invalid.
With this change, it is possible to pass a boto3 session with
credentials and specify an `endpoint_url`, e.g. as sketched below.
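
For example, pointing at DynamoDB Local with dummy credentials (the endpoint here is illustrative):

```python
import boto3

# DynamoDB Local accepts any credentials, but boto3 still requires *some*.
session = boto3.Session(
    aws_access_key_id="dummy",
    aws_secret_access_key="dummy",
    region_name="us-east-1",
)
# With this change, the endpoint_url is honored alongside the session:
dynamodb = session.resource("dynamodb", endpoint_url="http://localhost:8000")
print(list(dynamodb.tables.all()))
```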

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2023-11-13 10:31:16 -08:00
Erick Friis
18acc22f29 Ollama pass kwargs as options instead of top (#13280)
Noticed while reviewing #12895 that these params really belong in `options` instead.
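
For context, the Ollama generate API expects model-tuning parameters nested under `options` rather than at the top level of the request body, roughly:

```python
import json
import urllib.request

# Tuning parameters belong under "options", not at the top level.
payload = {
    "model": "llama2",
    "prompt": "Why is the sky blue?",
    "options": {"temperature": 0.8, "num_ctx": 2048, "stop": ["\n"]},
}
request = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
```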
2023-11-13 10:28:47 -08:00
刘 方瑞
46af56dc4f Add MyScaleWithoutJSON which allows user to wrap columns into Document's Metadata (#13164)
- **Description:** Add `MyScaleWithoutJSON`, which allows users to wrap
columns into the Document's metadata
  - **Tag maintainer:** @baskaryan
2023-11-13 10:10:36 -08:00
Michael Landis
2aa13f1e10 chore: bump momento dependency version and refactor search hit usage (#13111)
**Description**

Bumps the Momento dependency to the latest version and refactors the
usage of `SearchHit` in the Momento Vector Index (MVI) vector store
integration. This change is a one-liner: we use the preferred
attribute `score` to read the query-document similarity instead of
`distance`. The latest versions of Momento clients will use this
attribute going forward.

**Dependencies**

Updated the Momento dependency to latest version.

**Tests**

💚 I re-ran the existing MVI integration tests
(`tests/integration_tests/vectorstores/test_momento_vector_index.py`)
and they pass.

**Review**
cc @baskaryan @eyurtsev
2023-11-13 09:12:21 -08:00
Junlin Zhou
4da2faba41 docs: align custom_tool document headers (#13252)
On the [Defining Custom
Tools](https://python.langchain.com/docs/modules/agents/tools/custom_tools)
page, there's a 'Subclassing the BaseTool class' paragraph under the
'Completely New Tools - String Input and Output' header. Also there's
another 'Subclassing the BaseTool' paragraph under no header, which I
think may belong to the 'Custom Structured Tools' header.

Another thing: there are a 'Using the tool decorator' and a 'Using the
decorator' paragraph, which I think belong to 'Completely New Tools -
String Input and Output' and 'Custom Structured Tools' respectively.

This PR moves those paragraphs to corresponding headers.
2023-11-13 09:03:56 -08:00
Ikko Eltociear Ashimine
700293cae9 Fix typo in timescalevector.ipynb (#13239)
enviornment -> environment
2023-11-13 09:03:07 -08:00
kYLe
cc55d2fcee Add OpenAI API v1 support for ChatAnyscale and fixed a bug with openai_api_key (#13237)
1. Add OpenAI API v1 support
2. Fixed a bug where `get_secret_value` was called on a plain `str` value
(`values["openai_api_key"]`)
2023-11-13 09:01:54 -08:00
juan-calvo-datatonic
545b76b0fd Add rag google vertex ai search template (#13294)
- **Description:** This is a template demonstrating how to utilize
Google Vertex AI Search in conjunction with ChatVertexAI()
2023-11-13 08:45:36 -08:00
Govind.S.B
9024593468 added system prompt and template fields to ollama (#13022)
**Description**
the Ollama API now supports passing a system prompt and template directly
instead of modifying the model file, but the Ollama integration in
LangChain had not been updated with this change. The update just adds these
two parameters (there are two more parameters pending; I was not sure
about their utility with respect to LangChain). A short sketch follows the
reference below.
Refer :
8713ac23a8
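
For reference, the underlying Ollama API accepts these as top-level request fields; a minimal sketch of the request body (the template string uses Ollama's Go-template syntax):

```python
payload = {
    "model": "llama2",
    "prompt": "What is water made of?",
    # The two fields this change exposes on the integration:
    "system": "You are a terse chemistry tutor.",
    "template": "{{ .System }} USER: {{ .Prompt }} ASSISTANT:",
}
```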

**Issue** : None Applicable

**Dependencies** : None Changed

**Twitter handle** : https://twitter.com/violetto96

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2023-11-13 08:45:11 -08:00
langchain-infra
f55f67055f Add dockerfile template (#13240) 2023-11-13 10:33:01 -05:00
Shaurya Rohatgi
f70aa82c84 Update README.md - Added notebook for extraction_openai_tools (#13205)
Added the Parallel Function Calling for Structured Data Extraction notebook.


---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2023-11-13 00:12:46 -08:00
Guillem Orellana Trullols
0f31cd8b49 Remove _get_kwarg_value function (#13184)
The `_get_kwarg_value` function is unnecessary; one can rely on Python's
built-in functionality to do the exact same thing (see the snippet below).

- **Description:** Removed `_get_kwarg_value`. Helps with code
readability.
  - **Twitter handle:** @Guillem_96
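
The built-in replacement is simply `dict.get` with a default:

```python
# Instead of a helper like _get_kwarg_value(kwargs, "k", 4):
def search(**kwargs):
    score_threshold = kwargs.get("score_threshold")  # defaults to None
    k = kwargs.get("k", 4)                           # explicit default
    return score_threshold, k

assert search() == (None, 4)
assert search(k=10, score_threshold=0.5) == (0.5, 10)
```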
2023-11-13 00:09:54 -08:00
SuperDa Fu
e1c020dfe1 dalle add model parameter (#13201)
- **Description:** dalle_image_generator: add a new model parameter
  - **Issue:** N/A
  - **Tag maintainer:** @hwchase17

---------

Co-authored-by: dafu <xiangbingze@wenru.wang>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Erick Friis <erickfriis@gmail.com>
2023-11-13 00:09:20 -08:00
Mario Angst
96b56a4d4f Typo fix to quickstart.mdx (#13178)
- **Description:** I fixed a very small typo in the quickstart docs
(BaeMessage -> BaseMessage)
2023-11-13 00:02:18 -08:00
Dennis de Greef
64e11592bb Improve CSV reader which can't call .strip() on NoneType (#13079)
Improve the CSV reader, which can't call `.strip()` on `NoneType` if there
are fewer cells in the row than in the header.

  - **Description:** 
I have a CSV file as follows:

```
headerA,headerB,headerC
v1A,v1B,v1C,
v2A,v2B
v3A,v3B,v3C
```
In this case, row 2 is missing a value, which results in reading a `None`
value. The `strip()` method cannot be called on `None`, hence the error.
In this PR I am making the change to only call `strip()` if the value is
not `None`.
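
A minimal illustration of the guard, relying on the fact that `csv.DictReader` fills missing cells with `None`:

```python
import csv
import io

raw = "headerA,headerB,headerC\nv1A,v1B,v1C\nv2A,v2B\n"
for row in csv.DictReader(io.StringIO(raw)):
    # Short rows yield None for missing cells; guard before stripping.
    cleaned = {k: (v.strip() if v is not None else v) for k, v in row.items()}
    print(cleaned)
```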

2023-11-12 23:51:39 -08:00
glad4enkonm
339973db47 Update ollama.py (#12895)
Removed a duplicate option.
**Description:** An issue fix: removed a duplicated HTTP `stop` option.
**Issue:** fixes #12892
**Dependencies:** none
**Tag maintainer:** @eyurtsev

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2023-11-12 23:43:59 -08:00
刘 方瑞
e89e830c55 Free knowledge base pod information update (#12813)

We updated the MyScale free knowledge base, where you can try your RAG with
36 million paragraphs from Wikipedia and 2 million paragraphs from
arXiv.

The pod has two tables
```sql
CREATE TABLE default.ChatArXiv (
    `abstract` String, 
    `id` String, 
    `vector` Array(Float32), 
    `metadata` Object('JSON'), 
    `pubdate` DateTime,
    `title` String,
    `categories` Array(String),
    `authors` Array(String), 
    `comment` String,
    `primary_category` String,
    VECTOR INDEX vec_idx vector TYPE MSTG('metric_type=Cosine'), 
    CONSTRAINT vec_len CHECK length(vector) = 768) 
ENGINE = ReplacingMergeTree ORDER BY id;

CREATE TABLE wiki.Wikipedia (
    `id` String, 
    `title` String, 
    `text` String,
    `url` String,
    `wiki_id` UInt64,
    `views` Float32,
    `paragraph_id` UInt64,
    `langs` UInt32, 
    `emb` Array(Float32), 
    VECTOR INDEX emb_idx emb TYPE MSTG('metric_type=Cosine'), 
    CONSTRAINT emb_len CHECK length(emb) = 768) 
ENGINE = ReplacingMergeTree ORDER BY id;
```

You can connect to those two tables using the credentials below (the same
as the old ones):
URL: `msc-4a9e710a.us-east-1.aws.staging.myscale.cloud`
Port: `443`
Username: `chatdata`
Password: `myscale_rocks`

It's FREE and you can also use it with 
ChatData: https://github.com/myscale/ChatData
Retrieval-QA-Benchmark:
https://github.com/myscale/Retrieval-QA-Benchmark
... and also LangChain!
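
A sketch of connecting from LangChain with the credentials above; the `MyScaleSettings` field names follow the integration at the time, and you may need a `column_map` matching the table's column names (e.g. `emb` for the vector column), so treat this as an assumption-laden example:

```python
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import MyScale, MyScaleSettings

config = MyScaleSettings(
    host="msc-4a9e710a.us-east-1.aws.staging.myscale.cloud",
    port=443,
    username="chatdata",
    password="myscale_rocks",
    database="wiki",
    table="Wikipedia",
)
# The tables use 768-dim vectors; the default HuggingFaceEmbeddings model
# also emits 768-dim vectors, which is why it is used here.
store = MyScale(HuggingFaceEmbeddings(), config=config)
docs = store.similarity_search("What is a black hole?", k=4)
```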

Request for review @baskaryan
2023-11-12 23:22:42 -08:00
Luis Valencia
c40973814d Update README.md (#8570)
- Description: updated readme.
  - Tag maintainer: @baskaryan
  - Twitter handle: @Levalencia

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2023-11-12 22:07:49 -08:00
Manuel Soria
f7f6aebc8d add code to generate embeddings 2023-11-12 18:41:41 -03:00
Manuel Soria
362e1c5233 x 2023-11-12 18:27:46 -03:00
Manuel Soria
e65b73157e adapting readme file 2023-11-12 18:27:36 -03:00
Manuel Soria
ee7c68e8b9 separating prompts as they are too long 2023-11-12 18:15:42 -03:00
Manuel Soria
597a8c084d replacing chain 2023-11-12 17:54:19 -03:00
Manuel Soria
97d4e028f4 creating template folder 2023-11-12 17:43:41 -03:00
Isak Nyberg
8f81703d76 Add new models to openai callback (#13244)
**Description:** Adding the new models to the openai callback function,
info taken from [model
announcement](https://platform.openai.com/docs/models) and
[pricing](https://openai.com/pricing)

A short description for a short PR :)
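
For context, these entries feed the token and cost accounting in `get_openai_callback`; typical usage:

```python
from langchain.callbacks import get_openai_callback
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo-1106")
with get_openai_callback() as cb:
    llm.predict("Say hi")
# Cost is only computed for models present in the pricing table.
print(cb.total_tokens, cb.total_cost)
```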
2023-11-12 12:01:19 -08:00
Bagatur
ea6dd3a550 bump 335 (#13261) 2023-11-12 11:30:25 -08:00
William FH
a837b03e55 Update langsmith version 0.63 (#13208) 2023-11-12 11:29:25 -08:00
Harrison Chase
7f1d26160d update tools (#13243) 2023-11-12 10:22:54 -08:00
Nuno Campos
8d6faf5665 Make it easier to subclass runnable binding with custom init args (#13189) 2023-11-11 09:01:17 +00:00
Peter Vandenabeele
7f1964b264 Fix BeautifulSoupTransformer: no more duplicates and correct order of tags + tests (#12596) 2023-11-11 08:56:37 +00:00
Bagatur
937d7c41f3 update stack diagram (#13213) 2023-11-10 16:50:20 -08:00
Erick Friis
9c7afa8adb Upgrade cohere embedding model to v3 (#13219)
Just updates the API docs; doesn't change the default param from 2.0 (that
could be a breaking change).
2023-11-10 16:25:58 -08:00
Matvey Arye
180657ca7a Add template for conversational rag with timescale vector (#13041)
**Description:** This is like the rag-conversation template in many
ways. What's different is:
- support for a timescale vector store.
- support for time-based filters.
- support for metadata filters.


---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2023-11-10 16:12:32 -08:00
Andrew Zhou
1a1a1a883f fleet_context docs update (#13221)
- **Description:** Changed the fleet_context documentation to use
`context.download_embeddings()` from the latest release from our
package. More details here:
https://github.com/fleet-ai/context/tree/main#api
  - **Issue:** n/a
  - **Dependencies:** n/a
  - **Tag maintainer:** @baskaryan 
  - **Twitter handle:** @andrewthezhou
2023-11-10 14:53:57 -08:00
Erick Friis
8fdf15c023 Fix Document Loader Unit Test - Docusaurus (#13228) 2023-11-10 14:52:01 -08:00
Lee
72ad448daa feat: Docusaurus Loader (#9138)
Added a Docusaurus Loader

Issue: #6353

I had to implement this for working with the Ionic documentation, and
wanted to open this up as a draft to get some guidance on building this
out further. I wasn't sure if having it be a light extension of the
SitemapLoader was in the spirit of a proper feature for the library,
but I'm grateful for the opportunities LangChain has given me and I'd
love to build this out properly for the sake of the community.

Any feedback welcome!
2023-11-10 14:21:55 -08:00
VAS
8fa960641a Update Documentation: Corrected Typos and Improved Clarity (#11725)
Docs updates

---------

Co-authored-by: Advaya <126754021+bluevayes@users.noreply.github.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2023-11-10 14:14:44 -08:00
Leonid Ganeline
e165daa0ae new course on DeepLearning.ai (#12755)
Added a new course on
[DeepLearning.ai](https://learn.deeplearning.ai/functions-tools-agents-langchain)
Also added the LangChain `Wikipedia` link; it could probably be placed in
the "More" menu.
2023-11-10 13:55:27 -08:00
Erick Friis
93ae589f1b Add mongo parent template to index (#13222) 2023-11-10 11:56:44 -08:00
Tomaz Bratanic
0dc4ab0be1 Neo4j chat message history (#13008) 2023-11-10 11:53:34 -08:00
Bagatur
bf8cf7e042 Bagatur/langserve blurb (#13217) 2023-11-10 14:05:43 -05:00
fyasla
d266b3ea4a issue #12165 mask API key in chat_models/azureml_endpoint module (#12836)
- **Description:** makes the `AzureMLChatOnlineEndpoint` object from
langchain/chat_models/azureml_endpoint.py safe to print,
without any secrets appearing in raw form in the string
representation (illustrated below).
  - **Issue:** #12165,
  - **Tag maintainer:** @eyurtsev
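
The masking comes from pydantic's `SecretStr`, whose string representation hides the value:

```python
from langchain.pydantic_v1 import SecretStr

key = SecretStr("my-real-api-key")
print(key)                     # **********
print(key.get_secret_value())  # my-real-api-key, only on explicit access
```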

---------

Co-authored-by: Faysal Bougamale <faysal.bougamale@horiba.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-11-10 14:05:19 -05:00
Anush
52f34de9b7 feat: FastEmbed embedding provider (#13109)
## Description:
This PR adds
[Qdrant/FastEmbed](https://qdrant.github.io/fastembed/) as a local
embeddings provider, along with associated tests and documentation.

**Documentation preview:**
https://langchain-git-fork-anush008-master-langchain.vercel.app/docs/integrations/text_embedding/fastembed

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2023-11-10 13:51:52 -05:00
Eugene Yurtsev
b0e8cbe0b3 Add RunnableSequence documentation (#13094)
Add RunnableSequence documentation
2023-11-10 13:44:43 -05:00
Eugene Yurtsev
869df62736 Document RunnableWithFallbacks (#13088)
Add documentation to RunnableWithFallbacks
2023-11-10 13:16:21 -05:00
Eugene Yurtsev
8313c218da Add more runnable documentation (#13083)
- Adding documentation to the runnable.
- The documentation is not yet organized in the best way for the runnable
(i.e., in terms of LCEL vs. other standard methods); will follow up with
more edits.
2023-11-10 13:14:57 -05:00
Erick Friis
a26105de8e vectara rag mq (#13214)
Description: another Vectara template for MultiQuery RAG flow
Twitter handle: @ofermend

Fixes to #13106

---------

Co-authored-by: Ofer Mendelevitch <ofer@vectara.com>
Co-authored-by: Ofer Mendelevitch <ofermend@gmail.com>
2023-11-10 10:08:45 -08:00
Bagatur
24386e0860 bump 334, exp 40 (#13211) 2023-11-10 09:43:29 -08:00
Lance Martin
d2e50b3108 Add Chroma multimodal cookbook (#12952)
Pending:
* https://github.com/chroma-core/chroma/pull/1294
* https://github.com/chroma-core/chroma/pull/1293

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-11-10 09:43:10 -08:00
The1Bill
55912868da Update toolkit.py to remove single quotes around table names (#12445)
**Description:** Removing the single quote wrapper around the table
names in the SQL agent toolkit.py file as it misleads the LLM into
querying against tables with single quotes around their names.
**Issue:** #7457 
**Dependencies:** None
**Tag maintainer:** @hwchase17 
**Twitter handle:** None
2023-11-10 06:39:15 -08:00
Nuno Campos
362a446999 Changes to root listener (#12174)
- Implement config_specs to include session_id
- Remove Runnable method and update notebook
- Add more details to notebook, eg. show input schema and config schema
before and after adding message history

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-11-10 09:53:48 +00:00
Nuno Campos
b2b94424db Update return type for Runnable.__or__ (#12880)
2023-11-10 09:52:38 +00:00
Bagatur
dd7959f4ac template readme's in docs (#13152) 2023-11-09 23:36:21 -08:00
Bagatur
86b93b5810 Add serve to quickstart (#13174) 2023-11-09 23:10:26 -08:00
Bagatur
fbf7047468 Bagatur/update agent docs (#13167) 2023-11-09 21:14:30 -08:00
Harrison Chase
0a2b1c7471 improve duck duck go tool (#13165) 2023-11-09 20:49:39 -08:00
Bagatur
850336bcf1 Update model i/o docs (#13160) 2023-11-09 20:35:55 -08:00
Jacob Lee
cf271784fa Add basic critique revise template (#12688)
@baskaryan @hwchase17

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2023-11-09 17:33:29 -08:00
Cweili
ee3ceb0fb8 Document: Fix "Biadu" typo (#12985)
Fix document "Baidu Cloud ElasticSearch VectorSearch" `Biadu` typo.
2023-11-09 17:32:38 -08:00
Chenyu Zhao
defd4b4f11 Clean up Fireworks provider documentation (#13157) 2023-11-09 16:35:05 -08:00
Bagatur
d9e493e96c fix module sidebar (#13158) 2023-11-09 16:31:45 -08:00
wemysschen
e76ff63125 fix baiducloud_vector_search document typo (#12976)
**Issue:**
fix baiducloud_vector_search document typo

---------

Co-authored-by: wemysschen <root@icoding-cwx.bcc-szzj.baidu.com>
2023-11-09 16:27:04 -08:00
Holt Skinner
fceae456b9 fix: Updates to formatting in Google Drive Retriever docs (#13015)
- Minor updates to formatting to make easier to read
2023-11-09 16:15:55 -08:00
Bagatur
c63eb9d797 LCEL nits (#13155) 2023-11-09 16:09:33 -08:00
Shinya Maeda
28cc60b347 Fix langchain.llms OpenAI completion doesn't work due to v1 client update (#13099)
This commit fixes the issue where `langchain.llms` OpenAI completion
stopped working after the v1 OpenAI client update.

- **Description:** This PR fixes the issue [AttributeError: module
'openai' has no attribute
'Completion'](https://github.com/langchain-ai/langchain/issues/12967)
similar to
8e0cb2eb84
and https://github.com/langchain-ai/langchain/pull/12969,
  - **Issue:** https://github.com/langchain-ai/langchain/issues/12967,
  - **Dependencies:** `openai` v1.x.x client,
  - **Tag maintainer:** @baskaryan,
  - **Twitter handle:** @dosuken123 


---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-11-09 15:12:19 -08:00
Bagatur
555ce600ef Bagatur/docs serve context (#13150) 2023-11-09 15:05:18 -08:00
Bagatur
ff43cd6701 OpenAI remove httpx typing (#13154)
Addresses #13124
2023-11-09 14:32:09 -08:00
Erick Friis
8ad3b255dc Pirate Speak Configurable Template (#13153) 2023-11-09 22:13:45 +00:00
Bagatur
eb51150557 update oai tool agent doc (#13147) 2023-11-09 12:37:30 -08:00
Bagatur
b298f550fe update modules sidebar (#13141) 2023-11-09 11:57:09 -08:00
Bagatur
84e65533e9 Docs: combine LCEL index and why (#13142) 2023-11-09 11:16:45 -08:00
Bagatur
1311450646 fix langsmith links (#13144) 2023-11-09 11:12:50 -08:00
Bagatur
8b2a82b5ce Bagatur/docs smith context (#13139) 2023-11-09 10:22:49 -08:00
Erick Friis
58da6e0d47 Multimodal rag traces (#13140) 2023-11-09 09:54:00 -08:00
Bagatur
150d58304d update oai cookbooks (#13135) 2023-11-09 08:04:51 -08:00
Bagatur
f04cc4b7e1 bump 333 (#13131) 2023-11-09 07:33:15 -08:00
billytrend-cohere
b346d4a455 Add message to documents (#12552)
This adds the response message as a document to the RAG retriever so
users can choose to use it. Also drops the document limit.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-11-09 07:30:48 -08:00
Harrison Chase
5f38770161 Support oai tool call (#13110)
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Nuno Campos <nuno@boringbits.io>
2023-11-09 07:29:29 -08:00
Stefano Lottini
c52725bdc5 (Astra DB/Cassandra) Minor clarification about dependencies in the demo notebook (#13118)
This PR helps developers trying the Astra DB / Cassandra vector store
quickstart notebook by making it clear what other dependencies are
required.
2023-11-09 09:19:15 -05:00
Holt Skinner
0fc8fd12bd feat: Vertex AI Search - Add Snippet Retrieval for Non-Advanced Website Data Stores (#13020)
https://cloud.google.com/generative-ai-app-builder/docs/snippets#snippets

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2023-11-08 21:52:50 -05:00
Erick Friis
3dbaaf59b2 Tool Retrieval Template (#13104)
Adds a template like
https://python.langchain.com/docs/modules/agents/how_to/custom_agent_with_tool_retrieval

Uses OpenAI functions, LCEL, and FAISS
2023-11-08 18:33:31 -08:00
Jacob Lee
76283e9625 Adds embeddings filter option to return scores in state (#12489)
CC @baskaryan @assafelovic
2023-11-08 17:50:06 -08:00
jakerachleff
18601bd4c8 Get project from langchain sdk (#13100)
## Description
We need to centralize the API we use to get the project name for our
tracers. This PR makes it so we always get this from a shared function
in the langsmith sdk.

## Dependencies
Upgraded langsmith from 0.52 to 0.62 to include the new API
`get_tracer_project`
2023-11-08 17:10:12 -08:00
Bagatur
72e12f6bcf update more azure docs (#13093) 2023-11-08 14:11:16 -08:00
Bagatur
1703f132c6 update azure embedding docs (#13091) 2023-11-08 13:39:31 -08:00
Bagatur
9fdfac22c2 bump 332 (#13089) 2023-11-08 13:23:16 -08:00
Bagatur
1f85ec34d5 bump 331rc3 exp 39 (#13086) 2023-11-08 13:00:13 -08:00
Anton Troynikov
9f077270c8 Don't pass EF to chroma (#13085)
- **Description:** 

Recently Chroma rolled out a breaking change on the way we handle
embedding functions, in order to support multi-modal collections.

This broke the way LangChain's `Chroma` objects get created, because we
were passing the EF down into the Chroma collection:
https://docs.trychroma.com/migration#migration-to-0416---november-7-2023

However, internally, we are never actually using embeddings on the
chroma collection - LangChain's `Chroma` object calls it instead. Thus
we just don't pass an `embedding_function` to Chroma itself, which fixes
the issue.
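
Schematically, the collection is created without an embedding function and LangChain supplies precomputed vectors (a simplified sketch, not the wrapper's exact code):

```python
import chromadb

client = chromadb.Client()
# No embedding_function is passed to the collection itself...
collection = client.get_or_create_collection("langchain")
# ...because the LangChain wrapper embeds texts and passes vectors explicitly.
collection.add(
    ids=["1"],
    embeddings=[[0.1, 0.2, 0.3]],  # computed by LangChain, not by Chroma
    documents=["hello world"],
)
```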
2023-11-08 12:55:35 -08:00
Erick Friis
f15f8e01cf Azure OpenAI Embeddings (#13039)
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-11-08 12:37:17 -08:00
David Peterson
37561d8986 Add Proper Import Error (#13042)
- **Description:** The Amazon Textract loader was not raising the proper
import error.
- **Issue:** Time wasted trying to figure out what to install...
(the LangChain docs don't list the dependency either)
  - **Dependencies:** N/A
  - **Tag maintainer:** @sbusso 
  - **Twitter handle:** @h9ste

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-11-08 10:29:08 -08:00
Eugene Yurtsev
06c503f672 Add RunnableRetry Documentation (#13074) 2023-11-08 18:20:18 +00:00
Bagatur
55aeff6777 oai assistant multiple actions (#13068) 2023-11-08 08:25:37 -08:00
Erick Friis
a9b70baef9 cli updates, 0.0.16 (#13034)
- confirm flags, serve detection
- 0.0.16
- always gen code
- pip bool
2023-11-08 07:47:30 -08:00
Bagatur
1f27104626 Fleet context (#13038)
cc @adrwz
2023-11-07 18:57:09 -08:00
Bagatur
d26fd6f0d1 redirect langsmith walkthrough (#13040) 2023-11-07 18:24:13 -08:00
Erick Friis
6f45532620 Upgrade docs postcss (#13031) 2023-11-07 15:50:25 -08:00
Erick Friis
54ad3cc2b8 template versions again (#13030)
- scipy was locked due to py version
- same guardrails-output-parser
- rag-redis
2023-11-07 15:15:18 -08:00
Erick Friis
506f81563f Update Deps in Experimental (#13029) 2023-11-07 15:15:09 -08:00
Erick Friis
db4b97d590 Relock Templates (#13028) 2023-11-07 15:01:49 -08:00
Stefano Lottini
4f4b020582 Add "Astra DB" vector store integration (#12966)
# Astra DB Vector store integration

- **Description:** This PR adds a `VectorStore` implementation for
DataStax Astra DB using its HTTP API
  - **Issue:** (no related issue)
- **Dependencies:** A new required dependency is `astrapy` (`>=0.5.3`),
which was added to pyproject.toml as optional, per guidelines
- **Tag maintainer:** I recently mentioned to @baskaryan this
integration was coming
  - **Twitter handle:** `@rsprrs` if you want to mention me

This PR introduces the `AstraDB` vector store class, extensive
integration test coverage, a reworking of the documentation which
conflates Cassandra and Astra DB on a single "provider" page and a new,
completely reworked vector-store example notebook (common to the
Cassandra store, since parts of the flow are shared by the two APIs). I
also took care in ensuring docs (and redirects therein) are behaving
correctly.

All style, linting, typechecks and tests pass as far as the `AstraDB`
integration is concerned.

I could build the documentation and check it all right (but ran into
trouble with the `api_docs_build` makefile target which I could not
verify: `Error: Unable to import module
'plan_and_execute.agent_executor' with error: No module named
'langchain_experimental'` was the first of many similar errors)

Thank you for a review!
Stefano

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2023-11-07 14:45:33 -08:00
Tomaz Bratanic
13bd83bd61 Add neo4j vector memory template (#12993) 2023-11-07 13:00:49 -08:00
Bagatur
5ac2fc5bb2 update stack diagram (#13021) 2023-11-07 12:59:24 -08:00
Yang, Bo
600caff03c Add Memorize tool (#11722)
- **Description:** Add `Memorize` tool
  - **Tag maintainer:** @hwchase17

This PR adds a new tool, `Memorize`, so that an agent can use it to
fine-tune itself. This tool requires `TrainableLLM`, introduced in #11721

DEMO:
6a9003d5db

![image](https://github.com/langchain-ai/langchain/assets/601530/d6f0cb45-54df-4dcf-b143-f8aefb1e76e3)
2023-11-07 12:42:10 -08:00
Bagatur
cf481c9418 bump exp 38 (#13016) 2023-11-07 11:49:23 -08:00
Bagatur
57e19989f6 Bagatur/oai assistant (#13010) 2023-11-07 11:44:53 -08:00
Erick Friis
74134dd7e1 cli pyproject updating (#12945)
`langchain app add` and `langchain app remove` will now keep the
dependencies list updated.

---------

Co-authored-by: Nuno Campos <nuno@boringbits.io>
2023-11-07 11:06:08 -08:00
Tomaz Bratanic
d9abcf1aae Neo4j conversation cypher template (#12927)
Adding custom graph memory to Cypher chain

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2023-11-07 11:05:28 -08:00
Lance Martin
2287a311cf Multi modal RAG + QA Cookbooks (#12946)
Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Vinzenz Klass <76391770+VinzenzKlass@users.noreply.github.com>
Co-authored-by: Praveen Venkateswaran <praveenv@uci.edu>
Co-authored-by: Praveen Venkateswaran <praveen.venkateswaran@ibm.com>
Co-authored-by: Kacper Łukawski <kacperlukawski@users.noreply.github.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Ofer Mendelevitch <ofermend@gmail.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-11-07 09:10:24 -08:00
Bagatur
6175dc30aa bump 331rc2 (#13006) 2023-11-07 08:52:17 -08:00
Jasan
ff87f4b4f9 Fix for rag-supabase readme (#12869)
- **Description:** Correct naming for package in README
- **Issue:** The README wasn't aligned with pyproject.toml, resulting in
users not being able to install the rag-supabase package.
  - **Tag maintainer:** @gregnr
2023-11-06 19:38:22 -08:00
Harrison Chase
99ffeb239f add ingest for mongo (#12897) 2023-11-06 19:28:22 -08:00
Ofer Mendelevitch
ce21308f29 Vectara RAG template (#12975)
- **Description:** RAG template using Vectara
  - **Twitter handle:** @ofermend
2023-11-06 19:24:00 -08:00
Erick Friis
0c81cd923e oai v1 embeddings (#12969)
Initial PR to get OpenAIEmbeddings working with the new sdk

fyi @rlancemartin 

Fixes #12943

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-11-06 18:52:33 -08:00
Bagatur
fdbb45d79e bump 331rc1 (#12965) 2023-11-06 15:36:43 -08:00
Bagatur
3bb8030a6e fix max_tokens (#12964) 2023-11-06 15:36:05 -08:00
Bagatur
a9002a82b8 bump 331rc0 (#12963) 2023-11-06 15:19:33 -08:00
Harrison Chase
c27400efeb Support multimodal messages (#11320)
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-11-06 15:14:18 -08:00
Bagatur
388f248391 add oai v1 cookbook (#12961) 2023-11-06 14:28:32 -08:00
Bagatur
4f7dff9d66 Record system fingerprint chat openai (#12960) 2023-11-06 14:25:53 -08:00
Bagatur
8e0cb2eb84 ChatOpenAI and AzureChatOpenAI openai>=1 compatible (#12948) 2023-11-06 13:24:18 -08:00
Kacper Łukawski
52d0055a91 Add support of Cohere Embed v3 (#12940)
Cohere released the new embedding API (Embed v3:
https://txt.cohere.com/introducing-embed-v3/) that treats document and
query embeddings differently. This PR updates `CohereEmbeddings` to
use them appropriately, as shown below. It still works with the old models.
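
Embed v3 distinguishes the two cases with an `input_type` argument; with the Cohere SDK directly this looks roughly like:

```python
import cohere

co = cohere.Client("YOUR_COHERE_API_KEY")  # placeholder key
doc_embeddings = co.embed(
    texts=["LangChain integrates many vector stores."],
    model="embed-english-v3.0",
    input_type="search_document",  # embedding documents for the index
)
query_embeddings = co.embed(
    texts=["Which vector stores does LangChain support?"],
    model="embed-english-v3.0",
    input_type="search_query",  # embedding the query at search time
)
```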
2023-11-06 15:06:58 -05:00
Praveen Venkateswaran
8e0dcb37d2 Add SecretStr for Symbl.ai Nebula API (#12896)
Description: This PR masks API key secrets for the Nebula model from
Symbl.ai
Issue: #12165 
Maintainer: @eyurtsev

---------

Co-authored-by: Praveen Venkateswaran <praveen.venkateswaran@ibm.com>
2023-11-06 14:13:59 -05:00
Vinzenz Klass
59d0bd2150 feat: acquire advisory lock before creating extension in pgvector (#12935)
- **Description:** Acquire an advisory lock before attempting to create
the extension on the Postgres server, preventing errors in concurrent
executions (see the sketch after this list).
  - **Issue:** #12933
  - **Dependencies:** None
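
A sketch of the idea with SQLAlchemy; the lock key and DSN are placeholders:

```python
from sqlalchemy import create_engine, text

engine = create_engine("postgresql+psycopg2://user:pass@localhost/db")  # placeholder DSN
with engine.begin() as conn:
    # Transaction-scoped advisory lock: concurrent workers serialize here,
    # so only one at a time attempts CREATE EXTENSION.
    conn.execute(text("SELECT pg_advisory_xact_lock(1573678846307946496)"))
    conn.execute(text("CREATE EXTENSION IF NOT EXISTS vector"))
# The lock is released automatically when the transaction ends.
```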

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2023-11-06 14:00:39 -05:00
Eugene Yurtsev
b376854b26 Fix for anyscale chat model api key (#12938)
* `ChatAnyscale` was missing coercion to `SecretStr` for the Anyscale API key.
* The model inherits from `ChatOpenAI`, so it should not force the OpenAI
API key to be `SecretStr` until the OpenAI model has the same change.

https://github.com/langchain-ai/langchain/issues/12841
2023-11-06 13:28:02 -05:00
Bagatur
58889149c2 fix guides link (#12941) 2023-11-06 08:13:02 -08:00
matthieudelaro
52503a367f Remove useless line of code from sql.ipynb (#12906)
This PR removes a single line of code from a notebook in the
documentation. The line defined a variable that is never used
in the code.
For further context, for reviewers, here is the online documentation:
https://python.langchain.com/docs/use_cases/qa_structured/sql#case-3-sql-agents
2023-11-06 07:59:12 -08:00
hmasdev
622bf12c2e fix regex pattern of structured output parser (#12929)
- **Description:** fix the regex pattern of
[StructuredChatOutputParser](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/agents/structured_chat/output_parser.py#L18)
and add unit tests for the code change.
- **Issue:** #12158 #12922
- **Dependencies:** None
- **Tag maintainer:** 
- **Twitter handle:** @hmdev3
- **NOTE:** This PR conflicts with #7495. After #7495 is merged, I will
update this PR.
2023-11-06 07:53:14 -08:00
wemysschen
8c02f4fbd8 add baidu cloud vectorsearch document (#12928)
**Description:** 
Add Baidu Cloud VectorSearch documentation, with the implementation of
BESVectorSearch in LangChain vectorstores.

---------

Co-authored-by: wemysschen <root@icoding-cwx.bcc-szzj.baidu.com>
2023-11-06 07:52:50 -08:00
wemysschen
8d7144e6a6 fix baiducloud directory loader import file loader (#12924)
**Issue:** 
fix baiducloud BOS directory loader imports its file loader

---------

Co-authored-by: wemysschen <root@icoding-cwx.bcc-szzj.baidu.com>
2023-11-06 07:52:31 -08:00
Alex Howard
5bb2ea51a5 docs: clean up vestigial markdown (#12907)
- **Description:** Remove text "LangChain currently does not support"
which appears to be vestigial leftovers from a previous change.
  - **Issue:** N/A
  - **Dependencies:** N/A
  - **Tag maintainer:** @baskaryan, @eyurtsev
  - **Twitter handle:** thezanke
2023-11-06 07:51:56 -08:00
Praveen Venkateswaran
1eb7d3a862 docs: update hf pipeline docs (#12908)
- **Description:** Noticed that the Hugging Face Pipeline documentation
was a bit out of date.
Updated with information about passing in a pipeline directly
(consistent with docstring) and a recent contribution of mine on adding
support for multi-gpu specifications with Accelerate in
21eeba075c
2023-11-06 07:51:31 -08:00
Christoffer Bo Petersen
37da6e546b Fix typo in e2b_data_analysis.ipynb (#12930)
Just a small typo fix
2023-11-06 07:37:30 -08:00
Kacper Łukawski
621419f71e Fix normalizing the cosine distance in Qdrant (#12934)
Qdrant was incorrectly calculating the cosine similarity and returning
`0.0` for the best match, instead of `1.0`. Internally Qdrant returns a
cosine score from `-1.0` (worst match) to `1.0` (best match), and the
corrected formula reflects that.
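
With raw scores in [-1, 1], a linear map onto the [0, 1] relevance range captures the idea (a sketch, not necessarily the exact code):

```python
def normalized_cosine(score: float) -> float:
    """Map Qdrant's cosine score from [-1, 1] onto [0, 1]."""
    return (score + 1.0) / 2.0

assert normalized_cosine(1.0) == 1.0   # best match is now 1.0, not 0.0
assert normalized_cosine(-1.0) == 0.0  # worst match
```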
2023-11-06 07:36:59 -08:00
Hech
8fe6bcc662 Fix return metadata when searching for DingoDB (#12937) 2023-11-06 07:35:36 -08:00
Jakub Novák
ada3d2cbd1 Add possibility to pass on_artifacts for a specific conversation (#12687)
Adds the possibility to pass `on_artifacts` to a conversation. It can then
be used like this:

```python
result = agent.run(
    input=message.text,
    metadata={
        "on_artifact": CALLBACK_FUNCTION
    },
)
```
2023-11-06 07:29:47 -08:00
Bagatur
0378662e1d fix langsmith link (#12939) 2023-11-06 07:17:05 -08:00
Harrison Chase
1a92d2245d Harrison/docs smith serve (#12898)
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-11-06 07:07:25 -08:00
Bagatur
53f453f01a bump 331 (#12932) 2023-11-06 05:58:12 -08:00
Priyadutt
a4d9e986fb Update csv.ipynb description (#12878)
The line removed is not required as there are no other alternative
solutions above than that.

2023-11-06 03:32:04 -08:00
Erick Friis
5000c7308e cli template gitignores (#12914)
- ap gitignore
- package
2023-11-05 22:34:45 -08:00
Harrison Chase
aba407f774 use keys not items (#12918) 2023-11-05 22:08:29 -08:00
Harrison Chase
60d025b83b mongo parent document retrieval (#12887) 2023-11-04 10:16:02 -07:00
Michael Hunger
e43b4079c8 template: use dashes instead of underscores for neo4j-cypher package and path in readme (#12827)
Minimal readme template update: underscores didn't work, dashes do.
2023-11-03 15:54:48 -07:00
wemysschen
e14aa37d59 fix bes vector store search (#12828)
**Issue:** 
fix search body in baidu cloud vectorsearch

---------

Co-authored-by: wemysschen <root@icoding-cwx.bcc-szzj.baidu.com>
2023-11-03 15:39:19 -07:00
standby24x7
f04e4df7f9 cookbook: Fix typo in wikibase_agent.ipynb (#12839)
This patch fixes a spelling typo in a message
within wikibase_agent.ipynb.

Signed-off-by: Masanari Iida <standby24x7@gmail.com>
2023-11-03 14:57:37 -07:00
Kacper Łukawski
66c41c0dbf Add template for self-query-qdrant (#12795)
This PR adds a self-querying template using Qdrant as a vector store.
The template uses an artificial dataset and was implemented in a way
that simplifies passing different components and choosing LLM and
embedding providers.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2023-11-03 13:37:29 -07:00
Daniel Chalef
f41f4c5e37 zep/rag conversation zep template (#12762)
LangServe template for a RAG Conversation App using Zep.

 @baskaryan, @eyurtsev

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2023-11-03 13:34:44 -07:00
Lance Martin
ea1ab391d4 Open Clip multimodal embeddings (#12754) 2023-11-03 13:33:36 -07:00
Bagatur
ebee616822 bump 330 (#12853) 2023-11-03 13:26:41 -07:00
Tomaz Bratanic
0dbdb8498a Neo4j Advanced RAG template (#12794)
Todo:

- [x] Docs
2023-11-03 13:22:55 -07:00
Harrison Chase
83cee2cec4 Template Readmes and Standardization (#12819)
Co-authored-by: Erick Friis <erick@langchain.dev>
2023-11-03 13:15:29 -07:00
Erick Friis
6c237716c4 Update readmes with new cli install (#12847)
Old command still works. Just simplifying.

Merge after releasing CLI 0.0.15
2023-11-03 12:10:32 -07:00
Erick Friis
7db49d3842 Confirm sys.path includes current dir for app serve (#12851)
- Make sure sys.path is set properly for langchain app serve
- bump
2023-11-03 11:37:20 -07:00
558 changed files with 71908 additions and 18432 deletions

````
@@ -17,13 +17,16 @@ For more info, check out the [GitHub documentation](https://docs.github.com/en/f
## VS Code Dev Containers
[![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)
Note: If you click this link you will open the main repo and not your local cloned repo, you can use this link and replace with your username and cloned repo name:
Note: If you click the link above you will open the main repo (langchain-ai/langchain) and not your local cloned repo. This is fine if you only want to run and test the library, but if you want to contribute you can use the link below and replace with your username and cloned repo name:
```
https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/<yourusername>/<yourclonedreponame>
```
Then you will have a local cloned repo where you can contribute and then create pull requests.
If you already have VS Code and Docker installed, you can use the button above to get started. This will cause VS Code to automatically install the Dev Containers extension if needed, clone the source code into a container volume, and spin up a dev container for use.
You can also follow these steps to open this repo in a container using the VS Code Dev Containers extension:
Alternatively you can also follow these steps to open this repo in a container using the VS Code Dev Containers extension:
1. If this is your first time using a development container, please ensure your system meets the pre-reqs (i.e. have Docker installed) in the [getting started steps](https://aka.ms/vscode-remote/containers/getting-started).
````

.gitignore (vendored, 1 change):

```diff
@@ -178,3 +178,4 @@ docs/docs/build
 docs/docs/node_modules
 docs/docs/yarn.lock
 _dist
+docs/docs/templates
```

File diff suppressed because one or more lines are too long


```diff
@@ -20,6 +20,7 @@ Notebook | Description
 [databricks_sql_db.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/databricks_sql_db.ipynb) | Connect to databricks runtimes and databricks sql.
 [deeplake_semantic_search_over_...](https://github.com/langchain-ai/langchain/tree/master/cookbook/deeplake_semantic_search_over_chat.ipynb) | Perform semantic search and question-answering over a group chat using activeloop's deep lake with gpt4.
 [elasticsearch_db_qa.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/elasticsearch_db_qa.ipynb) | Interact with elasticsearch analytics databases in natural language and build search queries via the elasticsearch dsl API.
+[extraction_openai_tools.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/extraction_openai_tools.ipynb) | Structured Data Extraction with OpenAI Tools
 [forward_looking_retrieval_augm...](https://github.com/langchain-ai/langchain/tree/master/cookbook/forward_looking_retrieval_augmented_generation.ipynb) | Implement the forward-looking active retrieval augmented generation (flare) method, which generates answers to questions, identifies uncertain tokens, generates hypothetical questions based on these tokens, and retrieves relevant documents to continue generating the answer.
 [generative_agents_interactive_...](https://github.com/langchain-ai/langchain/tree/master/cookbook/generative_agents_interactive_simulacra_of_human_behavior.ipynb) | Implement a generative agent that simulates human behavior, based on a research paper, using a time-weighted memory object backed by a langchain retriever.
 [gymnasium_agent_simulation.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/gymnasium_agent_simulation.ipynb) | Create a simple agent-environment interaction loop in simulated environments like text-based games with gymnasium.
@@ -38,6 +39,7 @@ Notebook | Description
 [multiagent_bidding.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/multiagent_bidding.ipynb) | Implement a multi-agent simulation where agents bid to speak, with the highest bidder speaking next, demonstrated through a fictitious presidential debate example.
 [myscale_vector_sql.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/myscale_vector_sql.ipynb) | Access and interact with the myscale integrated vector database, which can enhance the performance of language model (llm) applications.
 [openai_functions_retrieval_qa....](https://github.com/langchain-ai/langchain/tree/master/cookbook/openai_functions_retrieval_qa.ipynb) | Structure response output in a question-answering system by incorporating openai functions into a retrieval pipeline.
+[openai_v1_cookbook.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/openai_v1_cookbook.ipynb) | Explore new functionality released alongside the V1 release of the OpenAI Python library.
 [petting_zoo.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/petting_zoo.ipynb) | Create multi-agent simulations with simulated environments using the petting zoo library.
 [plan_and_execute_agent.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/plan_and_execute_agent.ipynb) | Create plan-and-execute agents that accomplish objectives by planning tasks with a language model (llm) and executing them with a separate agent.
 [press_releases.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/press_releases.ipynb) | Retrieve and query company press release data powered by [Kay.ai](https://kay.ai).
```

File diff suppressed because one or more lines are too long


New notebook (213 lines), rendered from its cells:

# Extraction with OpenAI Tools

Performing extraction has never been easier! OpenAI's tool calling ability is the perfect thing to use as it allows for extracting multiple different elements from text that are different types.

Models after 1106 use tools and support "parallel function calling" which makes this super easy.

```python
from langchain.chat_models import ChatOpenAI
from langchain.pydantic_v1 import BaseModel
from typing import Optional, List
from langchain.chains.openai_tools import create_extraction_chain_pydantic
```

```python
# Make sure to use a recent model that supports tools
model = ChatOpenAI(model="gpt-3.5-turbo-1106")
```

```python
# Pydantic is an easy way to define a schema
class Person(BaseModel):
    """Information about people to extract."""

    name: str
    age: Optional[int] = None
```

```python
chain = create_extraction_chain_pydantic(Person, model)
```

```python
chain.invoke({"input": "jane is 2 and bob is 3"})
```

```
[Person(name='jane', age=2), Person(name='bob', age=3)]
```

```python
# Let's define another element
class Class(BaseModel):
    """Information about classes to extract."""

    teacher: str
    students: List[str]
```

```python
chain = create_extraction_chain_pydantic([Person, Class], model)
```

```python
chain.invoke({"input": "jane is 2 and bob is 3 and they are in Mrs Sampson's class"})
```

```
[Person(name='jane', age=2),
 Person(name='bob', age=3),
 Class(teacher='Mrs Sampson', students=['jane', 'bob'])]
```

## Under the hood

Under the hood, this is a simple chain:

```python
from typing import Union, List, Type, Optional

from langchain.output_parsers.openai_tools import PydanticToolsParser
from langchain.utils.openai_functions import convert_pydantic_to_openai_tool
from langchain.schema.runnable import Runnable
from langchain.pydantic_v1 import BaseModel
from langchain.prompts import ChatPromptTemplate
from langchain.schema.messages import SystemMessage
from langchain.schema.language_model import BaseLanguageModel

_EXTRACTION_TEMPLATE = """Extract and save the relevant entities mentioned \
in the following passage together with their properties.

If a property is not present and is not required in the function parameters, do not include it in the output."""  # noqa: E501


def create_extraction_chain_pydantic(
    pydantic_schemas: Union[List[Type[BaseModel]], Type[BaseModel]],
    llm: BaseLanguageModel,
    system_message: str = _EXTRACTION_TEMPLATE,
) -> Runnable:
    if not isinstance(pydantic_schemas, list):
        pydantic_schemas = [pydantic_schemas]
    prompt = ChatPromptTemplate.from_messages([
        ("system", system_message),
        ("user", "{input}")
    ])
    tools = [convert_pydantic_to_openai_tool(p) for p in pydantic_schemas]
    model = llm.bind(tools=tools)
    chain = prompt | model | PydanticToolsParser(tools=pydantic_schemas)
    return chain
```

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@@ -0,0 +1,506 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "f970f757-ec76-4bf0-90cd-a2fb68b945e3",
"metadata": {},
"source": [
"# Exploring OpenAI V1 functionality\n",
"\n",
"On 11.06.23 OpenAI released a number of new features, and along with it bumped their Python SDK to 1.0.0. This notebook shows off the new features and how to use them with LangChain."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ee897729-263a-4073-898f-bb4cf01ed829",
"metadata": {},
"outputs": [],
"source": [
"# need openai>=1.1.0, langchain>=0.0.333, langchain-experimental>=0.0.39\n",
"!pip install -U openai langchain langchain-experimental"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "c3e067ce-7a43-47a7-bc89-41f1de4cf136",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.schema.messages import HumanMessage, SystemMessage"
]
},
{
"cell_type": "markdown",
"id": "fa7e7e95-90a1-4f73-98fe-10c4b4e0951b",
"metadata": {},
"source": [
"## [Vision](https://platform.openai.com/docs/guides/vision)\n",
"\n",
"OpenAI released multi-modal models, which can take a sequence of text and images as input."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "1c8c3965-d3c9-4186-b5f3-5e67855ef916",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='The image appears to be a diagram representing the architecture or components of a software system or framework related to language processing, possibly named LangChain or associated with a project or product called LangChain, based on the prominent appearance of that term. The diagram is organized into several layers or aspects, each containing various elements or modules:\\n\\n1. **Protocol**: This may be the foundational layer, which includes \"LCEL\" and terms like parallelization, fallbacks, tracing, batching, streaming, async, and composition. These seem related to communication and execution protocols for the system.\\n\\n2. **Integrations Components**: This layer includes \"Model I/O\" with elements such as the model, output parser, prompt, and example selector. It also has a \"Retrieval\" section with a document loader, retriever, embedding model, vector store, and text splitter. Lastly, there\\'s an \"Agent Tooling\" section. These components likely deal with the interaction with external data, models, and tools.\\n\\n3. **Application**: The application layer features \"LangChain\" with chains, agents, agent executors, and common application logic. This suggests that the system uses a modular approach with chains and agents to process language tasks.\\n\\n4. **Deployment**: This contains \"Lang')"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chat = ChatOpenAI(model=\"gpt-4-vision-preview\", max_tokens=256)\n",
"chat.invoke(\n",
" [\n",
" HumanMessage(\n",
" content=[\n",
" {\"type\": \"text\", \"text\": \"What is this image showing\"},\n",
" {\n",
" \"type\": \"image_url\",\n",
" \"image_url\": {\n",
" \"url\": \"https://raw.githubusercontent.com/langchain-ai/langchain/master/docs/static/img/langchain_stack.png\",\n",
" \"detail\": \"auto\",\n",
" },\n",
" },\n",
" ]\n",
" )\n",
" ]\n",
")"
]
},
{
"cell_type": "markdown",
"id": "210f8248-fcf3-4052-a4a3-0684e08f8785",
"metadata": {},
"source": [
"## [OpenAI assistants](https://platform.openai.com/docs/assistants/overview)\n",
"\n",
"> The Assistants API allows you to build AI assistants within your own applications. An Assistant has instructions and can leverage models, tools, and knowledge to respond to user queries. The Assistants API currently supports three types of tools: Code Interpreter, Retrieval, and Function calling\n",
"\n",
"\n",
"You can interact with OpenAI Assistants using OpenAI tools or custom tools. When using exclusively OpenAI tools, you can just invoke the assistant directly and get final answers. When using custom tools, you can run the assistant and tool execution loop using the built-in AgentExecutor or easily write your own executor.\n",
"\n",
"Below we show the different ways to interact with Assistants. As a simple example, let's build a math tutor that can write and run code."
]
},
{
"cell_type": "markdown",
"id": "318da28d-4cec-42ab-ae3e-76d95bb34fa5",
"metadata": {},
"source": [
"### Using only OpenAI tools"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "a9064bbe-d9f7-4a29-a7b3-73933b3197e7",
"metadata": {},
"outputs": [],
"source": [
"from langchain_experimental.openai_assistant import OpenAIAssistantRunnable"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "7a20a008-49ac-46d2-aa26-b270118af5ea",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[ThreadMessage(id='msg_g9OJv0rpPgnc3mHmocFv7OVd', assistant_id='asst_hTwZeNMMphxzSOqJ01uBMsJI', content=[MessageContentText(text=Text(annotations=[], value='The result of \\\\(10 - 4^{2.7}\\\\) is approximately \\\\(-32.224\\\\).'), type='text')], created_at=1699460600, file_ids=[], metadata={}, object='thread.message', role='assistant', run_id='run_nBIT7SiAwtUfSCTrQNSPLOfe', thread_id='thread_14n4GgXwxgNL0s30WJW5F6p0')]"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"interpreter_assistant = OpenAIAssistantRunnable.create_assistant(\n",
" name=\"langchain assistant\",\n",
" instructions=\"You are a personal math tutor. Write and run code to answer math questions.\",\n",
" tools=[{\"type\": \"code_interpreter\"}],\n",
" model=\"gpt-4-1106-preview\",\n",
")\n",
"output = interpreter_assistant.invoke({\"content\": \"What's 10 - 4 raised to the 2.7\"})\n",
"output"
]
},
{
"cell_type": "markdown",
"id": "a8ddd181-ac63-4ab6-a40d-a236120379c1",
"metadata": {},
"source": [
"### As a LangChain agent with arbitrary tools\n",
"\n",
"Now let's recreate this functionality using our own tools. For this example we'll use the [E2B sandbox runtime tool](https://e2b.dev/docs?ref=landing-page-get-started)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ee4cc355-f2d6-4c51-bcf7-f502868357d3",
"metadata": {},
"outputs": [],
"source": [
"!pip install e2b duckduckgo-search"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "48681ac7-b267-48d4-972c-8a7df8393a21",
"metadata": {},
"outputs": [],
"source": [
"from langchain.tools import E2BDataAnalysisTool, DuckDuckGoSearchRun\n",
"\n",
"tools = [E2BDataAnalysisTool(api_key=\"...\"), DuckDuckGoSearchRun()]"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "1c01dd79-dd3e-4509-a2e2-009a7f99f16a",
"metadata": {},
"outputs": [],
"source": [
"agent = OpenAIAssistantRunnable.create_assistant(\n",
" name=\"langchain assistant e2b tool\",\n",
" instructions=\"You are a personal math tutor. Write and run code to answer math questions. You can also search the internet.\",\n",
" tools=tools,\n",
" model=\"gpt-4-1106-preview\",\n",
" as_agent=True,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "1ac71d8b-4b4b-4f98-b826-6b3c57a34166",
"metadata": {},
"source": [
"#### Using AgentExecutor"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "1f137f94-801f-4766-9ff5-2de9df5e8079",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'content': \"What's the weather in SF today divided by 2.7\",\n",
" 'output': \"The weather in San Francisco today is reported to have temperatures as high as 66 °F. To get the temperature divided by 2.7, we will calculate that:\\n\\n66 °F / 2.7 = 24.44 °F\\n\\nSo, when the high temperature of 66 °F is divided by 2.7, the result is approximately 24.44 °F. Please note that this doesn't have a meteorological meaning; it's purely a mathematical operation based on the given temperature.\"}"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.agents import AgentExecutor\n",
"\n",
"agent_executor = AgentExecutor(agent=agent, tools=tools)\n",
"agent_executor.invoke({\"content\": \"What's the weather in SF today divided by 2.7\"})"
]
},
{
"cell_type": "markdown",
"id": "2d0a0b1d-c1b3-4b50-9dce-1189b51a6206",
"metadata": {},
"source": [
"#### Custom execution"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "c0475fa7-b6c1-4331-b8e2-55407466c724",
"metadata": {},
"outputs": [],
"source": [
"agent = OpenAIAssistantRunnable.create_assistant(\n",
" name=\"langchain assistant e2b tool\",\n",
" instructions=\"You are a personal math tutor. Write and run code to answer math questions.\",\n",
" tools=tools,\n",
" model=\"gpt-4-1106-preview\",\n",
" as_agent=True,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "b76cb669-6aba-4827-868f-00aa960026f2",
"metadata": {},
"outputs": [],
"source": [
"from langchain.schema.agent import AgentFinish\n",
"\n",
"\n",
"def execute_agent(agent, tools, input):\n",
" tool_map = {tool.name: tool for tool in tools}\n",
" response = agent.invoke(input)\n",
" while not isinstance(response, AgentFinish):\n",
" tool_outputs = []\n",
" for action in response:\n",
" tool_output = tool_map[action.tool].invoke(action.tool_input)\n",
" print(action.tool, action.tool_input, tool_output, end=\"\\n\\n\")\n",
" tool_outputs.append(\n",
" {\"output\": tool_output, \"tool_call_id\": action.tool_call_id}\n",
" )\n",
" response = agent.invoke(\n",
" {\n",
" \"tool_outputs\": tool_outputs,\n",
" \"run_id\": action.run_id,\n",
" \"thread_id\": action.thread_id,\n",
" }\n",
" )\n",
"\n",
" return response"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "7946116a-b82f-492e-835e-ca958a8949a5",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"e2b_data_analysis {'python_code': 'print(10 - 4 ** 2.7)'} {\"stdout\": \"-32.22425314473263\", \"stderr\": \"\", \"artifacts\": []}\n",
"\n",
"\\( 10 - 4^{2.7} \\) is approximately \\(-32.22425314473263\\).\n"
]
}
],
"source": [
"response = execute_agent(agent, tools, {\"content\": \"What's 10 - 4 raised to the 2.7\"})\n",
"print(response.return_values[\"output\"])"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "f2744a56-9f4f-4899-827a-fa55821c318c",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"e2b_data_analysis {'python_code': 'result = 10 - 4 ** 2.7\\nprint(result + 17.241)'} {\"stdout\": \"-14.983253144732629\", \"stderr\": \"\", \"artifacts\": []}\n",
"\n",
"When you add \\( 17.241 \\) to \\( 10 - 4^{2.7} \\), the result is approximately \\( -14.98325314473263 \\).\n"
]
}
],
"source": [
"next_response = execute_agent(\n",
" agent, tools, {\"content\": \"now add 17.241\", \"thread_id\": response.thread_id}\n",
")\n",
"print(next_response.return_values[\"output\"])"
]
},
{
"cell_type": "markdown",
"id": "71c34763-d1e7-4b9a-a9d7-3e4cc0dfc2c4",
"metadata": {},
"source": [
"## [JSON mode](https://platform.openai.com/docs/guides/text-generation/json-mode)\n",
"\n",
"Constrain the model to only generate valid JSON. Note that you must include a system message with instructions to use JSON for this mode to work.\n",
"\n",
"Only works with certain models. "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "db6072c4-f3f3-415d-872b-71ea9f3c02bb",
"metadata": {},
"outputs": [],
"source": [
"chat = ChatOpenAI(model=\"gpt-3.5-turbo-1106\").bind(\n",
" response_format={\"type\": \"json_object\"}\n",
")\n",
"\n",
"output = chat.invoke(\n",
" [\n",
" SystemMessage(\n",
" content=\"Extract the 'name' and 'origin' of any companies mentioned in the following statement. Return a JSON list.\"\n",
" ),\n",
" HumanMessage(\n",
" content=\"Google was founded in the USA, while Deepmind was founded in the UK\"\n",
" ),\n",
" ]\n",
")\n",
"print(output.content)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "08e00ccf-b991-4249-846b-9500a0ccbfa0",
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"\n",
"json.loads(output.content)"
]
},
{
"cell_type": "markdown",
"id": "aa9a94d9-4319-4ab7-a979-c475ce6b5f50",
"metadata": {},
"source": [
"## [System fingerprint](https://platform.openai.com/docs/guides/text-generation/reproducible-outputs)\n",
"\n",
"OpenAI sometimes changes model configurations in a way that impacts outputs. Whenever this happens, the system_fingerprint associated with a generation will change."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1281883c-bf8f-4665-89cd-4f33ccde69ab",
"metadata": {},
"outputs": [],
"source": [
"chat = ChatOpenAI(model=\"gpt-3.5-turbo-1106\")\n",
"output = chat.generate(\n",
" [\n",
" [\n",
" SystemMessage(\n",
" content=\"Extract the 'name' and 'origin' of any companies mentioned in the following statement. Return a JSON list.\"\n",
" ),\n",
" HumanMessage(\n",
" content=\"Google was founded in the USA, while Deepmind was founded in the UK\"\n",
" ),\n",
" ]\n",
" ]\n",
")\n",
"print(output.llm_output)"
]
},
{
"cell_type": "markdown",
"id": "aa6565be-985d-4127-848e-c3bca9d7b434",
"metadata": {},
"source": [
"## Breaking changes to Azure classes\n",
"\n",
"OpenAI V1 rewrote their clients and separated Azure and OpenAI clients. This has led to some changes in LangChain interfaces when using OpenAI V1.\n",
"\n",
"BREAKING CHANGES:\n",
"- To use Azure embeddings with OpenAI V1, you'll need to use the new `AzureOpenAIEmbeddings` instead of the existing `OpenAIEmbeddings`. `OpenAIEmbeddings` continue to work when using Azure with `openai<1`.\n",
"```python\n",
"from langchain.embeddings import AzureOpenAIEmbeddings\n",
"```\n",
"\n",
"\n",
"RECOMMENDED CHANGES:\n",
"- When using AzureChatOpenAI, if passing in an Azure endpoint (eg https://example-resource.azure.openai.com/) this should be specified via the `azure_endpoint` parameter or the `AZURE_OPENAI_ENDPOINT`. We're maintaining backwards compatibility for now with specifying this via `openai_api_base`/`base_url` or env var `OPENAI_API_BASE` but this shouldn't be relied upon.\n",
"- When using Azure chat or embedding models, pass in API keys either via `openai_api_key` parameter or `AZURE_OPENAI_API_KEY` parameter. We're maintaining backwards compatibility for now with specifying this via `OPENAI_API_KEY` but this shouldn't be relied upon."
]
},
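{
"cell_type": "markdown",
"id": "azure-v1-config-sketch",
"metadata": {},
"source": [
"A minimal sketch of the recommended configuration (the deployment name and API version below are placeholders, not values prescribed by this release):\n",
"\n",
"```python\n",
"from langchain.chat_models import AzureChatOpenAI\n",
"\n",
"model = AzureChatOpenAI(\n",
"    azure_endpoint=\"https://example-resource.azure.openai.com/\",  # or set AZURE_OPENAI_ENDPOINT\n",
"    azure_deployment=\"my-deployment\",  # placeholder deployment name\n",
"    openai_api_version=\"2023-05-15\",  # placeholder API version\n",
"    openai_api_key=\"...\",  # or set AZURE_OPENAI_API_KEY\n",
")\n",
"```"
]
},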
{
"cell_type": "markdown",
"id": "49944887-3972-497e-8da2-6d32d44345a9",
"metadata": {},
"source": [
"## Tools\n",
"\n",
"Use tools for parallel function calling."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "916292d8-0f89-40a6-af1c-5a1122327de8",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[GetCurrentWeather(location='New York, NY', unit='fahrenheit'),\n",
" GetCurrentWeather(location='Los Angeles, CA', unit='fahrenheit'),\n",
" GetCurrentWeather(location='San Francisco, CA', unit='fahrenheit')]"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from typing import Literal\n",
"\n",
"from langchain.output_parsers.openai_tools import PydanticToolsParser\n",
"from langchain.utils.openai_functions import convert_pydantic_to_openai_tool\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"class GetCurrentWeather(BaseModel):\n",
" \"\"\"Get the current weather in a location.\"\"\"\n",
"\n",
" location: str = Field(description=\"The city and state, e.g. San Francisco, CA\")\n",
" unit: Literal[\"celsius\", \"fahrenheit\"] = Field(\n",
" default=\"fahrenheit\", description=\"The temperature unit, default to fahrenheit\"\n",
" )\n",
"\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [(\"system\", \"You are a helpful assistant\"), (\"user\", \"{input}\")]\n",
")\n",
"model = ChatOpenAI(model=\"gpt-3.5-turbo-1106\").bind(\n",
" tools=[convert_pydantic_to_openai_tool(GetCurrentWeather)]\n",
")\n",
"chain = prompt | model | PydanticToolsParser(tools=[GetCurrentWeather])\n",
"\n",
"chain.invoke({\"input\": \"what's the weather in NYC, LA, and SF\"})"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv",
"language": "python",
"name": "poetry-venv"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -0,0 +1,168 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# RAG based on Qianfan and BES"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"This notebook is an implementation of Retrieval augmented generation (RAG) using Baidu Qianfan Platform combined with Baidu ElasricSearch, where the original data is located on BOS.\n",
"## Baidu Qianfan\n",
"Baidu AI Cloud Qianfan Platform is a one-stop large model development and service operation platform for enterprise developers. Qianfan not only provides including the model of Wenxin Yiyan (ERNIE-Bot) and the third-party open-source models, but also provides various AI development tools and the whole set of development environment, which facilitates customers to use and develop large model applications easily.\n",
"\n",
"## Baidu ElasticSearch\n",
"[Baidu Cloud VectorSearch](https://cloud.baidu.com/doc/BES/index.html?from=productToDoc) is a fully managed, enterprise-level distributed search and analysis service which is 100% compatible to open source. Baidu Cloud VectorSearch provides low-cost, high-performance, and reliable retrieval and analysis platform level product services for structured/unstructured data. As a vector database , it supports multiple index types and similarity distance methods. "
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Installation and Setup\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#!pip install qianfan\n",
"#!pip install bce-python-sdk\n",
"#!pip install elasticsearch == 7.11.0"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Imports"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from baidubce.bce_client_configuration import BceClientConfiguration\n",
"from baidubce.auth.bce_credentials import BceCredentials\n",
"from langchain.document_loaders.baiducloud_bos_directory import BaiduBOSDirectoryLoader\n",
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"from langchain.embeddings.huggingface import HuggingFaceEmbeddings\n",
"from langchain.vectorstores import BESVectorStore\n",
"from langchain.llms.baidu_qianfan_endpoint import QianfanLLMEndpoint"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Document loading"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"bos_host = \"your bos eddpoint\"\n",
"access_key_id = \"your bos access ak\"\n",
"secret_access_key = \"your bos access sk\"\n",
"\n",
"# create BceClientConfiguration\n",
"config = BceClientConfiguration(credentials=BceCredentials(access_key_id, secret_access_key), endpoint = bos_host)\n",
"\n",
"loader = BaiduBOSDirectoryLoader(conf=config, bucket=\"llm-test\", prefix=\"llm/\")\n",
"documents = loader.load()\n",
"\n",
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=200, chunk_overlap=0)\n",
"split_docs = text_splitter.split_documents(documents)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Embedding and VectorStore"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"embeddings = HuggingFaceEmbeddings(model_name=\"shibing624/text2vec-base-chinese\")\n",
"embeddings.client = sentence_transformers.SentenceTransformer(embeddings.model_name)\n",
"\n",
"db = BESVectorStore.from_documents(\n",
" documents=split_docs, embedding=embeddings, bes_url=\"your bes url\", index_name='test-index', vector_query_field='vector'\n",
" )\n",
"\n",
"db.client.indices.refresh(index='test-index')\n",
"retriever = db.as_retriever()"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## QA Retriever"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"llm = QianfanLLMEndpoint(model=\"ERNIE-Bot\", qianfan_ak='your qianfan ak', qianfan_sk='your qianfan sk', streaming=True)\n",
"qa = RetrievalQA.from_chain_type(llm=llm, chain_type=\"refine\", retriever=retriever, return_source_documents=True)\n",
"\n",
"query = \"什么是张量?\"\n",
"print(qa.run(query))"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
 张量Tensor">
"> A tensor (张量) is a mathematical concept used to represent multi-dimensional data. It is an array that can hold multiple values and can be a scalar, a vector, a matrix, and so on. In deep learning and artificial intelligence, tensors are commonly used to represent the inputs, outputs, and weights of neural networks."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"name": "python",
"version": "3.9.17"
},
"orig_nbformat": 4,
"vscode": {
"interpreter": {
"hash": "aee8b7b246df8f9039afb4144a1f6fd8d2ca17a180786b69acc140d282b71a49"
}
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -35,7 +35,7 @@
"tags": []
},
"source": [
"### API keys and other secrats\n",
"### API keys and other secrets\n",
"\n",
"We use an `.ini` file, like this: \n",
"```\n",


@@ -15,7 +15,7 @@ poetry run python scripts/model_feat_table.py
poetry run nbdoc_build --srcdir docs
cp ../cookbook/README.md src/pages/cookbook.mdx
cp ../.github/CONTRIBUTING.md docs/contributing.md
wget https://raw.githubusercontent.com/langchain-ai/langserve/main/README.md -O docs/guides/deployments/langserve.md
wget https://raw.githubusercontent.com/langchain-ai/langserve/main/README.md -O docs/langserve.md
poetry run python scripts/generate_api_reference_links.py
yarn install
yarn start

File diff suppressed because one or more lines are too long


@@ -1,15 +1,18 @@
# Tutorials
Below are links to tutorials and courses on LangChain. For written guides on common use cases for LangChain, check out the [use cases guides](/docs/use_cases/qa_structured/sql).
Below are links to tutorials and courses on LangChain. For written guides on common use cases for LangChain, check out the [use cases guides](/docs/use_cases).
⛓ icon marks a new addition [last update 2023-09-21]
---------------------
### [LangChain on Wikipedia](https://en.wikipedia.org/wiki/LangChain)
### DeepLearning.AI courses
by [Harrison Chase](https://github.com/hwchase17) and [Andrew Ng](https://en.wikipedia.org/wiki/Andrew_Ng)
by [Harrison Chase](https://en.wikipedia.org/wiki/LangChain) and [Andrew Ng](https://en.wikipedia.org/wiki/Andrew_Ng)
- [LangChain for LLM Application Development](https://learn.deeplearning.ai/langchain)
- [LangChain Chat with Your Data](https://learn.deeplearning.ai/langchain-chat-with-your-data)
- ⛓ [Functions, Tools and Agents with LangChain](https://learn.deeplearning.ai/functions-tools-agents-langchain)
### Handbook
[LangChain AI Handbook](https://www.pinecone.io/learn/langchain/) By **James Briggs** and **Francisco Ingham**


@@ -12,6 +12,19 @@
"Suppose we have a simple prompt + model sequence:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "950297ed-2d67-4091-8ea7-1d412d259d04",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain.schema import StrOutputParser\n",
"from langchain.schema.runnable import RunnablePassthrough"
]
},
{
"cell_type": "code",
"execution_count": 11,
@@ -37,11 +50,6 @@
}
],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain.schema import StrOutputParser\n",
"from langchain.schema.runnable import RunnablePassthrough\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
@@ -105,31 +113,29 @@
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": 3,
"id": "f66a0fe4-fde0-4706-8863-d60253f211c7",
"metadata": {},
"outputs": [],
"source": [
"functions = [\n",
" {\n",
" \"name\": \"solver\",\n",
" \"description\": \"Formulates and solves an equation\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"equation\": {\n",
" \"type\": \"string\",\n",
" \"description\": \"The algebraic expression of the equation\",\n",
" },\n",
" \"solution\": {\n",
" \"type\": \"string\",\n",
" \"description\": \"The solution to the equation\",\n",
" },\n",
"function = {\n",
" \"name\": \"solver\",\n",
" \"description\": \"Formulates and solves an equation\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"equation\": {\n",
" \"type\": \"string\",\n",
" \"description\": \"The algebraic expression of the equation\",\n",
" },\n",
" \"solution\": {\n",
" \"type\": \"string\",\n",
" \"description\": \"The solution to the equation\",\n",
" },\n",
" \"required\": [\"equation\", \"solution\"],\n",
" },\n",
" }\n",
"]"
" \"required\": [\"equation\", \"solution\"],\n",
" },\n",
"}"
]
},
{
@@ -161,19 +167,70 @@
" ]\n",
")\n",
"model = ChatOpenAI(model=\"gpt-4\", temperature=0).bind(\n",
" function_call={\"name\": \"solver\"}, functions=functions\n",
" function_call={\"name\": \"solver\"}, functions=[function]\n",
")\n",
"runnable = {\"equation_statement\": RunnablePassthrough()} | prompt | model\n",
"runnable.invoke(\"x raised to the third plus seven equals 12\")"
]
},
{
"cell_type": "markdown",
"id": "f07d7528-9269-4d6f-b12e-3669592a9e03",
"metadata": {},
"source": [
"## Attaching OpenAI tools"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 5,
"id": "2cdeeb4c-0c1f-43da-bd58-4f591d9e0671",
"metadata": {},
"outputs": [],
"source": []
"source": [
"tools = [\n",
" {\n",
" \"type\": \"function\",\n",
" \"function\": {\n",
" \"name\": \"get_current_weather\",\n",
" \"description\": \"Get the current weather in a given location\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"location\": {\n",
" \"type\": \"string\",\n",
" \"description\": \"The city and state, e.g. San Francisco, CA\",\n",
" },\n",
" \"unit\": {\"type\": \"string\", \"enum\": [\"celsius\", \"fahrenheit\"]},\n",
" },\n",
" \"required\": [\"location\"],\n",
" },\n",
" },\n",
" }\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "2b65beab-48bb-46ff-a5a4-ef8ac95a513c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_zHN0ZHwrxM7nZDdqTp6dkPko', 'function': {'arguments': '{\"location\": \"San Francisco, CA\", \"unit\": \"celsius\"}', 'name': 'get_current_weather'}, 'type': 'function'}, {'id': 'call_aqdMm9HBSlFW9c9rqxTa7eQv', 'function': {'arguments': '{\"location\": \"New York, NY\", \"unit\": \"celsius\"}', 'name': 'get_current_weather'}, 'type': 'function'}, {'id': 'call_cx8E567zcLzYV2WSWVgO63f1', 'function': {'arguments': '{\"location\": \"Los Angeles, CA\", \"unit\": \"celsius\"}', 'name': 'get_current_weather'}, 'type': 'function'}]})"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"model = ChatOpenAI(model=\"gpt-3.5-turbo-1106\").bind(tools=tools)\n",
"model.invoke(\"What's the weather in SF, NYC and LA?\")"
]
}
],
"metadata": {


@@ -5,7 +5,7 @@
"id": "39eaf61b",
"metadata": {},
"source": [
"# Configuration\n",
"# Configure chain internals at runtime\n",
"\n",
"Oftentimes you may want to experiment with, or even expose to the end user, multiple different ways of doing things.\n",
"In order to make this experience as easy as possible, we have defined two methods.\n",
@@ -594,7 +594,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
"version": "3.9.1"
}
},
"nbformat": 4,


@@ -5,7 +5,7 @@
"id": "fbc4bf6e",
"metadata": {},
"source": [
"# Run arbitrary functions\n",
"# Run custom functions\n",
"\n",
"You can use arbitrary functions in the pipeline\n",
"\n",
@@ -175,7 +175,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
"version": "3.9.1"
}
},
"nbformat": 4,


@@ -4,7 +4,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Custom generator functions\n",
"# Stream custom generator functions\n",
"\n",
"You can use generator functions (ie. functions that use the `yield` keyword, and behave like iterators) in a LCEL pipeline.\n",
"\n",
@@ -21,15 +21,7 @@
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"lion, tiger, wolf, gorilla, panda\n"
]
}
],
"outputs": [],
"source": [
"from typing import Iterator, List\n",
"\n",
@@ -43,16 +35,51 @@
")\n",
"model = ChatOpenAI(temperature=0.0)\n",
"\n",
"\n",
"str_chain = prompt | model | StrOutputParser()\n",
"\n",
"print(str_chain.invoke({\"animal\": \"bear\"}))"
"str_chain = prompt | model | StrOutputParser()"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"lion, tiger, wolf, gorilla, panda"
]
}
],
"source": [
"for chunk in str_chain.stream({\"animal\": \"bear\"}):\n",
" print(chunk, end=\"\", flush=True)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'lion, tiger, wolf, gorilla, panda'"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"str_chain.invoke({\"animal\": \"bear\"})"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"# This is a custom parser that splits an iterator of llm tokens\n",
@@ -77,22 +104,61 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"list_chain = str_chain | split_into_list"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"['lion', 'tiger', 'wolf', 'gorilla', 'panda']\n"
"['lion']\n",
"['tiger']\n",
"['wolf']\n",
"['gorilla']\n",
"['panda']\n"
]
}
],
"source": [
"list_chain = str_chain | split_into_list\n",
"\n",
"print(list_chain.invoke({\"animal\": \"bear\"}))"
"for chunk in list_chain.stream({\"animal\": \"bear\"}):\n",
" print(chunk, flush=True)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['lion', 'tiger', 'wolf', 'gorilla', 'panda']"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"list_chain.invoke({\"animal\": \"bear\"})"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@@ -111,9 +177,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.5"
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 4
}


@@ -5,7 +5,7 @@
"id": "b022ab74-794d-4c54-ad47-ff9549ddb9d2",
"metadata": {},
"source": [
"# Use RunnableParallel/RunnableMap\n",
"# Parallelize steps\n",
"\n",
"RunnableParallel (aka. RunnableMap) makes it easy to execute multiple Runnables in parallel, and to return the output of these Runnables as a map."
]
@@ -195,7 +195,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
"version": "3.9.1"
}
},
"nbformat": 4,


@@ -5,7 +5,7 @@
"id": "4b47436a",
"metadata": {},
"source": [
"# Route between multiple Runnables\n",
"# Dynamically route logic based on input\n",
"\n",
"This notebook covers how to do routing in the LangChain Expression Language.\n",
"\n",


@@ -4,33 +4,30 @@ sidebar_class_name: hidden
# LangChain Expression Language (LCEL)
LangChain Expression Language or LCEL is a declarative way to easily compose chains together.
There are several benefits to writing chains in this manner (as opposed to writing normal code):
LangChain Expression Language, or LCEL, is a declarative way to easily compose chains together.
LCEL was designed from day 1 to **support putting prototypes in production, with no code changes**, from the simplest “prompt + LLM” chain to the most complex chains (we've seen folks successfully run LCEL chains with 100s of steps in production). To highlight a few of the reasons you might want to use LCEL:
**Async, Batch, and Streaming Support**
Any chain constructed this way will automatically have full sync, async, batch, and streaming support.
This makes it easy to prototype a chain in a Jupyter notebook using the sync interface, and then expose it as an async streaming interface.
**Streaming support**
When you build your chains with LCEL you get the best possible time-to-first-token (time elapsed until the first chunk of output comes out). For some chains this means, for example, that we stream tokens straight from an LLM to a streaming output parser, and you get back parsed, incremental chunks of output at the same rate as the LLM provider outputs the raw tokens.
**Fallbacks**
The non-determinism of LLMs makes it important to be able to handle errors gracefully.
With LCEL you can easily attach fallbacks to any chain.
**Async support**
Any chain built with LCEL can be called both with the synchronous API (e.g. in your Jupyter notebook while prototyping) and with the asynchronous API (e.g. in a [LangServe](/docs/langserve) server). This enables using the same code for prototypes and in production, with great performance, and the ability to handle many concurrent requests in the same server.
**Parallelism**
Since LLM applications involve (sometimes long) API calls, it often becomes important to run things in parallel.
With LCEL syntax, any components that can be run in parallel automatically are.
**Optimized parallel execution**
Whenever your LCEL chains have steps that can be executed in parallel (e.g. if you fetch documents from multiple retrievers), we automatically do it, both in the sync and the async interfaces, for the smallest possible latency.
**Seamless LangSmith Tracing Integration**
**Retries and fallbacks**
Configure retries and fallbacks for any part of your LCEL chain. This is a great way to make your chains more reliable at scale. We're currently working on adding streaming support for retries/fallbacks, so you can get the added reliability without any latency cost.
**Access intermediate results**
For more complex chains it's often very useful to access the results of intermediate steps even before the final output is produced. This can be used to let end-users know something is happening, or even just to debug your chain. You can stream intermediate results, and it's available on every [LangServe](/docs/langserve) server.
**Input and output schemas**
Input and output schemas give every LCEL chain Pydantic and JSONSchema schemas inferred from the structure of your chain. This can be used for validation of inputs and outputs, and is an integral part of LangServe.
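For example, a minimal sketch of inspecting these schemas:

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

chain = ChatPromptTemplate.from_template("tell me about {topic}") | ChatOpenAI()
chain.input_schema.schema()   # JSONSchema for the expected {"topic": ...} input
chain.output_schema.schema()  # JSONSchema for the chat message output
```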
**Seamless LangSmith tracing integration**
As your chains get more and more complex, it becomes increasingly important to understand what exactly is happening at every step.
With LCEL, **all** steps are automatically logged to [LangSmith](https://smith.langchain.com) for maximal observability and debuggability.
With LCEL, **all** steps are automatically logged to [LangSmith](/docs/langsmith/) for maximum observability and debuggability.
#### [Interface](/docs/expression_language/interface)
The base interface shared by all LCEL objects
#### [How to](/docs/expression_language/how_to)
How to use core features of LCEL
#### [Cookbook](/docs/expression_language/cookbook)
Examples of common LCEL usage patterns
#### [Why use LCEL](/docs/expression_language/why)
A deeper dive into the benefits of LCEL
**Seamless LangServe deployment integration**
Any chain created with LCEL can be easily deployed using LangServe.


@@ -8,7 +8,7 @@
"---\n",
"sidebar_position: 0\n",
"title: Interface\n",
"---\n"
"---"
]
},
{
@@ -31,26 +31,17 @@
"- [`abatch`](#async-batch): call the chain on a list of inputs async\n",
"- [`astream_log`](#async-stream-intermediate-steps): stream back intermediate steps as they happen, in addition to the final response\n",
"\n",
"The **input type** varies by component:\n",
"The **input type** and **output type** varies by component:\n",
"\n",
"| Component | Input Type |\n",
"| --- | --- |\n",
"|Prompt|Dictionary|\n",
"|Retriever|Single string|\n",
"|LLM, ChatModel| Single string, list of chat messages or a PromptValue|\n",
"|Tool|Single string, or dictionary, depending on the tool|\n",
"|OutputParser|The output of an LLM or ChatModel|\n",
"| Component | Input Type | Output Type |\n",
"| --- | --- | --- |\n",
"| Prompt | Dictionary | PromptValue |\n",
"| ChatModel | Single string, list of chat messages or a PromptValue | ChatMessage |\n",
"| LLM | Single string, list of chat messages or a PromptValue | String |\n",
"| OutputParser | The output of an LLM or ChatModel | Depends on the parser |\n",
"| Retriever | Single string | List of Documents |\n",
"| Tool | Single string or dictionary, depending on the tool | Depends on the tool |\n",
"\n",
"The **output type** also varies by component:\n",
"\n",
"| Component | Output Type |\n",
"| --- | --- |\n",
"| LLM | String |\n",
"| ChatModel | ChatMessage |\n",
"| Prompt | PromptValue |\n",
"| Retriever | List of documents |\n",
"| Tool | Depends on the tool |\n",
"| OutputParser | Depends on the parser |\n",
"\n",
"All runnables expose input and output **schemas** to inspect the inputs and outputs:\n",
"- [`input_schema`](#input-schema): an input Pydantic model auto-generated from the structure of the Runnable\n",
@@ -1161,7 +1152,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.9.1"
}
},
"nbformat": 4,


@@ -1,11 +0,0 @@
# Why use LCEL?
The LangChain Expression Language was designed from day 1 to **support putting prototypes in production, with no code changes**, from the simplest “prompt + LLM” chain to the most complex chains (we've seen folks successfully running in production LCEL chains with 100s of steps). To highlight a few of the reasons you might want to use LCEL:
- first-class support for streaming: when you build your chains with LCEL you get the best possible time-to-first-token (time elapsed until the first chunk of output comes out). For some chains this means e.g. we stream tokens straight from an LLM to a streaming output parser, and you get back parsed, incremental chunks of output at the same rate as the LLM provider outputs the raw tokens. We're constantly improving streaming support; recently we added a [streaming JSON parser](https://twitter.com/LangChainAI/status/1709690468030914584), and more is in the works.
- first-class async support: any chain built with LCEL can be called both with the synchronous API (e.g. in your Jupyter notebook while prototyping) as well as with the asynchronous API (e.g. in a [LangServe](https://github.com/langchain-ai/langserve) server). This enables using the same code for prototypes and in production, with great performance, and the ability to handle many concurrent requests in the same server.
- optimised parallel execution: whenever your LCEL chains have steps that can be executed in parallel (e.g. if you fetch documents from multiple retrievers) we automatically do it, both in the sync and the async interfaces, for the smallest possible latency.
- support for retries and fallbacks: more recently we've added support for configuring retries and fallbacks for any part of your LCEL chain. This is a great way to make your chains more reliable at scale. We're currently working on adding streaming support for retries/fallbacks, so you can get the added reliability without any latency cost.
- accessing intermediate results: for more complex chains it's often very useful to access the results of intermediate steps even before the final output is produced. This can be used to let end-users know something is happening, or even just to debug your chain. We've added support for [streaming intermediate results](https://x.com/LangChainAI/status/1711806009097044193?s=20), and it's available on every LangServe server.
- [input and output schemas](https://x.com/LangChainAI/status/1711805322195861934?s=20): input and output schemas give every LCEL chain Pydantic and JSONSchema schemas inferred from the structure of your chain. This can be used for validation of inputs and outputs, and is an integral part of LangServe.
- tracing with LangSmith: all chains built with LCEL have first-class tracing support, which can be used to debug your chains, or to understand what's happening in production. To enable this all you have to do is add your [LangSmith](https://www.langchain.com/langsmith) API key as an environment variable.


@@ -28,3 +28,37 @@ If you want to install from source, you can do so by cloning the repo and be sur
```bash
pip install -e .
```
## Langchain experimental
The `langchain-experimental` package holds experimental LangChain code, intended for research and experimental uses.
Install with:
```bash
pip install langchain-experimental
```
## LangChain CLI
The LangChain CLI is useful for working with LangChain templates and other LangServe projects.
Install with:
```bash
pip install langchain-cli
```
## LangServe
LangServe helps developers deploy LangChain runnables and chains as a REST API.
LangServe is automatically installed by LangChain CLI.
If not using LangChain CLI, install with:
```bash
pip install "langserve[all]"
```
for both client and server dependencies. Or `pip install "langserve[client]"` for client code, and `pip install "langserve[server]"` for server code.
## LangSmith SDK
The LangSmith SDK is automatically installed by LangChain.
If not using LangChain, install with:
```bash
pip install langsmith
```


@@ -8,11 +8,26 @@ sidebar_position: 0
- **Are context-aware**: connect a language model to sources of context (prompt instructions, few shot examples, content to ground its response in, etc.)
- **Reason**: rely on a language model to reason (about how to answer based on provided context, what actions to take, etc.)
The main value props of LangChain are:
1. **Components**: abstractions for working with language models, along with a collection of implementations for each abstraction. Components are modular and easy-to-use, whether you are using the rest of the LangChain framework or not
2. **Off-the-shelf chains**: a structured assembly of components for accomplishing specific higher-level tasks
This framework consists of several parts.
- **LangChain Libraries**: The Python and JavaScript libraries. Contains interfaces and integrations for a myriad of components, a basic runtime for combining these components into chains and agents, and off-the-shelf implementations of chains and agents.
- **[LangChain Templates](/docs/templates)**: A collection of easily deployable reference architectures for a wide variety of tasks.
- **[LangServe](/docs/langserve)**: A library for deploying LangChain chains as a REST API.
- **[LangSmith](/docs/langsmith)**: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain.
Off-the-shelf chains make it easy to get started. For complex applications, components make it easy to customize existing chains and build new ones.
![LangChain Diagram](/img/langchain_stack.png)
Together, these products simplify the entire application lifecycle:
- **Develop**: Write your applications in LangChain/LangChain.js. Hit the ground running using Templates for reference.
- **Productionize**: Use LangSmith to inspect, test and monitor your chains, so that you can constantly improve and deploy with confidence.
- **Deploy**: Turn any chain into an API with LangServe.
## LangChain Libraries
The main value props of the LangChain packages are:
1. **Components**: composable tools and integrations for working with language models. Components are modular and easy-to-use, whether you are using the rest of the LangChain framework or not
2. **Off-the-shelf chains**: built-in assemblages of components for accomplishing higher-level tasks
Off-the-shelf chains make it easy to get started. Components make it easy to customize existing chains and build new ones.
## Get started
@@ -20,45 +35,59 @@ Off-the-shelf chains make it easy to get started. For complex applications, comp
We recommend following our [Quickstart](/docs/get_started/quickstart) guide to familiarize yourself with the framework by building your first LangChain application.
_**Note**: These docs are for the LangChain [Python package](https://github.com/langchain-ai/langchain). For documentation on [LangChain.js](https://github.com/langchain-ai/langchainjs), the JS/TS version, [head here](https://js.langchain.com/docs)._
Read up on our [Security](/docs/security) best practices to make sure you're developing safely with LangChain.
:::note
These docs focus on the Python LangChain library. [Head here](https://js.langchain.com) for docs on the JavaScript LangChain library.
:::
## LangChain Expression Language (LCEL)
LCEL is a declarative way to compose chains. LCEL was designed from day 1 to support putting prototypes in production, with no code changes, from the simplest “prompt + LLM” chain to the most complex chains.
- **[Overview](/docs/expression_language/)**: LCEL and its benefits
- **[Interface](/docs/expression_language/interface)**: The standard interface for LCEL objects
- **[How-to](/docs/expression_language/how_to)**: Key features of LCEL
- **[Cookbook](/docs/expression_language/cookbook)**: Example code for accomplishing common tasks
## Modules
LangChain provides standard, extendable interfaces and external integrations for the following modules, listed from least to most complex:
LangChain provides standard, extendable interfaces and integrations for the following modules:
#### [Model I/O](/docs/modules/model_io/)
Interface with language models
#### [Retrieval](/docs/modules/data_connection/)
Interface with application-specific data
#### [Chains](/docs/modules/chains/)
Construct sequences of calls
#### [Agents](/docs/modules/agents/)
Let chains choose which tools to use given high-level directives
#### [Memory](/docs/modules/memory/)
Persist application state between runs of a chain
#### [Callbacks](/docs/modules/callbacks/)
Log and stream intermediate steps of any chain
Let models choose which tools to use given high-level directives
## Examples, ecosystem, and resources
### [Use cases](/docs/use_cases/question_answering/)
Walkthroughs and best-practices for common end-to-end use cases, like:
Walkthroughs and techniques for common end-to-end use cases, like:
- [Document question answering](/docs/use_cases/question_answering/)
- [Chatbots](/docs/use_cases/chatbots/)
- [Analyzing structured data](/docs/use_cases/qa_structured/sql/)
- and much more...
### [Guides](/docs/guides/)
Learn best practices for developing with LangChain.
### [Integrations](/docs/integrations/providers/)
LangChain is part of a rich ecosystem of tools that integrate with our framework and build on top of it. Check out our growing list of [integrations](/docs/integrations/providers/).
### [Ecosystem](/docs/integrations/providers/)
LangChain is part of a rich ecosystem of tools that integrate with our framework and build on top of it. Check out our growing list of [integrations](/docs/integrations/providers/) and [dependent repos](/docs/additional_resources/dependents).
### [Guides](/docs/guides/adapters/openai)
Best practices for developing with LangChain.
### [Additional resources](/docs/additional_resources/)
Our community is full of prolific developers, creative builders, and fantastic teachers. Check out [YouTube tutorials](/docs/additional_resources/youtube) for great tutorials from folks in the community, and [Gallery](https://github.com/kyrolabs/awesome-langchain) for a list of awesome LangChain projects, compiled by the folks at [KyroLabs](https://kyrolabs.com).
### [API reference](https://api.python.langchain.com)
Head to the reference section for full documentation of all classes and methods in the LangChain and LangChain Experimental Python packages.
### [Developer's guide](/docs/contributing)
Check out the developer's guide for guidelines on contributing and help getting your dev environment set up.
### [Community](/docs/community)
Head to the [Community navigator](/docs/community) to find places to ask questions, share feedback, meet other developers, and dream about the future of LLMs.
## API reference
Head to the [reference](https://api.python.langchain.com) section for full documentation of all classes and methods in the LangChain Python package.


@@ -1,6 +1,17 @@
# Quickstart
## Installation
In this quickstart we'll show you how to:
- Get setup with LangChain, LangSmith and LangServe
- Use the most basic and common components of LangChain: prompt templates, models, and output parsers
- Use LangChain Expression Language, the protocol that LangChain is built on and which facilitates component chaining
- Build a simple application with LangChain
- Trace your application with LangSmith
- Serve your application with LangServe
That's a fair amount to cover! Let's dive in.
## Setup
### Installation
To install LangChain run:
@@ -20,7 +31,7 @@ import CodeBlock from "@theme/CodeBlock";
For more details, see our [Installation guide](/docs/get_started/installation).
## Environment setup
### Environment
Using LangChain will usually require integrations with one or more model providers, data stores, APIs, etc. For this example, we'll use OpenAI's model APIs.
@@ -39,56 +50,79 @@ export OPENAI_API_KEY="..."
If you'd prefer not to set an environment variable you can pass the key in directly via the `openai_api_key` named parameter when initiating the OpenAI LLM class:
```python
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
llm = OpenAI(openai_api_key="...")
llm = ChatOpenAI(openai_api_key="...")
```
### LangSmith
## Building an application
Many of the applications you build with LangChain will contain multiple steps with multiple LLM invocations.
As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent.
The best way to do this is with [LangSmith](https://smith.langchain.com).
Now we can start building our language model application. LangChain provides many modules that can be used to build language model applications.
Modules can be used as stand-alones in simple applications and they can be combined for more complex use cases.
Note that LangSmith is not needed, but it is helpful.
If you do want to use LangSmith, after you sign up at the link above, make sure to set your environment variables to start logging traces:
The most common and most important chain that LangChain helps create contains three things:
- LLM: The language model is the core reasoning engine here. In order to work with LangChain, you need to understand the different types of language models and how to work with them.
- Prompt Templates: This provides instructions to the language model. This controls what the language model outputs, so understanding how to construct prompts and different prompting strategies is crucial.
- Output Parsers: These translate the raw response from the LLM to a more workable format, making it easy to use the output downstream.
```shell
export LANGCHAIN_TRACING_V2="true"
export LANGCHAIN_API_KEY=...
```
In this getting started guide we will cover those three components by themselves, and then go over how to combine all of them.
### LangServe
LangServe helps developers deploy LangChain chains as a REST API. You do not need to use LangServe to use LangChain, but in this guide we'll show how you can deploy your app with LangServe.
Install with:
```bash
pip install "langserve[all]"
```
## Building with LangChain
LangChain provides many modules that can be used to build language model applications.
Modules can be used as standalones in simple applications and they can be composed for more complex use cases.
Composition is powered by **LangChain Expression Language** (LCEL), which defines a unified `Runnable` interface that many modules implement, making it possible to seamlessly chain components.
The simplest and most common chain contains three things:
- LLM/Chat Model: The language model is the core reasoning engine here. In order to work with LangChain, you need to understand the different types of language models and how to work with them.
- Prompt Template: This provides instructions to the language model. This controls what the language model outputs, so understanding how to construct prompts and different prompting strategies is crucial.
- Output Parser: These translate the raw response from the language model to a more workable format, making it easy to use the output downstream.
In this guide we'll cover those three components individually, and then go over how to combine them.
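As a quick preview, a minimal sketch of such a chain (assuming your OpenAI API key is set):

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema import StrOutputParser

# prompt template -> chat model -> output parser, composed with LCEL's | operator
chain = ChatPromptTemplate.from_template("tell me a short joke about {topic}") | ChatOpenAI() | StrOutputParser()
chain.invoke({"topic": "bears"})
```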
Understanding these concepts will set you up well for being able to use and customize LangChain applications.
Most LangChain applications allow you to configure the LLM and/or the prompt used, so knowing how to take advantage of this will be a big enabler.
Most LangChain applications allow you to configure the model and/or the prompt, so knowing how to take advantage of this will be a big enabler.
## LLMs
### LLM / Chat Model
There are two types of language models, which in LangChain are called:
There are two types of language models:
- LLMs: this is a language model which takes a string as input and returns a string
- ChatModels: this is a language model which takes a list of messages as input and returns a message
- `LLM`: underlying model takes a string as input and returns a string
- `ChatModel`: underlying model takes a list of messages as input and returns a message
The input/output for LLMs is simple and easy to understand - a string.
But what about ChatModels? The input there is a list of `ChatMessages`, and the output is a single `ChatMessage`.
A `ChatMessage` has two required components:
Strings are simple, but what exactly are messages? The base message interface is defined by `BaseMessage`, which has two required attributes:
- `content`: This is the content of the message.
- `role`: This is the role of the entity from which the `ChatMessage` is coming.
- `content`: The content of the message. Usually a string.
- `role`: The entity from which the `BaseMessage` is coming.
LangChain provides several objects to easily distinguish between different roles:
- `HumanMessage`: A `ChatMessage` coming from a human/user.
- `AIMessage`: A `ChatMessage` coming from an AI/assistant.
- `SystemMessage`: A `ChatMessage` coming from the system.
- `FunctionMessage`: A `ChatMessage` coming from a function call.
- `HumanMessage`: A `BaseMessage` coming from a human/user.
- `AIMessage`: A `BaseMessage` coming from an AI/assistant.
- `SystemMessage`: A `BaseMessage` coming from the system.
- `FunctionMessage` / `ToolMessage`: A `BaseMessage` containing the output of a function or tool call.
If none of those roles sound right, there is also a `ChatMessage` class where you can specify the role manually.
For more information on how to use these different messages most effectively, see our prompting guide.
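To make the message objects concrete, here is a minimal sketch of constructing them by hand (imports from `langchain.schema`, matching the examples later in this guide):
```python
from langchain.schema import AIMessage, ChatMessage, HumanMessage, SystemMessage

messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="hi!"),
    AIMessage(content="Hello! How can I help you?"),
]

# If none of the built-in roles fit, specify one manually:
custom_message = ChatMessage(role="critic", content="Too verbose.")
```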
Langchain provides a common interface that's shared by both LLMs and ChatModels.
However it's useful to understand this difference in order to construct prompts for a given language model.
LangChain provides a common interface that's shared by both `LLM`s and `ChatModel`s.
However, it's useful to understand the difference in order to most effectively construct prompts for a given language model.
The standard interface that LangChain provides has two methods:
- `predict`: Takes in a string, returns a string
- `predict_messages`: Takes in a list of messages, returns a message.
The simplest way to call an `LLM` or `ChatModel` is using `.invoke()`, the universal synchronous call method for all LangChain Expression Language (LCEL) objects:
- `LLM.invoke`: Takes in a string, returns a string.
- `ChatModel.invoke`: Takes in a list of `BaseMessage`, returns a `BaseMessage`.
The input types for these methods are actually more general than this, but for simplicity here we can assume LLMs only take strings and chat models only take lists of messages.
Check out the "Go deeper" section below to learn more about model invocation.
Let's see how to work with these different types of models and these different types of inputs.
First, let's import an LLM and a ChatModel.
@@ -99,50 +133,36 @@ from langchain.chat_models import ChatOpenAI
llm = OpenAI()
chat_model = ChatOpenAI()
llm.predict("hi!")
>>> "Hi"
chat_model.predict("hi!")
>>> "Hi"
```
The `OpenAI` and `ChatOpenAI` objects are basically just configuration objects.
`LLM` and `ChatModel` objects are effectively configuration objects.
You can initialize them with parameters like `temperature` and others, and pass them around.
Next, let's use the `predict` method to run over a string input.
```python
text = "What would be a good company name for a company that makes colorful socks?"
llm.predict(text)
# >> Feetful of Fun
chat_model.predict(text)
# >> Socks O'Color
```
Finally, let's use the `predict_messages` method to run over a list of messages.
```python
from langchain.schema import HumanMessage
text = "What would be a good company name for a company that makes colorful socks?"
messages = [HumanMessage(content=text)]
llm.predict_messages(messages)
llm.invoke(text)
# >> Feetful of Fun
chat_model.predict_messages(messages)
# >> Socks O'Color
chat_model.invoke(messages)
# >> AIMessage(content="Socks O'Color")
```
For both these methods, you can also pass in parameters as keyword arguments.
For example, you could pass in `temperature=0` to adjust the temperature that is used from what the object was configured with.
Whatever values are passed in during run time will always override what the object was configured with.
<details> <summary>Go deeper</summary>
`LLM.invoke` and `ChatModel.invoke` actually both support as input any of `Union[str, List[BaseMessage], PromptValue]`.
`PromptValue` is an object that defines its own custom logic for returning its inputs either as a string or as messages.
`LLM`s have logic for coercing any of these into a string, and `ChatModel`s have logic for coercing any of these to messages.
The fact that `LLM` and `ChatModel` accept the same inputs means that you can directly swap them for one another in most chains without breaking anything,
though it's of course important to think about how inputs are being coerced and how that may affect model performance.
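As a quick sketch of this interchangeability (reusing the `llm` and `chat_model` objects from above):
```python
from langchain.schema import HumanMessage

llm.invoke("hi!")                          # str in, str out
chat_model.invoke("hi!")                   # str is coerced to [HumanMessage(content="hi!")]
llm.invoke([HumanMessage(content="hi!")])  # messages are coerced to a single string
```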
To dive deeper on models head to the [Language models](/docs/modules/model_io/models) section.
## Prompt templates
</details>
### Prompt templates
Most LLM applications do not pass user input directly into an LLM. Usually they will add the user input to a larger piece of text, called a prompt template, that provides additional context on the specific task at hand.
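For example, a minimal sketch of a string prompt template (mirroring the `PromptTemplate` usage shown elsewhere in these docs):
```python
from langchain.prompts import PromptTemplate

prompt = PromptTemplate.from_template(
    "What is a good name for a company that makes {product}?"
)
prompt.format(product="colorful socks")
# >> "What is a good name for a company that makes colorful socks?"
```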
@@ -168,10 +188,10 @@ You can "partial" out variables - e.g. you can format only some of the variables
You can compose them together, easily combining different templates into a single prompt.
For explanations of these functionalities, see the [section on prompts](/docs/modules/model_io/prompts) for more detail.
PromptTemplates can also be used to produce a list of messages.
`PromptTemplate`s can also be used to produce a list of messages.
In this case, the prompt not only contains information about the content, but also each message (its role, its position in the list, etc.).
Here, what happens most often is a ChatPromptTemplate is a list of ChatMessageTemplates.
Each ChatMessageTemplate contains instructions for how to format that ChatMessage - its role, and then also its content.
Here, what happens most often is a `ChatPromptTemplate` is a list of `ChatMessageTemplates`.
Each `ChatMessageTemplate` contains instructions for how to format that `ChatMessage` - its role, and then also its content.
Let's take a look at this below:
```python
@@ -198,13 +218,13 @@ chat_prompt.format_messages(input_language="English", output_language="French",
ChatPromptTemplates can also be constructed in other ways - see the [section on prompts](/docs/modules/model_io/prompts) for more detail.
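As a sketch of the pattern just described, using the tuple shorthand that also appears in the LangServe example later in this guide:
```python
from langchain.prompts import ChatPromptTemplate

chat_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant that translates {input_language} to {output_language}."),
    ("human", "{text}"),
])
chat_prompt.format_messages(
    input_language="English", output_language="French", text="I love programming."
)
# >> [SystemMessage(...), HumanMessage(content="I love programming.")]
```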
## Output parsers
### Output parsers
OutputParsers convert the raw output of an LLM into a format that can be used downstream.
There are a few main types of OutputParsers, including:
`OutputParsers` convert the raw output of a language model into a format that can be used downstream.
There are a few main types of `OutputParser`s, including:
- Convert text from LLM into structured information (e.g. JSON)
- Convert a ChatMessage into just a string
- Convert text from `LLM` into structured information (e.g. JSON)
- Convert a `ChatMessage` into just a string
- Convert the extra information returned from a call besides the message (like OpenAI function invocation) into a string.
For full information on this, see the [section on output parsers](/docs/modules/model_io/output_parsers).
@@ -226,7 +246,7 @@ CommaSeparatedListOutputParser().parse("hi, bye")
# >> ['hi', 'bye']
```
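As a sketch of the second type listed above (converting a `ChatMessage` into a string), `StrOutputParser` simply extracts the message content:
```python
from langchain.schema import AIMessage
from langchain.schema.output_parser import StrOutputParser

StrOutputParser().invoke(AIMessage(content="Socks O'Color"))
# >> "Socks O'Color"
```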
## PromptTemplate + LLM + OutputParser
### Composing with LCEL
We can now combine all these into one chain.
This chain will take input variables, pass those to a prompt template to create a prompt, pass the prompt to a language model, and then pass the output through an (optional) output parser.
@@ -234,15 +254,17 @@ This is a convenient way to bundle up a modular piece of logic.
Let's see it in action!
```python
from typing import List
from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import ChatPromptTemplate
from langchain.prompts import ChatPromptTemplate
from langchain.schema import BaseOutputParser
class CommaSeparatedListOutputParser(BaseOutputParser):
class CommaSeparatedListOutputParser(BaseOutputParser[List[str]]):
"""Parse the output of an LLM call to a comma-separated list."""
def parse(self, text: str):
def parse(self, text: str) -> List[str]:
"""Parse the output of an LLM call."""
return text.strip().split(", ")
@@ -260,20 +282,118 @@ chain.invoke({"text": "colors"})
# >> ['red', 'blue', 'green', 'yellow', 'orange']
```
Note that we are using the `|` syntax to join these components together.
This `|` syntax is called the LangChain Expression Language.
To learn more about this syntax, read the documentation [here](/docs/expression_language).
This `|` syntax is powered by the LangChain Expression Language (LCEL) and relies on the universal `Runnable` interface that all of these objects implement.
To learn more about LCEL, read the documentation [here](/docs/expression_language).
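Because every component implements `Runnable`, the composed chain inherits the same interface; a quick sketch of the batching and streaming methods:
```python
# One parsed list per input, executed in parallel where possible:
chain.batch([{"text": "colors"}, {"text": "animals"}])

# Stream output chunks as they are produced:
for chunk in chain.stream({"text": "colors"}):
    print(chunk)
```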
## Tracing with LangSmith
Assuming we've set our environment variables as shown in the beginning, all of the model and chain calls we've been making will have been automatically logged to LangSmith.
Once there, we can use LangSmith to debug and annotate our application traces, then turn them into datasets for evaluating future iterations of the application.
Check out what the trace for the above chain would look like:
https://smith.langchain.com/public/09370280-4330-4eb4-a7e8-c91817f6aa13/r
For more on LangSmith [head here](/docs/langsmith/).
## Serving with LangServe
Now that we've built an application, we need to serve it. That's where LangServe comes in.
LangServe helps developers deploy LCEL chains as a REST API.
The library is integrated with FastAPI and uses pydantic for data validation.
### Server
To create a server for our application we'll make a `serve.py` file with three things:
1. The definition of our chain (same as above)
2. Our FastAPI app
3. A definition of a route from which to serve the chain, which is done with `langserve.add_routes`
```python
#!/usr/bin/env python
from typing import List
from fastapi import FastAPI
from langchain.prompts import ChatPromptTemplate
from langchain.chat_models import ChatOpenAI
from langchain.schema import BaseOutputParser
from langserve import add_routes
# 1. Chain definition
class CommaSeparatedListOutputParser(BaseOutputParser[List[str]]):
"""Parse the output of an LLM call to a comma-separated list."""
def parse(self, text: str) -> List[str]:
"""Parse the output of an LLM call."""
return text.strip().split(", ")
template = """You are a helpful assistant who generates comma separated lists.
A user will pass in a category, and you should generate 5 objects in that category in a comma separated list.
ONLY return a comma separated list, and nothing more."""
human_template = "{text}"
chat_prompt = ChatPromptTemplate.from_messages([
("system", template),
("human", human_template),
])
category_chain = chat_prompt | ChatOpenAI() | CommaSeparatedListOutputParser()
# 2. App definition
app = FastAPI(
title="LangChain Server",
version="1.0",
description="A simple api server using Langchain's Runnable interfaces",
)
# 3. Adding chain route
add_routes(
app,
category_chain,
path="/category_chain",
)
if __name__ == "__main__":
import uvicorn
uvicorn.run(app, host="localhost", port=8000)
```
And that's it! If we execute this file:
```bash
python serve.py
```
we should see our chain being served at localhost:8000.
### Playground
Every LangServe service comes with a simple built-in UI for configuring and invoking the application with streaming output and visibility into intermediate steps.
Head to http://localhost:8000/category_chain/playground/ to try it out!
### Client
Now let's set up a client for programmatically interacting with our service. We can easily do this with the `langserve.RemoteRunnable`.
Using this, we can interact with the served chain as if it were running client-side.
```python
from langserve import RemoteRunnable
remote_chain = RemoteRunnable("http://localhost:8000/category_chain/")
remote_chain.invoke({"text": "colors"})
# >> ['red', 'blue', 'green', 'yellow', 'orange']
```
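The route also exposes plain HTTP endpoints (`/invoke`, `/batch`, `/stream`), so any HTTP client works; assuming the server above is running, a request might look like:
```bash
curl -X POST http://localhost:8000/category_chain/invoke \
  -H "Content-Type: application/json" \
  -d '{"input": {"text": "colors"}}'
```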
To learn more about the many other features of LangServe [head here](/docs/langserve).
## Next steps
This is it!
We've now gone over how to create the core building block of LangChain applications.
There is a lot more nuance in all these components (LLMs, prompts, output parsers) and a lot more different components to learn about as well.
We've touched on how to build an application with LangChain, how to trace it with LangSmith, and how to serve it with LangServe.
There are a lot more features in all three of these than we can cover here.
To continue on your journey:
- [Dive deeper](/docs/modules/model_io) into LLMs, prompts, and output parsers
- Learn the other [key components](/docs/modules)
- Read up on [LangChain Expression Language](/docs/expression_language) to learn how to chain these components together
- Check out our [helpful guides](/docs/guides) for detailed walkthroughs on particular topics
- Explore [end-to-end use cases](/docs/use_cases/qa_structured/sql)
- Read up on [LangChain Expression Language (LCEL)](/docs/expression_language) to learn how to chain these components together
- [Dive deeper](/docs/modules/model_io) into LLMs, prompts, and output parsers and learn the other [key components](/docs/modules)
- Explore common [end-to-end use cases](/docs/use_cases/qa_structured/sql) and [template applications](/docs/templates)
- [Read up on LangSmith](/docs/langsmith/), the platform for debugging, testing, monitoring and more
- Learn more about serving your applications with [LangServe](/docs/langserve)

View File

@@ -8,7 +8,7 @@ Here are a few different tools and functionalities to aid in debugging.
## Tracing
Platforms with tracing capabilities like [LangSmith](/docs/guides/langsmith/) and [WandB](/docs/integrations/providers/wandb_tracing) are the most comprehensive solutions for debugging. These platforms make it easy to not only log and visualize LLM apps, but also to actively debug, test and refine them.
Platforms with tracing capabilities like [LangSmith](/docs/langsmith/) and [WandB](/docs/integrations/providers/wandb_tracing) are the most comprehensive solutions for debugging. These platforms make it easy to not only log and visualize LLM apps, but also to actively debug, test and refine them.
For anyone building production-grade LLM applications, we highly recommend using a platform like this.

View File

@@ -113,7 +113,7 @@
"tags": []
},
"source": [
"Here are two examples of how to use the `TrubricsCallbackHandler` with Langchain [LLMs](https://python.langchain.com/docs/modules/model_io/models/llms/) or [Chat Models](https://python.langchain.com/docs/modules/model_io/models/chat/). We will use OpenAI models, so set your `OPENAI_API_KEY` key here:"
"Here are two examples of how to use the `TrubricsCallbackHandler` with Langchain [LLMs](https://python.langchain.com/docs/modules/model_io/llms/) or [Chat Models](https://python.langchain.com/docs/modules/model_io/chat/). We will use OpenAI models, so set your `OPENAI_API_KEY` key here:"
]
},
{

View File

@@ -5,18 +5,20 @@
"id": "38f26d7a",
"metadata": {},
"source": [
"# Azure\n",
"# Azure OpenAI\n",
"\n",
"This notebook goes over how to connect to an Azure hosted OpenAI endpoint"
"This notebook goes over how to connect to an Azure hosted OpenAI endpoint. We recommend having version `openai>=1` installed."
]
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 3,
"id": "96164b42",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"from langchain.chat_models import AzureChatOpenAI\n",
"from langchain.schema import HumanMessage"
]
@@ -24,57 +26,51 @@
{
"cell_type": "code",
"execution_count": 4,
"id": "cbe4bb58-ba13-4355-8af9-cd990dc47a64",
"metadata": {},
"outputs": [],
"source": [
"os.environ[\"AZURE_OPENAI_API_KEY\"] = \"...\"\n",
"os.environ[\"AZURE_OPENAI_ENDPOINT\"] = \"https://<your-endpoint>.openai.azure.com/\""
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "8161278f",
"metadata": {},
"outputs": [],
"source": [
"BASE_URL = \"https://${TODO}.openai.azure.com\"\n",
"API_KEY = \"...\"\n",
"DEPLOYMENT_NAME = \"chat\"\n",
"model = AzureChatOpenAI(\n",
" openai_api_base=BASE_URL,\n",
" openai_api_version=\"2023-05-15\",\n",
" deployment_name=DEPLOYMENT_NAME,\n",
" openai_api_key=API_KEY,\n",
" openai_api_type=\"azure\",\n",
" azure_deployment=\"your-deployment-name\",\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 15,
"id": "99509140",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"\\n\\nJ'aime programmer.\", additional_kwargs={})"
"AIMessage(content=\"J'adore la programmation.\")"
]
},
"execution_count": 5,
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"model(\n",
" [\n",
" HumanMessage(\n",
" content=\"Translate this sentence from English to French. I love programming.\"\n",
" )\n",
" ]\n",
")"
"message = HumanMessage(\n",
" content=\"Translate this sentence from English to French. I love programming.\"\n",
")\n",
"model([message])"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3b6e9376",
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"id": "f27fa24d",
@@ -88,7 +84,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 8,
"id": "0531798a",
"metadata": {},
"outputs": [],
@@ -98,48 +94,19 @@
},
{
"cell_type": "code",
"execution_count": 14,
"id": "3fd97dfc",
"metadata": {},
"outputs": [],
"source": [
"BASE_URL = \"https://{endpoint}.openai.azure.com\"\n",
"API_KEY = \"...\"\n",
"DEPLOYMENT_NAME = \"gpt-35-turbo\" # in Azure, this deployment has version 0613 - input and output tokens are counted separately"
]
},
{
"cell_type": "code",
"execution_count": 15,
"execution_count": null,
"id": "aceddb72",
"metadata": {
"scrolled": true
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Total Cost (USD): $0.000054\n"
]
}
],
"outputs": [],
"source": [
"model = AzureChatOpenAI(\n",
" openai_api_base=BASE_URL,\n",
" openai_api_version=\"2023-05-15\",\n",
" deployment_name=DEPLOYMENT_NAME,\n",
" openai_api_key=API_KEY,\n",
" openai_api_type=\"azure\",\n",
" azure_deployment=\"gpt-35-turbo\", # in Azure, this deployment has version 0613 - input and output tokens are counted separately\n",
")\n",
"with get_openai_callback() as cb:\n",
" model(\n",
" [\n",
" HumanMessage(\n",
" content=\"Translate this sentence from English to French. I love programming.\"\n",
" )\n",
" ]\n",
" )\n",
" model([message])\n",
" print(\n",
" f\"Total Cost (USD): ${format(cb.total_cost, '.6f')}\"\n",
" ) # without specifying the model version, flat-rate 0.002 USD per 1k input and output tokens is used"
@@ -169,21 +136,12 @@
],
"source": [
"model0613 = AzureChatOpenAI(\n",
" openai_api_base=BASE_URL,\n",
" openai_api_version=\"2023-05-15\",\n",
" deployment_name=DEPLOYMENT_NAME,\n",
" openai_api_key=API_KEY,\n",
" openai_api_type=\"azure\",\n",
" deployment_name=\"gpt-35-turbo,\n",
" model_version=\"0613\",\n",
")\n",
"with get_openai_callback() as cb:\n",
" model0613(\n",
" [\n",
" HumanMessage(\n",
" content=\"Translate this sentence from English to French. I love programming.\"\n",
" )\n",
" ]\n",
" )\n",
" model0613([message])\n",
" print(f\"Total Cost (USD): ${format(cb.total_cost, '.6f')}\")"
]
},
@@ -212,7 +170,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
"version": "3.9.1"
}
},
"nbformat": 4,

View File

@@ -65,9 +65,7 @@
"id": "359565a7-dad3-403c-a73c-6414b1295127",
"metadata": {},
"source": [
"## 2. Define chat loader\n",
"\n",
"LangChain currently does not support "
"## 2. Define chat loader"
]
},
{

File diff suppressed because one or more lines are too long

View File

@@ -1,155 +1,218 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "959300d4",
"metadata": {},
"source": [
"# Hugging Face Local Pipelines\n",
"\n",
"Hugging Face models can be run locally through the `HuggingFacePipeline` class.\n",
"\n",
"The [Hugging Face Model Hub](https://huggingface.co/models) hosts over 120k models, 20k datasets, and 50k demo apps (Spaces), all open source and publicly available, in an online platform where people can easily collaborate and build ML together.\n",
"\n",
"These can be called from LangChain either through this local pipeline wrapper or by calling their hosted inference endpoints through the HuggingFaceHub class. For more information on the hosted pipelines, see the [HuggingFaceHub](huggingface_hub.html) notebook."
]
"cells": [
{
"cell_type": "markdown",
"id": "959300d4",
"metadata": {},
"source": [
"# Hugging Face Local Pipelines\n",
"\n",
"Hugging Face models can be run locally through the `HuggingFacePipeline` class.\n",
"\n",
"The [Hugging Face Model Hub](https://huggingface.co/models) hosts over 120k models, 20k datasets, and 50k demo apps (Spaces), all open source and publicly available, in an online platform where people can easily collaborate and build ML together.\n",
"\n",
"These can be called from LangChain either through this local pipeline wrapper or by calling their hosted inference endpoints through the HuggingFaceHub class. For more information on the hosted pipelines, see the [HuggingFaceHub](huggingface_hub.html) notebook."
]
},
{
"cell_type": "markdown",
"id": "4c1b8450-5eaf-4d34-8341-2d785448a1ff",
"metadata": {
"tags": []
},
"source": [
"To use, you should have the ``transformers`` python [package installed](https://pypi.org/project/transformers/), as well as [pytorch](https://pytorch.org/get-started/locally/). You can also install `xformer` for a more memory-efficient attention implementation."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d772b637-de00-4663-bd77-9bc96d798db2",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"%pip install transformers --quiet"
]
},
{
"cell_type": "markdown",
"id": "91ad075f-71d5-4bc8-ab91-cc0ad5ef16bb",
"metadata": {},
"source": [
"### Model Loading\n",
"\n",
"Models can be loaded by specifying the model parameters using the `from_model_id` method."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "165ae236-962a-4763-8052-c4836d78a5d2",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.llms.huggingface_pipeline import HuggingFacePipeline\n",
"\n",
"hf = HuggingFacePipeline.from_model_id(\n",
" model_id=\"gpt2\",\n",
" task=\"text-generation\",\n",
" pipeline_kwargs={\"max_new_tokens\": 10},\n",
")"
]
},
{
"cell_type": "markdown",
"id": "00104b27-0c15-4a97-b198-4512337ee211",
"metadata": {},
"source": [
"They can also be loaded by passing in an existing `transformers` pipeline directly"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.llms.huggingface_pipeline import HuggingFacePipeline\n",
"from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline\n",
"\n",
"model_id = \"gpt2\"\n",
"tokenizer = AutoTokenizer.from_pretrained(model_id)\n",
"model = AutoModelForCausalLM.from_pretrained(model_id)\n",
"pipe = pipeline(\"text-generation\", model=model, tokenizer=tokenizer, max_new_tokens=10)\n",
"hf = HuggingFacePipeline(pipeline=pipe)"
],
"id": "7f426a4f"
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create Chain\n",
"\n",
"With the model loaded into memory, you can compose it with a prompt to\n",
"form a chain."
],
"id": "60e7ba8d"
},
{
"cell_type": "code",
"execution_count": null,
"id": "3acf0069",
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts import PromptTemplate\n",
"\n",
"template = \"\"\"Question: {question}\n",
"\n",
"Answer: Let's think step by step.\"\"\"\n",
"prompt = PromptTemplate.from_template(template)\n",
"\n",
"chain = prompt | hf\n",
"\n",
"question = \"What is electroencephalography?\"\n",
"\n",
"print(chain.invoke({\"question\": question}))"
]
},
{
"cell_type": "markdown",
"id": "dbbc3a37",
"metadata": {},
"source": [
"### GPU Inference\n",
"\n",
"When running on a machine with GPU, you can specify the `device=n` parameter to put the model on the specified device.\n",
"Defaults to `-1` for CPU inference.\n",
"\n",
"If you have multiple-GPUs and/or the model is too large for a single GPU, you can specify `device_map=\"auto\"`, which requires and uses the [Accelerate](https://huggingface.co/docs/accelerate/index) library to automatically determine how to load the model weights. \n",
"\n",
"*Note*: both `device` and `device_map` should not be specified together and can lead to unexpected behavior."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"gpu_llm = HuggingFacePipeline.from_model_id(\n",
" model_id=\"gpt2\",\n",
" task=\"text-generation\",\n",
" device=0, # replace with device_map=\"auto\" to use the accelerate library.\n",
" pipeline_kwargs={\"max_new_tokens\": 10},\n",
")\n",
"\n",
"gpu_chain = prompt | gpu_llm\n",
"\n",
"question = \"What is electroencephalography?\"\n",
"\n",
"print(gpu_chain.invoke({\"question\": question}))"
],
"id": "703c91c8"
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Batch GPU Inference\n",
"\n",
"If running on a device with GPU, you can also run inference on the GPU in batch mode."
],
"id": "59276016"
},
{
"cell_type": "code",
"execution_count": null,
"id": "097ba62f",
"metadata": {},
"outputs": [],
"source": [
"gpu_llm = HuggingFacePipeline.from_model_id(\n",
" model_id=\"bigscience/bloom-1b7\",\n",
" task=\"text-generation\",\n",
" device=0, # -1 for CPU\n",
" batch_size=2, # adjust as needed based on GPU map and model size.\n",
" model_kwargs={\"temperature\": 0, \"max_length\": 64},\n",
")\n",
"\n",
"gpu_chain = prompt | gpu_llm.bind(stop=[\"\\n\\n\"])\n",
"\n",
"questions = []\n",
"for i in range(4):\n",
" questions.append({\"question\": f\"What is the number {i} in french?\"})\n",
"\n",
"answers = gpu_chain.batch(questions)\n",
"for answer in answers:\n",
" print(answer)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.5"
}
},
{
"cell_type": "markdown",
"id": "4c1b8450-5eaf-4d34-8341-2d785448a1ff",
"metadata": {
"tags": []
},
"source": [
"To use, you should have the ``transformers`` python [package installed](https://pypi.org/project/transformers/), as well as [pytorch](https://pytorch.org/get-started/locally/). You can also install `xformer` for a more memory-efficient attention implementation."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d772b637-de00-4663-bd77-9bc96d798db2",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"%pip install transformers --quiet"
]
},
{
"cell_type": "markdown",
"id": "91ad075f-71d5-4bc8-ab91-cc0ad5ef16bb",
"metadata": {},
"source": [
"### Load the model"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "165ae236-962a-4763-8052-c4836d78a5d2",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.llms import HuggingFacePipeline\n",
"\n",
"llm = HuggingFacePipeline.from_model_id(\n",
" model_id=\"bigscience/bloom-1b7\",\n",
" task=\"text-generation\",\n",
" model_kwargs={\"temperature\": 0, \"max_length\": 64},\n",
")"
]
},
{
"cell_type": "markdown",
"id": "00104b27-0c15-4a97-b198-4512337ee211",
"metadata": {},
"source": [
"### Create Chain\n",
"\n",
"With the model loaded into memory, you can compose it with a prompt to\n",
"form a chain."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3acf0069",
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts import PromptTemplate\n",
"\n",
"template = \"\"\"Question: {question}\n",
"\n",
"Answer: Let's think step by step.\"\"\"\n",
"prompt = PromptTemplate.from_template(template)\n",
"\n",
"chain = prompt | llm\n",
"\n",
"question = \"What is electroencephalography?\"\n",
"\n",
"print(chain.invoke({\"question\": question}))"
]
},
{
"cell_type": "markdown",
"id": "dbbc3a37",
"metadata": {},
"source": [
"### Batch GPU Inference\n",
"\n",
"If running on a device with GPU, you can also run inference on the GPU in batch mode."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "097ba62f",
"metadata": {},
"outputs": [],
"source": [
"gpu_llm = HuggingFacePipeline.from_model_id(\n",
" model_id=\"bigscience/bloom-1b7\",\n",
" task=\"text-generation\",\n",
" device=0, # -1 for CPU\n",
" batch_size=2, # adjust as needed based on GPU map and model size.\n",
" model_kwargs={\"temperature\": 0, \"max_length\": 64},\n",
")\n",
"\n",
"gpu_chain = prompt | gpu_llm.bind(stop=[\"\\n\\n\"])\n",
"\n",
"questions = []\n",
"for i in range(4):\n",
" questions.append({\"question\": f\"What is the number {i} in french?\"})\n",
"\n",
"answers = gpu_chain.batch(questions)\n",
"for answer in answers:\n",
" print(answer)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -288,7 +288,7 @@
"metadata": {},
"source": [
"## Streaming Response\n",
"You can optionally stream the response as it is produced, which is helpful to show interactivity to users for time-consuming generations. See detailed docs on [Streaming](https://python.langchain.com/docs/modules/model_io/models/llms/how_to/streaming_llm) for more information."
"You can optionally stream the response as it is produced, which is helpful to show interactivity to users for time-consuming generations. See detailed docs on [Streaming](https://python.langchain.com/docs/modules/model_io/llms/how_to/streaming_llm) for more information."
]
},
{

View File

@@ -0,0 +1,76 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "91c6a7ef",
"metadata": {},
"source": [
"# Neo4j\n",
"\n",
"[Neo4j](https://en.wikipedia.org/wiki/Neo4j) is an open-source graph database management system, renowned for its efficient management of highly connected data. Unlike traditional databases that store data in tables, Neo4j uses a graph structure with nodes, edges, and properties to represent and store data. This design allows for high-performance queries on complex data relationships.\n",
"\n",
"This notebook goes over how to use `Neo4j` to store chat message history."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d15e3302",
"metadata": {},
"outputs": [],
"source": [
"from langchain.memory import Neo4jChatMessageHistory\n",
"\n",
"history = Neo4jChatMessageHistory(\n",
" url=\"bolt://localhost:7687\",\n",
" username=\"neo4j\",\n",
" password=\"password\",\n",
" session_id=\"session_id_1\"\n",
")\n",
"\n",
"history.add_user_message(\"hi!\")\n",
"\n",
"history.add_ai_message(\"whats up?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "64fc465e",
"metadata": {},
"outputs": [],
"source": [
"history.messages"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8af285f8",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.8"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -2,7 +2,7 @@
All functionality related to `Microsoft Azure` and other `Microsoft` products.
## LLM
## Chat Models
### Azure OpenAI
>[Microsoft Azure](https://en.wikipedia.org/wiki/Microsoft_Azure), often referred to as `Azure` is a cloud computing platform run by `Microsoft`, which offers access, management, and development of applications and services through global data centers. It provides a range of capabilities, including software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS). `Microsoft Azure` supports many programming languages, tools, and frameworks, including Microsoft-specific and third-party software and systems.
@@ -18,16 +18,15 @@ Set the environment variables to get access to the `Azure OpenAI` service.
```python
import os
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://<your-endpoint.openai.azure.com/"
os.environ["OPENAI_API_KEY"] = "your AzureOpenAI key"
os.environ["OPENAI_API_VERSION"] = "2023-05-15"
os.environ["AZURE_OPENAI_ENDPOINT"] = "https://<your-endpoint.openai.azure.com/"
os.environ["AZURE_OPENAI_API_KEY"] = "your AzureOpenAI key"
```
See a [usage example](/docs/integrations/llms/azure_openai_example).
See a [usage example](/docs/integrations/chat/azure_chat_openai)
```python
from langchain.llms import AzureOpenAI
from langchain.chat_models import AzureChatOpenAI
```
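A sketch of instantiating the chat model against your own deployment (the deployment name and API version here are placeholders):
```python
model = AzureChatOpenAI(
    openai_api_version="2023-05-15",
    azure_deployment="your-deployment-name",
)
```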
## Text Embedding Models
@@ -36,16 +35,16 @@ from langchain.llms import AzureOpenAI
See a [usage example](/docs/integrations/text_embedding/azureopenai)
```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.embeddings import AzureOpenAIEmbeddings
```
## Chat Models
## LLMs
### Azure OpenAI
See a [usage example](/docs/integrations/chat/azure_chat_openai)
See a [usage example](/docs/integrations/llms/azure_openai_example).
```python
from langchain.chat_models import AzureChatOpenAI
from langchain.llms import AzureOpenAI
```
## Document loaders

View File

@@ -0,0 +1,85 @@
# Astra DB
This page lists the integrations available with [Astra DB](https://docs.datastax.com/en/astra/home/astra.html) and [Apache Cassandra®](https://cassandra.apache.org/).
### Setup
Install the following Python package:
```bash
pip install "astrapy>=0.5.3"
```
## Astra DB
> DataStax [Astra DB](https://docs.datastax.com/en/astra/home/astra.html) is a serverless vector-capable database built on Cassandra and made conveniently available
> through an easy-to-use JSON API.
### Vector Store
```python
from langchain.vectorstores import AstraDB
vector_store = AstraDB(
embedding=my_embedding,
collection_name="my_store",
api_endpoint="...",
token="...",
)
```
Learn more in the [example notebook](/docs/integrations/vectorstores/astradb).
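Once populated, the store follows the standard vector-store interface; a sketch (assuming documents have been added):
```python
vector_store.add_texts(["I think, therefore I am."], metadatas=[{"author": "descartes"}])
docs = vector_store.similarity_search("philosophy of mind", k=3)
```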
## Apache Cassandra and Astra DB through CQL
> [Cassandra](https://cassandra.apache.org/) is a NoSQL, row-oriented, highly scalable and highly available database.
> Starting with version 5.0, the database ships with [vector search capabilities](https://cassandra.apache.org/doc/trunk/cassandra/vector-search/overview.html).
> DataStax [Astra DB through CQL](https://docs.datastax.com/en/astra-serverless/docs/vector-search/quickstart.html) is a managed serverless database built on Cassandra, offering the same interface and strengths.
These databases use the CQL protocol (Cassandra Query Language).
Hence, a different set of connectors, outlined below, is used.
### Vector Store
```python
from langchain.vectorstores import Cassandra
vector_store = Cassandra(
embedding=my_embedding,
table_name="my_store",
)
```
Learn more in the [example notebook](/docs/integrations/vectorstores/astradb) (scroll down to the CQL-specific section).
### Memory
```python
from langchain.memory import CassandraChatMessageHistory
message_history = CassandraChatMessageHistory(session_id="my-session")
```
Learn more in the [example notebook](/docs/integrations/memory/cassandra_chat_message_history).
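The history object then follows the shared chat-message-history interface; a sketch:
```python
message_history.add_user_message("hi!")
message_history.add_ai_message("whats up?")
message_history.messages  # list of HumanMessage / AIMessage objects
```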
### LLM Cache
```python
from langchain.cache import CassandraCache
langchain.llm_cache = CassandraCache()
```
Learn more in the [example notebook](/docs/integrations/llms/llm_caching) (scroll to the Cassandra section).
### Semantic LLM Cache
```python
from langchain.cache import CassandraSemanticCache
cassSemanticCache = CassandraSemanticCache(
embedding=my_embedding,
table_name="my_store",
)
```
Learn more in the [example notebook](/docs/integrations/llms/llm_caching) (scroll to the appropriate section).

View File

@@ -1,35 +0,0 @@
# Cassandra
>[Apache Cassandra®](https://cassandra.apache.org/) is a free and open-source, distributed, wide-column
> store, NoSQL database management system designed to handle large amounts of data across many commodity servers,
> providing high availability with no single point of failure. Cassandra offers support for clusters spanning
> multiple datacenters, with asynchronous masterless replication allowing low latency operations for all clients.
> Cassandra was designed to implement a combination of _Amazon's Dynamo_ distributed storage and replication
> techniques combined with _Google's Bigtable_ data and storage engine model.
## Installation and Setup
```bash
pip install cassandra-driver
pip install cassio
```
## Vector Store
See a [usage example](/docs/integrations/vectorstores/cassandra).
```python
from langchain.vectorstores import Cassandra
```
## Memory
See a [usage example](/docs/integrations/memory/cassandra_chat_message_history).
```python
from langchain.memory import CassandraChatMessageHistory
```

View File

@@ -1,22 +1,47 @@
# Fireworks
This page covers how to use the Fireworks models within Langchain.
This page covers how to use [Fireworks](https://app.fireworks.ai/) models within
LangChain.
## Installation and Setup
## Installation and setup
- To use the Fireworks model, you need to have a Fireworks API key. To generate one, sign up at [app.fireworks.ai](https://app.fireworks.ai).
- Install the Fireworks client library.
```
pip install fireworks-ai
```
- Get a Fireworks API key by signing up at [app.fireworks.ai](https://app.fireworks.ai).
- Authenticate by setting the FIREWORKS_API_KEY environment variable.
## LLM
## Authentication
Fireworks integrates with Langchain through the LLM module, which allows for standardized usage of any models deployed on the Fireworks models.
There are two ways to authenticate using your Fireworks API key:
In this example, we'll work the llama-v2-13b-chat model.
1. Setting the `FIREWORKS_API_KEY` environment variable.
```python
os.environ["FIREWORKS_API_KEY"] = "<KEY>"
```
2. Setting the `fireworks_api_key` field in the Fireworks LLM module.
```python
llm = Fireworks(fireworks_api_key="<KEY>")
```
## Using the Fireworks LLM module
Fireworks integrates with LangChain through the LLM module. In this example, we
will work with the llama-v2-13b-chat model.
```python
from langchain.llms.fireworks import Fireworks
llm = Fireworks(model="fireworks-llama-v2-13b-chat", max_tokens=256, temperature=0.4)
llm = Fireworks(
fireworks_api_key="<KEY>",
model="accounts/fireworks/models/llama-v2-13b-chat",
max_tokens=256)
llm("Name 3 sports.")
```

View File

@@ -11,7 +11,7 @@ Get a [Minimax group id](https://api.minimax.chat/user-center/basic-information)
## LLM
There exists a Minimax LLM wrapper, which you can access with
See a [usage example](/docs/modules/model_io/models/llms/integrations/minimax).
See a [usage example](/docs/modules/model_io/llms/integrations/minimax).
```python
from langchain.llms import Minimax
@@ -19,7 +19,7 @@ from langchain.llms import Minimax
## Chat Models
See a [usage example](/docs/modules/model_io/models/chat/integrations/minimax)
See a [usage example](/docs/modules/model_io/chat/integrations/minimax)
```python
from langchain.chat_models import MiniMaxChat

View File

@@ -46,6 +46,6 @@ eng = sqlalchemy.create_engine(conn_str)
set_llm_cache(SQLAlchemyCache(engine=eng))
```
From here, see the [LLM Caching](/docs/modules/model_io/models/llms/how_to/llm_caching) documentation on how to use.
From here, see the [LLM Caching](/docs/modules/model_io/llms/how_to/llm_caching) documentation on how to use.

File diff suppressed because one or more lines are too long

View File

@@ -66,25 +66,26 @@
"id": "fa339ca0-f478-440c-ba80-0e5f41a19ce1",
"metadata": {},
"source": [
"By default, all files with these mime-type can be converted to `Document`.\n",
"- text/text\n",
"- text/plain\n",
"- text/html\n",
"- text/csv\n",
"- text/markdown\n",
"- image/png\n",
"- image/jpeg\n",
"- application/epub+zip\n",
"- application/pdf\n",
"- application/rtf\n",
"- application/vnd.google-apps.document (GDoc)\n",
"- application/vnd.google-apps.presentation (GSlide)\n",
"- application/vnd.google-apps.spreadsheet (GSheet)\n",
"- application/vnd.google.colaboratory (Notebook colab)\n",
"- application/vnd.openxmlformats-officedocument.presentationml.presentation (PPTX)\n",
"- application/vnd.openxmlformats-officedocument.wordprocessingml.document (DOCX)\n",
"By default, all files with these MIME types can be converted to `Document`.\n",
"\n",
"It's possible to update or customize this. See the documentation of `GDriveRetriever`.\n",
"- `text/text`\n",
"- `text/plain`\n",
"- `text/html`\n",
"- `text/csv`\n",
"- `text/markdown`\n",
"- `image/png`\n",
"- `image/jpeg`\n",
"- `application/epub+zip`\n",
"- `application/pdf`\n",
"- `application/rtf`\n",
"- `application/vnd.google-apps.document` (GDoc)\n",
"- `application/vnd.google-apps.presentation` (GSlide)\n",
"- `application/vnd.google-apps.spreadsheet` (GSheet)\n",
"- `application/vnd.google.colaboratory` (Notebook colab)\n",
"- `application/vnd.openxmlformats-officedocument.presentationml.presentation` (PPTX)\n",
"- `application/vnd.openxmlformats-officedocument.wordprocessingml.document` (DOCX)\n",
"\n",
"It's possible to update or customize this. See the documentation of `GoogleDriveRetriever`.\n",
"\n",
"But, the corresponding packages must be installed."
]
@@ -121,16 +122,17 @@
"metadata": {},
"source": [
"You can customize the criteria to select the files. A set of predefined filter are proposed:\n",
"| template | description |\n",
"| -------------------------------------- | --------------------------------------------------------------------- |\n",
"| gdrive-all-in-folder | Return all compatible files from a `folder_id` |\n",
"| gdrive-query | Search `query` in all drives |\n",
"| gdrive-by-name | Search file with name `query`) |\n",
"| gdrive-query-in-folder | Search `query` in `folder_id` (and sub-folders in `_recursive=true`) |\n",
"| gdrive-mime-type | Search a specific `mime_type` |\n",
"| gdrive-mime-type-in-folder | Search a specific `mime_type` in `folder_id` |\n",
"| gdrive-query-with-mime-type | Search `query` with a specific `mime_type` |\n",
"| gdrive-query-with-mime-type-and-folder | Search `query` with a specific `mime_type` and in `folder_id` |"
"\n",
"| Template | Description |\n",
"| -------------------------------------- | --------------------------------------------------------------------- |\n",
"| `gdrive-all-in-folder` | Return all compatible files from a `folder_id` |\n",
"| `gdrive-query` | Search `query` in all drives |\n",
"| `gdrive-by-name` | Search file with name `query` |\n",
"| `gdrive-query-in-folder` | Search `query` in `folder_id` (and sub-folders in `_recursive=true`) |\n",
"| `gdrive-mime-type` | Search a specific `mime_type` |\n",
"| `gdrive-mime-type-in-folder` | Search a specific `mime_type` in `folder_id` |\n",
"| `gdrive-query-with-mime-type` | Search `query` with a specific `mime_type` |\n",
"| `gdrive-query-with-mime-type-and-folder` | Search `query` with a specific `mime_type` and in `folder_id` |"
]
},
{

View File

@@ -5,9 +5,100 @@
"id": "c3852491",
"metadata": {},
"source": [
"# AzureOpenAI\n",
"# Azure OpenAI\n",
"\n",
"Let's load the OpenAI Embedding class with environment variables set to indicate to use Azure endpoints."
"Let's load the Azure OpenAI Embedding class with environment variables set to indicate to use Azure endpoints."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "8a6ed30d-806f-4800-b5fd-d04126be9060",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"AZURE_OPENAI_API_KEY\"] = \"...\"\n",
"os.environ[\"AZURE_OPENAI_ENDPOINT\"] = \"https://<your-endpoint>.openai.azure.com/\""
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "20179bc7-3f71-4909-be12-d38bce009b18",
"metadata": {},
"outputs": [],
"source": [
"from langchain.embeddings import AzureOpenAIEmbeddings\n",
"\n",
"embeddings = AzureOpenAIEmbeddings(\n",
" azure_deployment=\"<your-embeddings-deployment-name>\",\n",
" openai_api_version=\"2023-05-15\",\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "f8cb9dca-738b-450f-9986-5c3efd3c6eb3",
"metadata": {},
"outputs": [],
"source": [
"text = \"this is a test document\""
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "0fae0295-b117-4a5a-8b98-500c79306551",
"metadata": {},
"outputs": [],
"source": [
"query_result = embeddings.embed_query(text)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "65a01ddd-0bbf-444f-a87f-93af25ef902c",
"metadata": {},
"outputs": [],
"source": [
"doc_result = embeddings.embed_documents([text])"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "45771052-68ca-4e03-9c4f-a0c7796d9442",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[-0.012222584727053133,\n",
" 0.0072103982392216145,\n",
" -0.014818063280923775,\n",
" -0.026444746872933557,\n",
" -0.0034330499700826883]"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"doc_result[0][:5]"
]
},
{
"cell_type": "markdown",
"id": "e66ec1f2-6768-4ee5-84bf-a2d76adc20c8",
"metadata": {},
"source": [
"## [Legacy] When using `openai<1`"
]
},
{
@@ -79,9 +170,9 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": "poetry-venv",
"language": "python",
"name": "python3"
"name": "poetry-venv"
},
"language_info": {
"codemirror_mode": {

View File

@@ -0,0 +1,154 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Qdrant FastEmbed\n",
"\n",
"[FastEmbed](https://qdrant.github.io/fastembed/) is a lightweight, fast, Python library built for embedding generation. \n",
"\n",
"- Quantized model weights\n",
"- ONNX Runtime, no PyTorch dependency\n",
"- CPU-first design\n",
"- Data-parallelism for encoding of large datasets."
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "2a773d8d",
"metadata": {},
"source": [
"## Dependencies\n",
"\n",
"To use FastEmbed with LangChain, install the `fastembed` Python package."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "91ea14ce-831d-409a-a88f-30353acdabd1",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"%pip install fastembed"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "426f1156",
"metadata": {},
"source": [
"## Imports"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "3f5dc9d7-65e3-4b5b-9086-3327d016cfe0",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.embeddings.fastembed import FastEmbedEmbeddings"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiating FastEmbed\n",
" \n",
"### Parameters\n",
"- `model_name: str` (default: \"BAAI/bge-small-en-v1.5\")\n",
" > Name of the FastEmbedding model to use. You can find the list of supported models [here](https://qdrant.github.io/fastembed/examples/Supported_Models/).\n",
"\n",
"- `max_length: int` (default: 512)\n",
" > The maximum number of tokens. Unknown behavior for values > 512.\n",
"\n",
"- `cache_dir: Optional[str]`\n",
" > The path to the cache directory. Defaults to `local_cache` in the parent directory.\n",
"\n",
"- `threads: Optional[int]`\n",
" > The number of threads a single onnxruntime session can use. Defaults to None.\n",
"\n",
"- `doc_embed_type: Literal[\"default\", \"passage\"]` (default: \"default\")\n",
" > \"default\": Uses FastEmbed's default embedding method.\n",
" \n",
" > \"passage\": Prefixes the text with \"passage\" before embedding."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6fb585dd",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"embeddings = FastEmbedEmbeddings()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Usage\n",
"\n",
"### Generating document embeddings"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"document_embeddings = embeddings.embed_documents([\"This is a document\", \"This is some other document\"])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Generating query embeddings"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"query_embeddings = embeddings.embed_query(\"This is a query\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

File diff suppressed because one or more lines are too long

View File

@@ -34,7 +34,7 @@
"source": [
"## Using `ZERO_SHOT_REACT_DESCRIPTION`\n",
"\n",
"This shows how to initialize the agent using the `ZERO_SHOT_REACT_DESCRIPTION` agent type. Note that this is an alternative to the above."
"This shows how to initialize the agent using the `ZERO_SHOT_REACT_DESCRIPTION` agent type."
]
},
{

View File

@@ -14,7 +14,7 @@
"E2B Data Analysis sandbox allows you to:\n",
"- Run Python code\n",
"- Generate charts via matplotlib\n",
"- Install Python packages dynamically durint runtime\n",
"- Install Python packages dynamically during runtime\n",
"- Install system packages dynamically during runtime\n",
"- Run shell commands\n",
"- Upload and download files\n",

View File

@@ -0,0 +1,204 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Memorize\n",
"\n",
"Fine-tuning LLM itself to memorize information using unsupervised learning.\n",
"\n",
"This tool requires LLMs that support fine-tuning. Currently, only `langchain.llms import GradientLLM` is supported."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Imports"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from langchain.llms import GradientLLM\n",
"from langchain.chains import LLMChain\n",
"from langchain.agents import AgentExecutor, AgentType, initialize_agent, load_tools\n",
"from langchain.memory import ConversationBufferMemory"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Set the Environment API Key\n",
"Make sure to get your API key from Gradient AI. You are given $10 in free credits to test and fine-tune different models."
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [],
"source": [
"from getpass import getpass\n",
"\n",
"\n",
"if not os.environ.get(\"GRADIENT_ACCESS_TOKEN\", None):\n",
" # Access token under https://auth.gradient.ai/select-workspace\n",
" os.environ[\"GRADIENT_ACCESS_TOKEN\"] = getpass(\"gradient.ai access token:\")\n",
"if not os.environ.get(\"GRADIENT_WORKSPACE_ID\", None):\n",
" # `ID` listed in `$ gradient workspace list`\n",
" # also displayed after login at at https://auth.gradient.ai/select-workspace\n",
" os.environ[\"GRADIENT_WORKSPACE_ID\"] = getpass(\"gradient.ai workspace id:\")\n",
"if not os.environ.get(\"GRADIENT_MODEL_ADAPTER_ID\", None):\n",
" # `ID` listed in `$ gradient model list --workspace-id \"$GRADIENT_WORKSPACE_ID\"`\n",
" os.environ[\"GRADIENT_MODEL_ID\"] = getpass(\"gradient.ai model id:\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optional: Validate your Environment variables ```GRADIENT_ACCESS_TOKEN``` and ```GRADIENT_WORKSPACE_ID``` to get currently deployed models."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create the `GradientLLM` instance\n",
"You can specify different parameters such as the model name, max tokens generated, temperature, etc."
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [],
"source": [
"llm = GradientLLM(\n",
" model_id=os.environ[\"GRADIENT_MODEL_ID\"],\n",
" # # optional: set new credentials, they default to environment variables\n",
" # gradient_workspace_id=os.environ[\"GRADIENT_WORKSPACE_ID\"],\n",
" # gradient_access_token=os.environ[\"GRADIENT_ACCESS_TOKEN\"],\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load tools"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [],
"source": [
"tools = load_tools([\"memorize\"], llm=llm)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initiate the Agent"
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {},
"outputs": [],
"source": [
"agent = initialize_agent(\n",
" tools,\n",
" llm,\n",
" agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,\n",
" verbose=True,\n",
" # memory=ConversationBufferMemory(memory_key=\"chat_history\", return_messages=True),\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Run the agent\n",
"Ask the agent to memorize a piece of text."
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mI should memorize this fact.\n",
"Action: Memorize\n",
"Action Input: Zara T\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mTrain complete. Loss: 1.6853971333333335\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mI now know the final answer.\n",
"Final Answer: Zara Tubikova set a world\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'Zara Tubikova set a world'"
]
},
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent.run(\n",
" \"Please remember the fact in detail:\\nWith astonishing dexterity, Zara Tubikova set a world record by solving a 4x4 Rubik's Cube variation blindfolded in under 20 seconds, employing only their feet.\"\n",
")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.6"
},
"vscode": {
"interpreter": {
"hash": "a0a0263b650d907a3bfe41c0f8d6a63a071b884df3cfdc1579f00cdc1aed6b03"
}
}
},
"nbformat": 4,
"nbformat_minor": 4
}

View File

@@ -0,0 +1,758 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "d2d6ca14-fb7e-4172-9aa0-a3119a064b96",
"metadata": {},
"source": [
"# Astra DB\n",
"\n",
"This page provides a quickstart for using [Astra DB](https://docs.datastax.com/en/astra/home/astra.html) and [Apache Cassandra®](https://cassandra.apache.org/) as a Vector Store.\n",
"\n",
"_Note: in addition to access to the database, an OpenAI API Key is required to run the full example._"
]
},
{
"cell_type": "markdown",
"id": "bb9be7ce-8c70-4d46-9f11-71c42a36e928",
"metadata": {},
"source": [
"### Setup and general dependencies"
]
},
{
"cell_type": "markdown",
"id": "dbe7c156-0413-47e3-9237-4769c4248869",
"metadata": {},
"source": [
"Use of the integration requires the following Python package."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8d00fcf4-9798-4289-9214-d9734690adfc",
"metadata": {},
"outputs": [],
"source": [
"!pip install --quiet \"astrapy>=0.5.3\""
]
},
{
"cell_type": "markdown",
"id": "2453d83a-bc8f-41e1-a692-befe4dd90156",
"metadata": {},
"source": [
"_Note: depending on your LangChain setup, you may need to install/upgrade other dependencies needed for this demo_\n",
"_(specifically, recent versions of `datasets` `openai` `pypdf` and `tiktoken` are required)._"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b06619af-fea2-4863-8149-7f239a8c9c82",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from getpass import getpass\n",
"\n",
"from datasets import (\n",
" load_dataset,\n",
") # if not present yet, run: pip install \"datasets==2.14.6\"\n",
"\n",
"from langchain.schema import Document\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.document_loaders import PyPDFLoader\n",
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain.schema.runnable import RunnablePassthrough\n",
"from langchain.schema.output_parser import StrOutputParser"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1983f1da-0ae7-4a9b-bf4c-4ade328f7a3a",
"metadata": {},
"outputs": [],
"source": [
"os.environ[\"OPENAI_API_KEY\"] = getpass(\"OPENAI_API_KEY = \")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c656df06-e938-4bc5-b570-440b8b7a0189",
"metadata": {},
"outputs": [],
"source": [
"embe = OpenAIEmbeddings()"
]
},
{
"cell_type": "markdown",
"id": "dd8caa76-bc41-429e-a93b-989ba13aff01",
"metadata": {},
"source": [
"_Keep reading to connect with Astra DB. For usage with Apache Cassandra and Astra DB through CQL, scroll to the section below._"
]
},
{
"cell_type": "markdown",
"id": "22866f09-e10d-4f05-a24b-b9420129462e",
"metadata": {},
"source": [
"## Astra DB"
]
},
{
"cell_type": "markdown",
"id": "5fba47cc-3533-42fc-84b7-9dc14cd68b2b",
"metadata": {},
"source": [
"DataStax [Astra DB](https://docs.datastax.com/en/astra/home/astra.html) is a serverless vector-capable database built on Cassandra and made conveniently available through an easy-to-use JSON API."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0b32730d-176e-414c-9d91-fd3644c54211",
"metadata": {},
"outputs": [],
"source": [
"from langchain.vectorstores import AstraDB"
]
},
{
"cell_type": "markdown",
"id": "68f61b01-3e09-47c1-9d67-5d6915c86626",
"metadata": {},
"source": [
"### Astra DB connection parameters\n",
"\n",
"- the API Endpoint looks like `https://01234567-89ab-cdef-0123-456789abcdef-us-east1.apps.astra.datastax.com`\n",
"- the Token looks like `AstraCS:6gBhNmsk135....`"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d78af8ed-cff9-4f14-aa5d-016f99ab547c",
"metadata": {},
"outputs": [],
"source": [
"ASTRA_DB_API_ENDPOINT = input(\"ASTRA_DB_API_ENDPOINT = \")\n",
"ASTRA_DB_TOKEN = getpass(\"ASTRA_DB_TOKEN = \")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8b77553b-8bb5-4949-b87b-8c6abac56a26",
"metadata": {},
"outputs": [],
"source": [
"vstore = AstraDB(\n",
" embedding=embe,\n",
" collection_name=\"astra_vector_demo\",\n",
" api_endpoint=ASTRA_DB_API_ENDPOINT,\n",
" token=ASTRA_DB_TOKEN,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "9a348678-b2f6-46ca-9a0d-2eb4cc6b66b1",
"metadata": {},
"source": [
"### Load a dataset"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3a1f532f-ad63-4256-9730-a183841bd8e9",
"metadata": {},
"outputs": [],
"source": [
"philo_dataset = load_dataset(\"datastax/philosopher-quotes\")[\"train\"]\n",
"\n",
"docs = []\n",
"for entry in philo_dataset:\n",
" metadata = {\"author\": entry[\"author\"]}\n",
" doc = Document(page_content=entry[\"quote\"], metadata=metadata)\n",
" docs.append(doc)\n",
"\n",
"inserted_ids = vstore.add_documents(docs)\n",
"print(f\"\\nInserted {len(inserted_ids)} documents.\")"
]
},
{
"cell_type": "markdown",
"id": "084d8802-ab39-4262-9a87-42eafb746f92",
"metadata": {},
"source": [
"Add some more entries, this time with `add_texts`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b6b157f5-eb31-4907-a78e-2e2b06893936",
"metadata": {},
"outputs": [],
"source": [
"texts = [\"I think, therefore I am.\", \"To the things themselves!\"]\n",
"metadatas = [{\"author\": \"descartes\"}, {\"author\": \"husserl\"}]\n",
"ids = [\"desc_01\", \"huss_xy\"]\n",
"\n",
"inserted_ids_2 = vstore.add_texts(texts=texts, metadatas=metadatas, ids=ids)\n",
"print(f\"\\nInserted {len(inserted_ids_2)} documents.\")"
]
},
{
"cell_type": "markdown",
"id": "c031760a-1fc5-4855-adf2-02ed52fe2181",
"metadata": {},
"source": [
"### Run simple searches"
]
},
{
"cell_type": "markdown",
"id": "02a77d8e-1aae-4054-8805-01c77947c49f",
"metadata": {},
"source": [
"This section demonstrates metadata filtering and getting the similarity scores back:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1761806a-1afd-4491-867c-25a80d92b9fe",
"metadata": {},
"outputs": [],
"source": [
"results = vstore.similarity_search(\"Our life is what we make of it\", k=3)\n",
"for res in results:\n",
" print(f\"* {res.page_content} [{res.metadata}]\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "eebc4f7c-f61a-438e-b3c8-17e6888d8a0b",
"metadata": {},
"outputs": [],
"source": [
"results_filtered = vstore.similarity_search(\n",
" \"Our life is what we make of it\",\n",
" k=3,\n",
" filter={\"author\": \"plato\"},\n",
")\n",
"for res in results_filtered:\n",
" print(f\"* {res.page_content} [{res.metadata}]\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "11bbfe64-c0cd-40c6-866a-a5786538450e",
"metadata": {},
"outputs": [],
"source": [
"results = vstore.similarity_search_with_score(\"Our life is what we make of it\", k=3)\n",
"for res, score in results:\n",
" print(f\"* [SIM={score:3f}] {res.page_content} [{res.metadata}]\")"
]
},
{
"cell_type": "markdown",
"id": "b14ea558-bfbe-41ce-807e-d70670060ada",
"metadata": {},
"source": [
"### MMR (Maximal-marginal-relevance) search"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "76381ce8-780a-4e3b-97b1-056d6782d7d5",
"metadata": {},
"outputs": [],
"source": [
"results = vstore.max_marginal_relevance_search(\n",
" \"Our life is what we make of it\",\n",
" k=3,\n",
" filter={\"author\": \"aristotle\"},\n",
")\n",
"for res in results:\n",
" print(f\"* {res.page_content} [{res.metadata}]\")"
]
},
{
"cell_type": "markdown",
"id": "1cc86edd-692b-4495-906c-ccfd13b03c23",
"metadata": {},
"source": [
"### Deleting stored documents"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "38a70ec4-b522-4d32-9ead-c642864fca37",
"metadata": {},
"outputs": [],
"source": [
"delete_1 = vstore.delete(inserted_ids[:3])\n",
"print(f\"all_succeed={delete_1}\") # True, all documents deleted"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d4cf49ed-9d29-4ed9-bdab-51a308c41b8e",
"metadata": {},
"outputs": [],
"source": [
"delete_2 = vstore.delete(inserted_ids[2:5])\n",
"print(f\"some_succeeds={delete_2}\") # True, though some IDs were gone already"
]
},
{
"cell_type": "markdown",
"id": "847181ba-77d1-4a17-b7f9-9e2c3d8efd13",
"metadata": {},
"source": [
"### A minimal RAG chain"
]
},
{
"cell_type": "markdown",
"id": "cd64b844-846f-43c5-a7dd-c26b9ed417d0",
"metadata": {},
"source": [
"The next cells will implement a simple RAG pipeline:\n",
"- download a sample PDF file and load it onto the store;\n",
"- create a RAG chain with LCEL (LangChain Expression Language), with the vector store at its heart;\n",
"- run the question-answering chain."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5cbc4dba-0d5e-4038-8fc5-de6cadd1c2a9",
"metadata": {},
"outputs": [],
"source": [
"!curl -L \\\n",
" \"https://github.com/awesome-astra/datasets/blob/main/demo-resources/what-is-philosophy/what-is-philosophy.pdf?raw=true\" \\\n",
" -o \"what-is-philosophy.pdf\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "459385be-5e9c-47ff-ba53-2b7ae6166b09",
"metadata": {},
"outputs": [],
"source": [
"pdf_loader = PyPDFLoader(\"what-is-philosophy.pdf\")\n",
"splitter = RecursiveCharacterTextSplitter(chunk_size=512, chunk_overlap=64)\n",
"docs_from_pdf = pdf_loader.load_and_split(text_splitter=splitter)\n",
"\n",
"print(f\"Documents from PDF: {len(docs_from_pdf)}.\")\n",
"inserted_ids_from_pdf = vstore.add_documents(docs_from_pdf)\n",
"print(f\"Inserted {len(inserted_ids_from_pdf)} documents.\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5010a66c-4298-4e32-82b5-2da0d36a5c70",
"metadata": {},
"outputs": [],
"source": [
"retriever = vstore.as_retriever(search_kwargs={\"k\": 3})\n",
"\n",
"philo_template = \"\"\"\n",
"You are a philosopher that draws inspiration from great thinkers of the past\n",
"to craft well-thought answers to user questions. Use the provided context as the basis\n",
"for your answers and do not make up new reasoning paths - just mix-and-match what you are given.\n",
"Your answers must be concise and to the point, and refrain from answering about other topics than philosophy.\n",
"\n",
"CONTEXT:\n",
"{context}\n",
"\n",
"QUESTION: {question}\n",
"\n",
"YOUR ANSWER:\"\"\"\n",
"\n",
"philo_prompt = ChatPromptTemplate.from_template(philo_template)\n",
"\n",
"llm = ChatOpenAI()\n",
"\n",
"chain = (\n",
" {\"context\": retriever, \"question\": RunnablePassthrough()}\n",
" | philo_prompt\n",
" | llm\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "fcbc1296-6c7c-478b-b55b-533ba4e54ddb",
"metadata": {},
"outputs": [],
"source": [
"chain.invoke(\"How does Russel elaborate on Peirce's idea of the security blanket?\")"
]
},
{
"cell_type": "markdown",
"id": "869ab448-a029-4692-aefc-26b85513314d",
"metadata": {},
"source": [
"For more, check out a complete RAG template using Astra DB [here](https://github.com/langchain-ai/langchain/tree/master/templates/rag-astradb)."
]
},
{
"cell_type": "markdown",
"id": "177610c7-50d0-4b7b-8634-b03338054c8e",
"metadata": {},
"source": [
"### Cleanup"
]
},
{
"cell_type": "markdown",
"id": "0da4d19f-9878-4d3d-82c9-09cafca20322",
"metadata": {},
"source": [
"If you want to completely delete the collection from your Astra DB instance, run this.\n",
"\n",
"_(You will lose the data you stored in it.)_"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "fd405a13-6f71-46fa-87e6-167238e9c25e",
"metadata": {},
"outputs": [],
"source": [
"vstore.delete_collection()"
]
},
{
"cell_type": "markdown",
"id": "94ebaab1-7cbf-4144-a147-7b0e32c43069",
"metadata": {},
"source": [
"## Apache Cassandra and Astra DB through CQL"
]
},
{
"cell_type": "markdown",
"id": "bc3931b4-211d-4f84-bcc0-51c127e3027c",
"metadata": {},
"source": [
"[Cassandra](https://cassandra.apache.org/) is a NoSQL, row-oriented, highly scalable and highly available database.Starting with version 5.0, the database ships with [vector search capabilities](https://cassandra.apache.org/doc/trunk/cassandra/vector-search/overview.html).\n",
"\n",
"DataStax [Astra DB through CQL](https://docs.datastax.com/en/astra-serverless/docs/vector-search/quickstart.html) is a managed serverless database built on Cassandra, offering the same interface and strengths."
]
},
{
"cell_type": "markdown",
"id": "a0055fbf-448d-4e46-9c40-28d43df25ca3",
"metadata": {},
"source": [
"#### What sets this case apart from \"Astra DB\" above?\n",
"\n",
"Thanks to LangChain having a standardized `VectorStore` interface, most of the \"Astra DB\" section above applies to this case as well. However, this time the database uses the CQL protocol, which means you'll use a _different_ class this time and instantiate it in another way.\n",
"\n",
"The cells below show how you should get your `vstore` object in this case and how you can clean up the database resources at the end: for the rest, i.e. the actual usage of the vector store, you will be able to run the very code that was shown above.\n",
"\n",
"In other words, running this demo in full with Cassandra or Astra DB through CQL means:\n",
"\n",
"- **initialization as shown below**\n",
"- \"Load a dataset\", _see above section_\n",
"- \"Run simple searches\", _see above section_\n",
"- \"MMR search\", _see above section_\n",
"- \"Deleting stored documents\", _see above section_\n",
"- \"A minimal RAG chain\", _see above section_\n",
"- **cleanup as shown below**"
]
},
{
"cell_type": "markdown",
"id": "23d12be2-745f-4e72-a82c-334a887bc7cd",
"metadata": {},
"source": [
"### Initialization"
]
},
{
"cell_type": "markdown",
"id": "e3212542-79be-423e-8e1f-b8d725e3cda8",
"metadata": {},
"source": [
"The class to use is the following:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "941af73e-a090-4fba-b23c-595757d470eb",
"metadata": {},
"outputs": [],
"source": [
"from langchain.vectorstores import Cassandra"
]
},
{
"cell_type": "markdown",
"id": "414d1e72-f7c9-4b6d-bf6f-16075712c7e3",
"metadata": {},
"source": [
"Now, depending on whether you connect to a Cassandra cluster or to Astra DB through CQL, you will provide different parameters when creating the vector store object."
]
},
{
"cell_type": "markdown",
"id": "48ecca56-71a4-4a91-b198-29384c44ce27",
"metadata": {},
"source": [
"#### Initialization (Cassandra cluster)"
]
},
{
"cell_type": "markdown",
"id": "55ebe958-5654-43e0-9aed-d607ffd3fa48",
"metadata": {},
"source": [
"In this case, you first need to create a `cassandra.cluster.Session` object, as described in the [Cassandra driver documentation](https://docs.datastax.com/en/developer/python-driver/latest/api/cassandra/cluster/#module-cassandra.cluster). The details vary (e.g. with network settings and authentication), but this might be something like:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4642dafb-a065-4063-b58c-3d276f5ad07e",
"metadata": {},
"outputs": [],
"source": [
"from cassandra.cluster import Cluster\n",
"\n",
"cluster = Cluster([\"127.0.0.1\"])\n",
"session = cluster.connect()"
]
},
{
"cell_type": "markdown",
"id": "624c93bf-fb46-4350-bcfa-09ca09dc068f",
"metadata": {},
"source": [
"You can now set the session, along with your desired keyspace name, as a global CassIO parameter:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "92a4ab28-1c4f-4dad-9671-d47e0b1dde7b",
"metadata": {},
"outputs": [],
"source": [
"import cassio\n",
"\n",
"CASSANDRA_KEYSPACE = input(\"CASSANDRA_KEYSPACE = \")\n",
"\n",
"cassio.init(session=session, keyspace=CASSANDRA_KEYSPACE)"
]
},
{
"cell_type": "markdown",
"id": "3b87a824-36f1-45b4-b54c-efec2a2de216",
"metadata": {},
"source": [
"Now you can create the vector store:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "853a2a88-a565-4e24-8789-d78c213954a6",
"metadata": {},
"outputs": [],
"source": [
"vstore = Cassandra(\n",
" embedding=embe,\n",
" table_name=\"cassandra_vector_demo\",\n",
" # session=None, keyspace=None # Uncomment on older versions of LangChain\n",
")"
]
},
{
"cell_type": "markdown",
"id": "768ddf7a-0c3e-4134-ad38-25ac53c3da7a",
"metadata": {},
"source": [
"#### Initialization (Astra DB through CQL)"
]
},
{
"cell_type": "markdown",
"id": "4ed4269a-b7e7-4503-9e66-5a11335c7681",
"metadata": {},
"source": [
"In this case you initialize CassIO with the following connection parameters:\n",
"\n",
"- the Database ID, e.g. `01234567-89ab-cdef-0123-456789abcdef`\n",
"- the Token, e.g. `AstraCS:6gBhNmsk135....` (it must be a \"Database Administrator\" token)\n",
"- Optionally a Keyspace name (if omitted, the default one for the database will be used)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5fa6bd74-d4b2-45c5-9757-96dddc6242fb",
"metadata": {},
"outputs": [],
"source": [
"ASTRA_DB_ID = input(\"ASTRA_DB_ID = \")\n",
"ASTRA_DB_TOKEN = getpass(\"ASTRA_DB_TOKEN = \")\n",
"\n",
"desired_keyspace = input(\"ASTRA_DB_KEYSPACE (optional, can be left empty) = \")\n",
"if desired_keyspace:\n",
" ASTRA_DB_KEYSPACE = desired_keyspace\n",
"else:\n",
" ASTRA_DB_KEYSPACE = None"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "add6e585-17ff-452e-8ef6-7e485ead0b06",
"metadata": {},
"outputs": [],
"source": [
"import cassio\n",
"\n",
"cassio.init(\n",
" database_id=ASTRA_DB_ID,\n",
" token=ASTRA_DB_TOKEN,\n",
" keyspace=ASTRA_DB_KEYSPACE,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "b305823c-bc98-4f3d-aabb-d7eb663ea421",
"metadata": {},
"source": [
"Now you can create the vector store:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f45f3038-9d59-41cc-8b43-774c6aa80295",
"metadata": {},
"outputs": [],
"source": [
"vstore = Cassandra(\n",
" embedding=embe,\n",
" table_name=\"cassandra_vector_demo\",\n",
" # session=None, keyspace=None # Uncomment on older versions of LangChain\n",
")"
]
},
{
"cell_type": "markdown",
"id": "39284918-cf8a-49bb-a2d3-aef285bb2ffa",
"metadata": {},
"source": [
"### Usage of the vector store"
]
},
{
"cell_type": "markdown",
"id": "3cc1aead-d6ec-48a3-affe-1d0cffa955a9",
"metadata": {},
"source": [
"_See the sections \"Load a dataset\" through \"A minimal RAG chain\" above._\n",
"\n",
"Speaking of the latter, you can check out a full RAG template for Astra DB through CQL [here](https://github.com/langchain-ai/langchain/tree/master/templates/cassandra-entomology-rag)."
]
},
{
"cell_type": "markdown",
"id": "096397d8-6622-4685-9f9d-7e238beca467",
"metadata": {},
"source": [
"### Cleanup"
]
},
{
"cell_type": "markdown",
"id": "cc1e74f9-5500-41aa-836f-235b1ed5f20c",
"metadata": {},
"source": [
"the following essentially retrieves the `Session` object from CassIO and runs a CQL `DROP TABLE` statement with it:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b5b82c33-0e77-4a37-852c-8d50edbdd991",
"metadata": {},
"outputs": [],
"source": [
"cassio.config.resolve_session().execute(\n",
" f\"DROP TABLE {cassio.config.resolve_keyspace()}.cassandra_vector_demo;\"\n",
")"
]
},
{
"cell_type": "markdown",
"id": "c10ece4d-ae06-42ab-baf4-4d0ac2051743",
"metadata": {},
"source": [
"### Learn more"
]
},
{
"cell_type": "markdown",
"id": "51ea8b69-7e15-458f-85aa-9fa199f95f9c",
"metadata": {},
"source": [
"For more information, extended quickstarts and additional usage examples, please visit the [CassIO documentation](https://cassio.org/frameworks/langchain/about/) for more on using the LangChain `Cassandra` vector store."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -0,0 +1,165 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Baidu Cloud ElasticSearch VectorSearch\n",
"\n",
">[Baidu Cloud VectorSearch](https://cloud.baidu.com/doc/BES/index.html?from=productToDoc) is a fully managed, enterprise-level distributed search and analysis service which is 100% compatible to open source. Baidu Cloud VectorSearch provides low-cost, high-performance, and reliable retrieval and analysis platform level product services for structured/unstructured data. As a vector database , it supports multiple index types and similarity distance methods. \n",
"\n",
">`Baidu Cloud ElasticSearch` provides a privilege management mechanism, for you to configure the cluster privileges freely, so as to further ensure data security.\n",
"\n",
"This notebook shows how to use functionality related to the `Baidu Cloud ElasticSearch VectorStore`.\n",
"To run, you should have an [Baidu Cloud ElasticSearch](https://cloud.baidu.com/product/bes.html) instance up and running:\n",
"\n",
"Read the [help document](https://cloud.baidu.com/doc/BES/s/8llyn0hh4 ) to quickly familiarize and configure Baidu Cloud ElasticSearch instance."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"After the instance is up and running, follow these steps to split documents, get embeddings, connect to the baidu cloud elasticsearch instance, index documents, and perform vector retrieval."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"We need to install the following Python packages first."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#!pip install elasticsearch == 7.11.0"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"First, we want to use `QianfanEmbeddings` so we have to get the Qianfan AK and SK. Details for QianFan is related to [Baidu Qianfan Workshop](https://cloud.baidu.com/product/wenxinworkshop)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import getpass\n",
"\n",
"os.environ[\"QIANFAN_AK\"] = getpass.getpass(\"Your Qianfan AK:\")\n",
"os.environ[\"QIANFAN_SK\"] = getpass.getpass(\"Your Qianfan SK:\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Secondly, split documents and get embeddings."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders import TextLoader\n",
"\n",
"loader = TextLoader(\"../../../state_of_the_union.txt\")\n",
"documents = loader.load()\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"docs = text_splitter.split_documents(documents)\n",
"\n",
"from langchain.embeddings import QianfanEmbeddingsEndpoint\n",
"\n",
"embeddings = QianfanEmbeddingsEndpoint()"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Then, create a Baidu ElasticeSearch accessable instance."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Create a bes instance and index docs.\n",
"from langchain.vectorstores import BESVectorStore\n",
"\n",
"bes = BESVectorStore.from_documents(\n",
" documents=docs,\n",
" embedding=embeddings,\n",
" bes_url=\"your bes cluster url\",\n",
" index_name=\"your vector index\",\n",
")\n",
"bes.client.indices.refresh(index=\"your vector index\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, Query and retrive data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"docs = bes.similarity_search(query)\n",
"print(docs[0].page_content)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Please feel free to contact <liuboyao@baidu.com> or <chenweixu01@baidu.com> if you encounter any problems during use, and we will do our best to support you."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"name": "python",
"version": "3.9.17"
},
"orig_nbformat": 4,
"vscode": {
"interpreter": {
"hash": "aee8b7b246df8f9039afb4144a1f6fd8d2ca17a180786b69acc140d282b71a49"
}
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -1,326 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "683953b3",
"metadata": {},
"source": [
"# Cassandra\n",
"\n",
">[Apache Cassandra®](https://cassandra.apache.org) is a NoSQL, row-oriented, highly scalable and highly available database.\n",
"\n",
"Newest Cassandra releases natively [support](https://cwiki.apache.org/confluence/display/CASSANDRA/CEP-30%3A+Approximate+Nearest+Neighbor(ANN)+Vector+Search+via+Storage-Attached+Indexes) Vector Similarity Search.\n",
"\n",
"To run this notebook you need either a running Cassandra cluster equipped with Vector Search capabilities (in pre-release at the time of writing) or a DataStax Astra DB instance running in the cloud (you can get one for free at [datastax.com](https://astra.datastax.com)). Check [cassio.org](https://cassio.org/start_here/) for more information."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b4c41cad-08ef-4f72-a545-2151e4598efe",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"!pip install \"cassio>=0.1.0\""
]
},
{
"cell_type": "markdown",
"id": "b7e46bb0",
"metadata": {},
"source": [
"### Please provide database connection parameters and secrets:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "36128a32",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import getpass\n",
"\n",
"database_mode = (input(\"\\n(C)assandra or (A)stra DB? \")).upper()\n",
"\n",
"keyspace_name = input(\"\\nKeyspace name? \")\n",
"\n",
"if database_mode == \"A\":\n",
" ASTRA_DB_APPLICATION_TOKEN = getpass.getpass('\\nAstra DB Token (\"AstraCS:...\") ')\n",
" #\n",
" ASTRA_DB_SECURE_BUNDLE_PATH = input(\"Full path to your Secure Connect Bundle? \")\n",
"elif database_mode == \"C\":\n",
" CASSANDRA_CONTACT_POINTS = input(\n",
" \"Contact points? (comma-separated, empty for localhost) \"\n",
" ).strip()"
]
},
{
"cell_type": "markdown",
"id": "4f22aac2",
"metadata": {},
"source": [
"#### depending on whether local or cloud-based Astra DB, create the corresponding database connection \"Session\" object"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "677f8576",
"metadata": {},
"outputs": [],
"source": [
"from cassandra.cluster import Cluster\n",
"from cassandra.auth import PlainTextAuthProvider\n",
"\n",
"if database_mode == \"C\":\n",
" if CASSANDRA_CONTACT_POINTS:\n",
" cluster = Cluster(\n",
" [cp.strip() for cp in CASSANDRA_CONTACT_POINTS.split(\",\") if cp.strip()]\n",
" )\n",
" else:\n",
" cluster = Cluster()\n",
" session = cluster.connect()\n",
"elif database_mode == \"A\":\n",
" ASTRA_DB_CLIENT_ID = \"token\"\n",
" cluster = Cluster(\n",
" cloud={\n",
" \"secure_connect_bundle\": ASTRA_DB_SECURE_BUNDLE_PATH,\n",
" },\n",
" auth_provider=PlainTextAuthProvider(\n",
" ASTRA_DB_CLIENT_ID,\n",
" ASTRA_DB_APPLICATION_TOKEN,\n",
" ),\n",
" )\n",
" session = cluster.connect()\n",
"else:\n",
" raise NotImplementedError"
]
},
{
"cell_type": "markdown",
"id": "320af802-9271-46ee-948f-d2453933d44b",
"metadata": {},
"source": [
"### Please provide OpenAI access key\n",
"\n",
"We want to use `OpenAIEmbeddings` so we have to get the OpenAI API Key."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ffea66e4-bc23-46a9-9580-b348dfe7b7a7",
"metadata": {},
"outputs": [],
"source": [
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"OpenAI API Key:\")"
]
},
{
"cell_type": "markdown",
"id": "e98a139b",
"metadata": {},
"source": [
"### Creation and usage of the Vector Store"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "aac9563e",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.embeddings.openai import OpenAIEmbeddings\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain.vectorstores import Cassandra\n",
"from langchain.document_loaders import TextLoader"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a3c3999a",
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders import TextLoader\n",
"\n",
"SOURCE_FILE_NAME = \"../../modules/state_of_the_union.txt\"\n",
"\n",
"loader = TextLoader(SOURCE_FILE_NAME)\n",
"documents = loader.load()\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"docs = text_splitter.split_documents(documents)\n",
"\n",
"embedding_function = OpenAIEmbeddings()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6e104aee",
"metadata": {},
"outputs": [],
"source": [
"table_name = \"my_vector_db_table\"\n",
"\n",
"docsearch = Cassandra.from_documents(\n",
" documents=docs,\n",
" embedding=embedding_function,\n",
" session=session,\n",
" keyspace=keyspace_name,\n",
" table_name=table_name,\n",
")\n",
"\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"docs = docsearch.similarity_search(query)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f509ee02",
"metadata": {},
"outputs": [],
"source": [
"## if you already have an index, you can load it and use it like this:\n",
"\n",
"# docsearch_preexisting = Cassandra(\n",
"# embedding=embedding_function,\n",
"# session=session,\n",
"# keyspace=keyspace_name,\n",
"# table_name=table_name,\n",
"# )\n",
"\n",
"# docs = docsearch_preexisting.similarity_search(query, k=2)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9c608226",
"metadata": {},
"outputs": [],
"source": [
"print(docs[0].page_content)"
]
},
{
"cell_type": "markdown",
"id": "d46d1452",
"metadata": {},
"source": [
"### Maximal Marginal Relevance Searches\n",
"\n",
"In addition to using similarity search in the retriever object, you can also use `mmr` as retriever.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a359ed74",
"metadata": {},
"outputs": [],
"source": [
"retriever = docsearch.as_retriever(search_type=\"mmr\")\n",
"matched_docs = retriever.get_relevant_documents(query)\n",
"for i, d in enumerate(matched_docs):\n",
" print(f\"\\n## Document {i}\\n\")\n",
" print(d.page_content)"
]
},
{
"cell_type": "markdown",
"id": "7c477287",
"metadata": {},
"source": [
"Or use `max_marginal_relevance_search` directly:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9ca82740",
"metadata": {},
"outputs": [],
"source": [
"found_docs = docsearch.max_marginal_relevance_search(query, k=2, fetch_k=10)\n",
"for i, doc in enumerate(found_docs):\n",
" print(f\"{i + 1}.\", doc.page_content, \"\\n\")"
]
},
{
"cell_type": "markdown",
"id": "da791c5f",
"metadata": {},
"source": [
"### Metadata filtering\n",
"\n",
"You can specify filtering on metadata when running searches in the vector store. By default, when inserting documents, the only metadata is the `\"source\"` (but you can customize the metadata at insertion time).\n",
"\n",
"Since only one files was inserted, this is just a demonstration of how filters are passed:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "93f132fa",
"metadata": {},
"outputs": [],
"source": [
"filter = {\"source\": SOURCE_FILE_NAME}\n",
"filtered_docs = docsearch.similarity_search(query, filter=filter, k=5)\n",
"print(f\"{len(filtered_docs)} documents retrieved.\")\n",
"print(f\"{filtered_docs[0].page_content[:64]} ...\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1b413ec4",
"metadata": {},
"outputs": [],
"source": [
"filter = {\"source\": \"nonexisting_file.txt\"}\n",
"filtered_docs2 = docsearch.similarity_search(query, filter=filter)\n",
"print(f\"{len(filtered_docs2)} documents retrieved.\")"
]
},
{
"cell_type": "markdown",
"id": "a0fea764",
"metadata": {},
"source": [
"Please visit the [cassIO documentation](https://cassio.org/frameworks/langchain/about/) for more on using vector stores with Langchain."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -104,11 +104,14 @@
"source": [
"from dingodb import DingoDB\n",
"\n",
"index_name = \"langchain-demo\"\n",
"index_name = \"langchain_demo\"\n",
"\n",
"dingo_client = DingoDB(user=\"\", password=\"\", host=[\"127.0.0.1:13000\"])\n",
"# First, check if our index already exists. If it doesn't, we create it\n",
"if index_name not in dingo_client.get_index():\n",
"if (\n",
" index_name not in dingo_client.get_index()\n",
" and index_name.upper() not in dingo_client.get_index()\n",
"):\n",
" # we create a new index, modify to your own\n",
" dingo_client.create_index(\n",
" index_name=index_name, dimension=1536, metric_type=\"cosine\", auto_id=False\n",

View File

@@ -776,6 +776,40 @@
"print(results[0])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Customize the Document Builder\n",
"\n",
"With ```doc_builder``` parameter at search, you are able to adjust how a Document is being built using data retrieved from Elasticsearch. This is especially useful if you have indices which were not created using Langchain."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from typing import Dict\n",
"from langchain.docstore.document import Document\n",
"\n",
"def custom_document_builder(hit: Dict) -> Document:\n",
" src = hit.get(\"_source\", {})\n",
" return Document(\n",
" page_content=src.get(\"content\", \"Missing content!\"),\n",
" metadata={\"page_number\": src.get(\"page_number\", -1), \"original_filename\": src.get(\"original_filename\", \"Missing filename!\")},\n",
" )\n",
"\n",
"results = db.similarity_search(\n",
" \"What did the president say about Ketanji Brown Jackson\",\n",
" k=4,\n",
" doc_builder=custom_document_builder,\n",
")\n",
"print(\"Results:\")\n",
"print(results[0])"
]
},
{
"cell_type": "markdown",
"id": "3242fd42",
@@ -929,7 +963,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.3"
"version": "3.9.7"
}
},
"nbformat": 4,

View File

@@ -180,7 +180,7 @@
"# Specify directly if testing\n",
"# SERVICE_URL = \"postgres://tsdbadmin:<password>@<id>.tsdb.cloud.timescale.com:<port>/tsdb?sslmode=require\"\n",
"\n",
"# # You can get also it from an enviornment variables. We suggest using a .env file.\n",
"# # You can get also it from an environment variables. We suggest using a .env file.\n",
"# import os\n",
"# SERVICE_URL = os.environ.get(\"TIMESCALE_SERVICE_URL\", \"\")"
]

View File

@@ -10,9 +10,19 @@
"\n",
">[Weaviate](https://weaviate.io/) is an open-source vector database. It allows you to store data objects and vector embeddings from your favorite ML-models, and scale seamlessly into billions of data objects.\n",
"\n",
"This notebook shows how to use functionality related to the `Weaviate`vector database.\n",
"This notebook shows how to use the functionality related to the `Weaviate` vector database.\n",
"\n",
"See the `Weaviate` [installation instructions](https://weaviate.io/developers/weaviate/installation)."
"`Weaviate` can be deployed in many different ways depending on your requirements. For example, you can either connect to a [Weaviate Cloud Services](https://console.weaviate.cloud) instance or a [local Docker instance](https://weaviate.io/developers/weaviate/installation/docker-compose). \n",
"See the `Weaviate` [installation instructions](https://weaviate.io/developers/weaviate/installation) for more information."
]
},
{
"cell_type": "markdown",
"id": "5fb59dec",
"metadata": {},
"source": [
"## Prerequisites\n",
"Install the `weaviate-client` package and set the relevant environment variables."
]
},
{
@@ -27,19 +37,21 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Requirement already satisfied: weaviate-client in /workspaces/langchain/.venv/lib/python3.9/site-packages (3.19.1)\n",
"Requirement already satisfied: requests<2.29.0,>=2.28.0 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from weaviate-client) (2.28.2)\n",
"Requirement already satisfied: validators<=0.21.0,>=0.18.2 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from weaviate-client) (0.20.0)\n",
"Requirement already satisfied: tqdm<5.0.0,>=4.59.0 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from weaviate-client) (4.65.0)\n",
"Requirement already satisfied: authlib>=1.1.0 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from weaviate-client) (1.2.0)\n",
"Requirement already satisfied: cryptography>=3.2 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from authlib>=1.1.0->weaviate-client) (40.0.2)\n",
"Requirement already satisfied: charset-normalizer<4,>=2 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from requests<2.29.0,>=2.28.0->weaviate-client) (3.1.0)\n",
"Requirement already satisfied: idna<4,>=2.5 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from requests<2.29.0,>=2.28.0->weaviate-client) (3.4)\n",
"Requirement already satisfied: urllib3<1.27,>=1.21.1 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from requests<2.29.0,>=2.28.0->weaviate-client) (1.26.15)\n",
"Requirement already satisfied: certifi>=2017.4.17 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from requests<2.29.0,>=2.28.0->weaviate-client) (2023.5.7)\n",
"Requirement already satisfied: decorator>=3.4.0 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from validators<=0.21.0,>=0.18.2->weaviate-client) (5.1.1)\n",
"Requirement already satisfied: cffi>=1.12 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from cryptography>=3.2->authlib>=1.1.0->weaviate-client) (1.15.1)\n",
"Requirement already satisfied: pycparser in /workspaces/langchain/.venv/lib/python3.9/site-packages (from cffi>=1.12->cryptography>=3.2->authlib>=1.1.0->weaviate-client) (2.21)\n"
"Requirement already satisfied: weaviate-client in /opt/homebrew/lib/python3.11/site-packages (3.23.1)\n",
"Requirement already satisfied: requests<=2.31.0,>=2.28.0 in /opt/homebrew/lib/python3.11/site-packages (from weaviate-client) (2.31.0)\n",
"Requirement already satisfied: validators<=0.21.0,>=0.18.2 in /opt/homebrew/lib/python3.11/site-packages (from weaviate-client) (0.21.0)\n",
"Requirement already satisfied: tqdm<5.0.0,>=4.59.0 in /opt/homebrew/lib/python3.11/site-packages (from weaviate-client) (4.66.1)\n",
"Requirement already satisfied: authlib>=1.1.0 in /opt/homebrew/lib/python3.11/site-packages (from weaviate-client) (1.2.1)\n",
"Requirement already satisfied: cryptography>=3.2 in /opt/homebrew/lib/python3.11/site-packages (from authlib>=1.1.0->weaviate-client) (41.0.4)\n",
"Requirement already satisfied: charset-normalizer<4,>=2 in /opt/homebrew/lib/python3.11/site-packages (from requests<=2.31.0,>=2.28.0->weaviate-client) (2.0.12)\n",
"Requirement already satisfied: idna<4,>=2.5 in /opt/homebrew/lib/python3.11/site-packages (from requests<=2.31.0,>=2.28.0->weaviate-client) (3.4)\n",
"Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/homebrew/lib/python3.11/site-packages (from requests<=2.31.0,>=2.28.0->weaviate-client) (1.26.17)\n",
"Requirement already satisfied: certifi>=2017.4.17 in /opt/homebrew/lib/python3.11/site-packages (from requests<=2.31.0,>=2.28.0->weaviate-client) (2023.7.22)\n",
"Requirement already satisfied: cffi>=1.12 in /opt/homebrew/lib/python3.11/site-packages (from cryptography>=3.2->authlib>=1.1.0->weaviate-client) (1.16.0)\n",
"Requirement already satisfied: pycparser in /opt/homebrew/lib/python3.11/site-packages (from cffi>=1.12->cryptography>=3.2->authlib>=1.1.0->weaviate-client) (2.21)\n",
"\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.2.1\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m23.3.1\u001b[0m\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpython3.11 -m pip install --upgrade pip\u001b[0m\n"
]
}
],
@@ -48,7 +60,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "6b34828d-e627-4d85-aabd-eeb15d9f4b00",
"metadata": {},
@@ -81,7 +92,7 @@
},
{
"cell_type": "code",
"execution_count": 21,
"execution_count": 4,
"id": "53b7ce2d-3c09-4d1c-b66b-5769ce6746ae",
"metadata": {},
"outputs": [],
@@ -90,9 +101,18 @@
"WEAVIATE_API_KEY = os.environ[\"WEAVIATE_API_KEY\"]"
]
},
{
"cell_type": "markdown",
"id": "b867eb31",
"metadata": {},
"source": [
"## Similarity search\n",
"Below you can see a minimal example of how to approach a simple similarity search."
]
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 5,
"id": "aac9563e",
"metadata": {
"tags": []
@@ -107,7 +127,7 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 6,
"id": "a3c3999a",
"metadata": {},
"outputs": [],
@@ -124,17 +144,22 @@
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": 7,
"id": "21e9e528",
"metadata": {},
"outputs": [],
"source": [
"db = Weaviate.from_documents(docs, embeddings, weaviate_url=WEAVIATE_URL, by_text=False)"
"db = Weaviate.from_documents(\n",
" docs, \n",
" embeddings, \n",
" weaviate_url=WEAVIATE_URL, \n",
" by_text=False\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 15,
"execution_count": 8,
"id": "b4170176",
"metadata": {},
"outputs": [],
@@ -145,7 +170,7 @@
},
{
"cell_type": "code",
"execution_count": 16,
"execution_count": 9,
"id": "ecf3b890",
"metadata": {},
"outputs": [
@@ -186,7 +211,7 @@
},
{
"cell_type": "code",
"execution_count": 23,
"execution_count": 10,
"id": "f6604f1d",
"metadata": {},
"outputs": [
@@ -202,7 +227,8 @@
"import weaviate\n",
"\n",
"client = weaviate.Client(\n",
" url=WEAVIATE_URL, auth_client_secret=weaviate.AuthApiKey(WEAVIATE_API_KEY)\n",
" url=WEAVIATE_URL, \n",
" auth_client_secret=weaviate.AuthApiKey(WEAVIATE_API_KEY)\n",
")\n",
"\n",
"# client = weaviate.Client(\n",
@@ -214,7 +240,10 @@
"# )\n",
"\n",
"vectorstore = Weaviate.from_documents(\n",
" documents, embeddings, client=client, by_text=False\n",
" documents, \n",
" embeddings, \n",
" client=client, \n",
" by_text=False\n",
")"
]
},
@@ -239,7 +268,7 @@
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": 11,
"id": "102105a1",
"metadata": {},
"outputs": [
@@ -265,7 +294,7 @@
"id": "8fc3487b",
"metadata": {},
"source": [
"# Persistence"
"## Persistence"
]
},
{
@@ -273,7 +302,7 @@
"id": "281c0fcc",
"metadata": {},
"source": [
"Anything uploaded to weaviate is automatically persistent into the database. You do not need to call any specific method or pass any param for this to happen."
"Anything uploaded to Weaviate is automatically persistent into the database. You do not need to call any specific method or pass any parameters for this to happen."
]
},
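{
"cell_type": "markdown",
"id": "281c0fcd",
"metadata": {},
"source": [
"As a minimal sketch of what persistence means in practice (assuming the `client` and `embeddings` objects from above; the index name below is an assumption, so use whichever index your documents were written to), you can reconnect to already-stored data in a later session by constructing the store directly from the existing index:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "281c0fce",
"metadata": {},
"outputs": [],
"source": [
"# Illustrative sketch: reconnect to an existing, persisted index.\n",
"# The index name \"LangChain\" is a hypothetical placeholder.\n",
"db_reloaded = Weaviate(\n",
"    client=client,\n",
"    index_name=\"LangChain\",\n",
"    text_key=\"text\",\n",
"    embedding=embeddings,\n",
"    by_text=False,\n",
")\n",
"\n",
"# Previously stored documents are still searchable:\n",
"# db_reloaded.similarity_search(query)"
]
},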
{
@@ -285,14 +314,14 @@
"\n",
"This section goes over different options for how to use Weaviate as a retriever.\n",
"\n",
"### MMR\n",
"### Maximal marginal relevance search (MMR)\n",
"\n",
"In addition to using similarity search in the retriever object, you can also use `mmr`."
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 12,
"id": "8b7df7ae",
"metadata": {},
"outputs": [
@@ -312,12 +341,53 @@
"retriever.get_relevant_documents(query)[0]"
]
},
{
"cell_type": "markdown",
"id": "4b14a3a5",
"metadata": {},
"source": [
"### Hybrid search\n",
"Weaviate also offers hybrid search. See [`WeaviateHybridSearchRetriever`](https://python.langchain.com/docs/integrations/retrievers/weaviate-hybrid) for reference."
]
},
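{
"cell_type": "markdown",
"id": "4b14a3a6",
"metadata": {},
"source": [
"A minimal, illustrative sketch of the hybrid retriever (assuming the `client` object from above; the index name and `alpha` value are assumptions for demonstration, not taken from this notebook):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4b14a3a7",
"metadata": {},
"outputs": [],
"source": [
"# Hybrid search blends vector similarity with keyword (BM25) relevance.\n",
"# The index name \"LangChain\" below is a hypothetical placeholder.\n",
"from langchain.retrievers.weaviate_hybrid_search import WeaviateHybridSearchRetriever\n",
"\n",
"hybrid_retriever = WeaviateHybridSearchRetriever(\n",
"    client=client,\n",
"    index_name=\"LangChain\",\n",
"    text_key=\"text\",\n",
"    alpha=0.5,  # 0 = pure keyword search, 1 = pure vector search\n",
")\n",
"\n",
"# hybrid_retriever.get_relevant_documents(query)"
]
},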
{
"cell_type": "markdown",
"id": "508016e8",
"metadata": {},
"source": [
"## Use cases\n",
"As the following example shows, LLMs don't have access to knowledge outside of their training data. Thus, vector stores come in handy to provide LLMs with additional context."
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "5299b13b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"As an AI language model, I don't have real-time information or the ability to browse the internet. Therefore, I cannot provide you with the most recent statements made by the president about Justice Breyer. However, it's worth noting that the president's opinions on Justice Breyer may vary depending on the specific context and time period. It would be best to refer to reliable news sources or official statements to get the most accurate and up-to-date information on this topic.\""
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"llm = ChatOpenAI(model_name=\"gpt-3.5-turbo\", temperature=0)\n",
"llm.predict(\"What did the president say about Justice Breyer\")"
]
},
{
"cell_type": "markdown",
"id": "fbd7a6cb",
"metadata": {},
"source": [
"## Question Answering with Sources"
"### Question Answering with Sources"
]
},
{
@@ -330,7 +400,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 14,
"id": "5e824f3b",
"metadata": {},
"outputs": [],
@@ -341,7 +411,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 15,
"id": "61209cc3",
"metadata": {},
"outputs": [],
@@ -354,7 +424,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 16,
"id": "4abc3d37",
"metadata": {},
"outputs": [],
@@ -370,7 +440,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 17,
"id": "c7062393",
"metadata": {},
"outputs": [],
@@ -382,7 +452,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 18,
"id": "7e41b773",
"metadata": {},
"outputs": [
@@ -404,6 +474,115 @@
" return_only_outputs=True,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "05007f8a",
"metadata": {},
"source": [
"### Retrieval-Augmented Generation"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "30f285a1",
"metadata": {},
"outputs": [],
"source": [
"with open(\"../../modules/state_of_the_union.txt\") as f:\n",
" state_of_the_union = f.read()\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"texts = text_splitter.split_text(state_of_the_union)"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "08490f15",
"metadata": {},
"outputs": [],
"source": [
"docsearch = Weaviate.from_texts(\n",
" texts,\n",
" embeddings,\n",
" weaviate_url=WEAVIATE_URL,\n",
" by_text=False,\n",
" metadatas=[{\"source\": f\"{i}-pl\"} for i in range(len(texts))],\n",
")\n",
"\n",
"retriever = docsearch.as_retriever()"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "499cb1f5",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"input_variables=['context', 'question'] messages=[HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['context', 'question'], template=\"You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.\\nQuestion: {question} \\nContext: {context} \\nAnswer:\\n\"))]\n"
]
}
],
"source": [
"from langchain.prompts import ChatPromptTemplate\n",
"\n",
"template = \"\"\"You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.\n",
"Question: {question} \n",
"Context: {context} \n",
"Answer:\n",
"\"\"\"\n",
"prompt = ChatPromptTemplate.from_template(template)\n",
"\n",
"print(prompt)"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "28d95686",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(model_name=\"gpt-3.5-turbo\", temperature=0)"
]
},
{
"cell_type": "code",
"execution_count": 23,
"id": "c697d0cd",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'The president thanked Justice Breyer for his service and dedication to the country.'"
]
},
"execution_count": 23,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.schema.runnable import RunnablePassthrough\n",
"from langchain.schema.output_parser import StrOutputParser\n",
"\n",
"rag_chain = (\n",
" {\"context\": retriever, \"question\": RunnablePassthrough()} \n",
" | prompt \n",
" | llm\n",
" | StrOutputParser() \n",
")\n",
"\n",
"rag_chain.invoke(\"What did the president say about Justice Breyer\")"
]
}
],
"metadata": {
@@ -422,7 +601,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
"version": "3.11.4"
}
},
"nbformat": 4,


View File

@@ -1,11 +1,13 @@
# LangSmith
---
sidebar_class_name: hidden
---
import DocCardList from "@theme/DocCardList";
# LangSmith
[LangSmith](https://smith.langchain.com) helps you trace and evaluate your language model applications and intelligent agents, helping you move from prototype to production.
Check out the [interactive walkthrough](/docs/guides/langsmith/walkthrough) below to get started.
Check out the [interactive walkthrough](/docs/langsmith/walkthrough) to get started.
For more information, please refer to the [LangSmith documentation](https://docs.smith.langchain.com/).
@@ -18,5 +20,3 @@ check out the [LangSmith Cookbook](https://github.com/langchain-ai/langsmith-coo
- How to fine-tune a LLM on real usage data ([link](https://github.com/langchain-ai/langsmith-cookbook/blob/main/fine-tuning-examples/export-to-openai/fine-tuning-on-chat-runs.ipynb)).
- How to use the [LangChain Hub](https://smith.langchain.com/hub) to version your prompts ([link](https://github.com/langchain-ai/langsmith-cookbook/blob/main/hub-examples/retrieval-qa-chain/retrieval-qa.ipynb))
<DocCardList />

View File

@@ -8,7 +8,7 @@
},
"source": [
"# LangSmith Walkthrough\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/langsmith/walkthrough.ipynb)\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/langsmith/walkthrough.ipynb)\n",
"\n",
"LangChain makes it easy to prototype LLM applications and Agents. However, delivering LLM applications to production can be deceptively difficult. You will likely have to heavily customize and iterate on your prompts, chains, and other components to create a high-quality product.\n",
"\n",
@@ -140,7 +140,7 @@
"source": [
"from langchain import hub\n",
"from langchain.agents import AgentExecutor\n",
"from langchain.agents.format_scratchpad import format_to_openai_functions\n",
"from langchain.agents.format_scratchpad import format_to_openai_function_messages\n",
"from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.tools import DuckDuckGoSearchResults\n",
@@ -165,7 +165,7 @@
"runnable_agent = (\n",
" {\n",
" \"input\": lambda x: x[\"input\"],\n",
" \"agent_scratchpad\": lambda x: format_to_openai_functions(\n",
" \"agent_scratchpad\": lambda x: format_to_openai_function_messages(\n",
" x[\"intermediate_steps\"]\n",
" ),\n",
" }\n",
@@ -335,7 +335,7 @@
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.agents import AgentType, initialize_agent, load_tools, AgentExecutor\n",
"from langchain.agents.format_scratchpad import format_to_openai_functions\n",
"from langchain.agents.format_scratchpad import format_to_openai_function_messages\n",
"from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser\n",
"from langchain.tools.render import format_tool_to_openai_function\n",
"from langchain import hub\n",
@@ -351,7 +351,7 @@
" runnable_agent = (\n",
" {\n",
" \"input\": lambda x: x[\"input\"],\n",
" \"agent_scratchpad\": lambda x: format_to_openai_functions(\n",
" \"agent_scratchpad\": lambda x: format_to_openai_function_messages(\n",
" x[\"intermediate_steps\"]\n",
" ),\n",
" }\n",
@@ -790,7 +790,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.2"
"version": "3.9.1"
}
},
"nbformat": 4,

View File

@@ -38,7 +38,7 @@ It uses the ReAct framework to decide which tool to use, and uses memory to reme
## [Self-ask with search](/docs/modules/agents/agent_types/self_ask_with_search)
This agent utilizes a single tool that should be named `Intermediate Answer`.
This tool should be able to lookup factual answers to questions. This agent
This tool should be able to look up factual answers to questions. This agent
is equivalent to the original [self-ask with search paper](https://ofir.io/self-ask.pdf),
where a Google search API was provided as the tool.
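
For illustration, a minimal sketch of wiring this up (assuming a configured SerpAPI key in `SERPAPI_API_KEY`; the tool description is illustrative):

```python
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.llms import OpenAI
from langchain.utilities import SerpAPIWrapper

search = SerpAPIWrapper()
tools = [
    # The single tool must be named exactly "Intermediate Answer"
    Tool(
        name="Intermediate Answer",
        func=search.run,
        description="useful for looking up factual answers",
    )
]

self_ask_agent = initialize_agent(
    tools, OpenAI(temperature=0), agent=AgentType.SELF_ASK_WITH_SEARCH, verbose=True
)
```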
@@ -46,7 +46,7 @@ where a Google search API was provided as the tool.
This agent uses the ReAct framework to interact with a docstore. Two tools must
be provided: a `Search` tool and a `Lookup` tool (they must be named exactly so).
The `Search` tool should search for a document, while the `Lookup` tool should lookup
The `Search` tool should search for a document, while the `Lookup` tool should look up
a term in the most recently found document.
This agent is equivalent to the
original [ReAct paper](https://arxiv.org/pdf/2210.03629.pdf), specifically the Wikipedia example.
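
For illustration, a minimal sketch using the built-in Wikipedia docstore (the tool descriptions are illustrative):

```python
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.agents.react.base import DocstoreExplorer
from langchain.docstore import Wikipedia
from langchain.llms import OpenAI

docstore = DocstoreExplorer(Wikipedia())
tools = [
    # The two tools must be named exactly "Search" and "Lookup"
    Tool(name="Search", func=docstore.search, description="Search for a document"),
    Tool(name="Lookup", func=docstore.lookup, description="Look up a term in the found document"),
]

react_agent = initialize_agent(
    tools, OpenAI(temperature=0), agent=AgentType.REACT_DOCSTORE, verbose=True
)
```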

View File

@@ -143,7 +143,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents.format_scratchpad import format_to_openai_functions\n",
"from langchain.agents.format_scratchpad import format_to_openai_function_messages\n",
"from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser"
]
},
@@ -157,7 +157,7 @@
"agent = (\n",
" {\n",
" \"input\": lambda x: x[\"input\"],\n",
" \"agent_scratchpad\": lambda x: format_to_openai_functions(\n",
" \"agent_scratchpad\": lambda x: format_to_openai_function_messages(\n",
" x[\"intermediate_steps\"]\n",
" ),\n",
" }\n",

View File

@@ -115,9 +115,7 @@
"cell_type": "code",
"execution_count": 6,
"id": "ba8e4cbe",
"metadata": {
"scrolled": false
},
"metadata": {},
"outputs": [
{
"name": "stdout",
@@ -254,9 +252,7 @@
"cell_type": "code",
"execution_count": 19,
"id": "4362ebc7",
"metadata": {
"scrolled": false
},
"metadata": {},
"outputs": [
{
"name": "stdout",
@@ -458,7 +454,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
"version": "3.9.1"
}
},
"nbformat": 4,

View File

@@ -0,0 +1,250 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "e10aa932",
"metadata": {},
"source": [
"# OpenAI tools\n",
"\n",
"With LCEL we can easily construct agents that take advantage of [OpenAI parallel function calling](https://platform.openai.com/docs/guides/function-calling/parallel-function-calling) (a.k.a. tool calling)."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "ec89be68",
"metadata": {},
"outputs": [],
"source": [
"# !pip install -U openai duckduckgo-search"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "b812b982",
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import initialize_agent, AgentExecutor, AgentType, Tool\n",
"from langchain.agents.format_scratchpad.openai_tools import (\n",
" format_to_openai_tool_messages,\n",
")\n",
"from langchain.agents.output_parsers.openai_tools import OpenAIToolsAgentOutputParser\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
"from langchain.tools import DuckDuckGoSearchRun, BearlyInterpreterTool\n",
"from langchain.tools.render import format_tool_to_openai_tool"
]
},
{
"cell_type": "markdown",
"id": "6ef71dfc-074b-409a-8451-863feef937ae",
"metadata": {},
"source": [
"## Tools\n",
"\n",
"For this agent let's give it the ability to search [DuckDuckGo](/docs/integrations/tools/ddg) and use [Bearly's code interpreter](/docs/integrations/tools/bearly). You'll need a Bearly API key, which you can [get here](https://bearly.ai/dashboard)."
]
},
{
"cell_type": "code",
"execution_count": 24,
"id": "23fc0aa6",
"metadata": {},
"outputs": [],
"source": [
"lc_tools = [DuckDuckGoSearchRun(), BearlyInterpreterTool(api_key=\"...\").as_tool()]\n",
"oai_tools = [format_tool_to_openai_tool(tool) for tool in lc_tools]"
]
},
{
"cell_type": "markdown",
"id": "90c293df-ce11-4600-b912-e937215ec644",
"metadata": {},
"source": [
"## Prompt template\n",
"\n",
"We need to make sure we have a user input message and an \"agent_scratchpad\" messages placeholder, which is where the AgentExecutor will track AI messages invoking tools and Tool messages returning the tool output."
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "55292bed",
"metadata": {},
"outputs": [],
"source": [
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", \"You are a helpful assistant\"),\n",
" (\"user\", \"{input}\"),\n",
" MessagesPlaceholder(variable_name=\"agent_scratchpad\"),\n",
" ]\n",
")"
]
},
{
"cell_type": "markdown",
"id": "32904250-c53e-415e-abdf-7ce8b1357fb7",
"metadata": {},
"source": [
"## Model\n",
"\n",
"Only certain models support parallel function calling, so make sure you're using a compatible model."
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "552421b3",
"metadata": {},
"outputs": [],
"source": [
"llm = ChatOpenAI(temperature=0, model=\"gpt-3.5-turbo-1106\")"
]
},
{
"cell_type": "markdown",
"id": "6fc73aa5-e185-4c6a-8770-1279c3ae5530",
"metadata": {},
"source": [
"## Agent\n",
"\n",
"We use the `OpenAIToolsAgentOutputParser` to convert the tool calls returned by the model into `AgentAction`s objects that our `AgentExecutor` can then route to the appropriate tool."
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "bf514eb4",
"metadata": {},
"outputs": [],
"source": [
"agent = (\n",
" {\n",
" \"input\": lambda x: x[\"input\"],\n",
" \"agent_scratchpad\": lambda x: format_to_openai_tool_messages(\n",
" x[\"intermediate_steps\"]\n",
" ),\n",
" }\n",
" | prompt\n",
" | llm.bind(tools=oai_tools)\n",
" | OpenAIToolsAgentOutputParser()\n",
")"
]
},
{
"cell_type": "markdown",
"id": "ea032e1c-523d-4509-a008-e693529324be",
"metadata": {},
"source": [
"## Agent executor"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "bdc7e506",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"['memory', 'callbacks', 'callback_manager', 'verbose', 'tags', 'metadata', 'agent', 'tools', 'return_intermediate_steps', 'max_iterations', 'max_execution_time', 'early_stopping_method', 'handle_parsing_errors', 'trim_intermediate_steps']\n"
]
}
],
"source": [
"agent_executor = AgentExecutor(agent=agent, tools=lc_tools, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "2cd65218",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\n",
"Invoking: `duckduckgo_search` with `average temperature in Los Angeles today`\n",
"\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3mNext week, there is a growing potential for 1 to 2 storms Tuesday through Friday bringing a 90% chance of rain to the area. There is a 50% chance of a moderate storm with 1 to 3 inches of total rainfall, and a 10% chance of a major storm of 3 to 6+ inches. Quick Facts Today's weather: Sunny, windy Beaches: 70s-80s Mountains: 60s-70s/63-81 Inland: 70s Warnings and advisories: Red Flag Warning, Wind Advisory Todays highs along the coast will be in... yesterday temp 66.6 °F Surf Forecast in Los Angeles for today Another important indicators for a comfortable holiday on the beach are the presence and height of the waves, as well as the speed and direction of the wind. Please find below data on the swell size for Los Angeles. Daily max (°C) 19 JAN 18 FEB 19 MAR 20 APR 21 MAY 22 JUN 24 JUL 24 AUG 24 SEP 23 OCT 21 NOV 19 DEC Rainfall (mm) 61 JAN 78° | 53° 60 °F like 60° Clear N 0 Today's temperature is forecast to be NEARLY THE SAME as yesterday. Radar Satellite WunderMap |Nexrad Today Wed 11/08 High 78 °F 0% Precip. / 0.00 in Sunny....\u001b[0m\u001b[32;1m\u001b[1;3m\n",
"Invoking: `duckduckgo_search` with `average temperature in New York City today`\n",
"\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3mWeather Underground provides local & long-range weather forecasts, weatherreports, maps & tropical weather conditions for the New York City area. ... Today Tue 11/07 High 68 ... Climate Central's prediction for an even more distant date — 2100 — is that the average temperature in 247 cities across the country will be 8 degrees higher than it is now. New York will ... Extended Forecast for New York NY Similar City Names Overnight Mostly Cloudy Low: 48 °F Saturday Partly Sunny High: 58 °F Saturday Night Mostly Cloudy Low: 48 °F Sunday Mostly Sunny High: 64 °F Sunday Night Mostly Clear Low: 45 °F Monday Weather report for New York City. Night and day a few clouds are expected. It is a sunny day. Temperatures peaking at 62 °F. During the night and in the first hours of the day blows a light breeze (4 to 8 mph). For the afternoon a gentle breeze is expected (8 to 12 mph). Graphical Climatology of New York Central Park - Daily Temperatures, Precipitation, and Snowfall (1869 - Present) The following is a graphical climatology of New York Central Park daily temperatures, precipitation, and snowfall, from January 1869 into 2023. The graphics consist of summary overview charts (in some cases including data back into the late 1860's) followed […]\u001b[0m\u001b[32;1m\u001b[1;3m\n",
"Invoking: `duckduckgo_search` with `average temperature in San Francisco today`\n",
"\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3mToday Hourly 10-Day Calendar History Wundermap access_time 10:24 PM PST on November 4, 2023 (GMT -8) | Updated 1 day ago 63° | 48° 59 °F like 59° Partly Cloudy N 0 Today's temperature is... The National Weather Service forecast for the greater San Francisco Bay Area on Thursday calls for clouds increasing over the region during the day. Daytime highs are expected to be in the 60s on ... San Francisco (United States of America) weather - Met Office Today 17° 9° Sunny. Sunrise: 06:41 Sunset: 17:05 M UV Wed 8 Nov 19° 8° Thu 9 Nov 16° 9° Fri 10 Nov 16° 10° Sat 11 Nov 18° 9° Sun 12... Today's weather in San Francisco Bay. The sun rose at 6:42am and the sunset will be at 5:04pm. There will be 10 hours and 22 minutes of sun and the average temperature is 54°F. At the moment water temperature is 58°F and the average water temperature is 58°F. Wintry Impacts in Alaska and New England; Critical Fire Conditions in Southern California. A winter storm continues to bring hazardous travel conditions to south-central Alaska with heavy snow, a wintry mix, ice accumulation, and rough seas. A wintry mix including freezing rain is expected in Upstate New York and interior New England.\u001b[0m\u001b[32;1m\u001b[1;3m\n",
"Invoking: `duckduckgo_search` with `current temperature in Los Angeles`\n",
"responded: It seems that the search results did not provide the specific average temperatures for today in Los Angeles, New York City, and San Francisco. Let me try another approach to gather this information for you.\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3mFire Weather Show Caption Click a location below for detailed forecast. Last Map Update: Tue, Nov. 7, 2023 at 5:03:23 pm PST Watches, Warnings & Advisories Zoom Out Gale Warning Small Craft Advisory Wind Advisory Fire Weather Watch Text Product Selector (Selected product opens in current window) Hazards Observations Marine Weather Fire Weather 78° | 53° 60 °F like 60° Clear N 0 Today's temperature is forecast to be NEARLY THE SAME as yesterday. Radar Satellite WunderMap |Nexrad Today Wed 11/08 High 78 °F 0% Precip. / 0.00 in Sunny.... Los Angeles and Orange counties will see a few clouds in the morning, but they'll clear up in the afternoon to bring a high of 76 degrees. Daytime temperatures should stay in the 70s most of... Weather Forecast Office NWS Forecast Office Los Angeles, CA Weather.gov > Los Angeles, CA Current Hazards Current Conditions Radar Forecasts Rivers and Lakes Climate and Past Weather Local Programs Click a location below for detailed forecast. Last Map Update: Fri, Oct. 13, 2023 at 12:44:23 am PDT Watches, Warnings & Advisories Zoom Out Want a minute-by-minute forecast for Los-Angeles, CA? MSN Weather tracks it all, from precipitation predictions to severe weather warnings, air quality updates, and even wildfire alerts.\u001b[0m\u001b[32;1m\u001b[1;3m\n",
"Invoking: `duckduckgo_search` with `current temperature in New York City`\n",
"responded: It seems that the search results did not provide the specific average temperatures for today in Los Angeles, New York City, and San Francisco. Let me try another approach to gather this information for you.\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3mCurrent Weather for Popular Cities . San Francisco, CA 55 ... New York City, NY Weather Conditions star_ratehome. 55 ... Low: 47°F Sunday Mostly Sunny High: 62°F change location New York, NY Weather Forecast Office NWS Forecast Office New York, NY Weather.gov > New York, NY Current Hazards Current Conditions Radar Forecasts Rivers and Lakes Climate and Past Weather Local Programs Click a location below for detailed forecast. Today Increasing Clouds High: 50 °F Tonight Mostly Cloudy Low: 47 °F Thursday Slight Chance Rain High: 67 °F Thursday Night Mostly Cloudy Low: 48 °F Friday Mostly Cloudy then Slight Chance Rain High: 54 °F Friday Weather report for New York City Night and day a few clouds are expected. It is a sunny day. Temperatures peaking at 62 °F. During the night and in the first hours of the day blows a light breeze (4 to 8 mph). For the afternoon a gentle breeze is expected (8 to 12 mph). Today 13 October, weather in New York City +61°F. Clear sky, Light Breeze, Northwest 5.1 mph. Atmosphere pressure 29.9 inHg. Relative humidity 45%. Tomorrow's night air temperature will drop to +54°F, wind will change to North 2.7 mph. Pressure will remain unchanged 29.9 inHg. Day temperature will remain unchanged +54°F, and night 15 October ...\u001b[0m\u001b[32;1m\u001b[1;3m\n",
"Invoking: `duckduckgo_search` with `current temperature in San Francisco`\n",
"responded: It seems that the search results did not provide the specific average temperatures for today in Los Angeles, New York City, and San Francisco. Let me try another approach to gather this information for you.\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3m59 °F like 59° Partly Cloudy N 0 Today's temperature is forecast to be COOLER than yesterday. Radar Satellite WunderMap |Nexrad Today Thu 11/09 High 63 °F 3% Precip. / 0.00 in A mix of clouds and... Weather Forecast Office NWS Forecast Office San Francisco, CA Weather.gov > San Francisco Bay Area, CA Current Hazards Current Conditions Radar Forecasts Rivers and Lakes Climate and Past Weather Local Programs Click a location below for detailed forecast. Last Map Update: Wed, Nov. 8, 2023 at 5:03:31 am PST Watches, Warnings & Advisories Zoom Out The weather right now in San Francisco, CA is Cloudy. The current temperature is 62°F, and the expected high and low for today, Sunday, November 5, 2023, are 67° high temperature and 57°F low temperature. The wind is currently blowing at 5 miles per hour, and coming from the South Southwest. The wind is gusting to 5 mph. With the wind and ... San Francisco 7 day weather forecast including weather warnings, temperature, rain, wind, visibility, humidity and UV National - Current Temperatures National - First Alert Doppler Latest Stories More ... San Francisco's 'Rev. G' honored with national Jefferson Award for service, seeking peace\u001b[0m\u001b[32;1m\u001b[1;3m\n",
"Invoking: `bearly_interpreter` with `{'python_code': '(78 + 53 + 55) / 3'}`\n",
"\n",
"\n",
"\u001b[0m\u001b[33;1m\u001b[1;3m{'stdout': '', 'stderr': '', 'fileLinks': [], 'exitCode': 0}\u001b[0m\u001b[32;1m\u001b[1;3mThe average of the temperatures in Los Angeles, New York City, and San Francisco today is approximately 62 degrees Fahrenheit.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"{'input': \"What's the average of the temperatures in LA, NYC, and SF today?\",\n",
" 'output': 'The average of the temperatures in Los Angeles, New York City, and San Francisco today is approximately 62 degrees Fahrenheit.'}"
]
},
"execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_executor.invoke(\n",
" {\"input\": \"What's the average of the temperatures in LA, NYC, and SF today?\"}\n",
")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -205,7 +205,7 @@
"\n",
"- prompt: a simple prompt with placeholders for the user's question and then the `agent_scratchpad` (any intermediate steps)\n",
"- tools: we can attach the tools and `Response` format to the LLM as functions\n",
"- format scratchpad: in order to format the `agent_scratchpad` from intermediate steps, we will use the standard `format_to_openai_functions`. This takes intermediate steps and formats them as AIMessages and FunctionMessages.\n",
"- format scratchpad: in order to format the `agent_scratchpad` from intermediate steps, we will use the standard `format_to_openai_function_messages`. This takes intermediate steps and formats them as AIMessages and FunctionMessages.\n",
"- output parser: we will use our custom parser above to parse the response of the LLM\n",
"- AgentExecutor: we will use the standard AgentExecutor to run the loop of agent-tool-agent-tool..."
]
@@ -220,7 +220,7 @@
"from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.tools.render import format_tool_to_openai_function\n",
"from langchain.agents.format_scratchpad import format_to_openai_functions\n",
"from langchain.agents.format_scratchpad import format_to_openai_function_messages\n",
"from langchain.agents import AgentExecutor"
]
},
@@ -278,7 +278,7 @@
" {\n",
" \"input\": lambda x: x[\"input\"],\n",
" # Format agent scratchpad from intermediate steps\n",
" \"agent_scratchpad\": lambda x: format_to_openai_functions(\n",
" \"agent_scratchpad\": lambda x: format_to_openai_function_messages(\n",
" x[\"intermediate_steps\"]\n",
" ),\n",
" }\n",

View File

@@ -1,4 +1,4 @@
# Custom LLM agent
# Custom LLM Agent
This notebook goes through how to create your own custom LLM agent.

View File

@@ -1,13 +1,13 @@
# Custom LLM Agent (with a ChatModel)
# Custom LLM Chat Agent
This notebook goes through how to create your own custom agent based on a chat model.
This notebook explains how to create your own custom agent based on a chat model.
An LLM chat agent consists of three parts:
An LLM chat agent consists of four key components:
- `PromptTemplate`: This is the prompt template that can be used to instruct the language model on what to do
- `ChatModel`: This is the language model that powers the agent
- `stop` sequence: Instructs the LLM to stop generating as soon as this string is found
- `OutputParser`: This determines how to parse the LLM output into an `AgentAction` or `AgentFinish` object
- `PromptTemplate`: This is the prompt template that instructs the language model on what to do.
- `ChatModel`: This is the language model that powers the agent.
- `stop` sequence: Instructs the LLM to stop generating as soon as this string is found.
- `OutputParser`: This determines how to parse the LLM output into an `AgentAction` or `AgentFinish` object.
The LLM Agent is used in an `AgentExecutor`. This `AgentExecutor` can largely be thought of as a loop that:
1. Passes user input and any previous steps to the Agent (in this case, the LLM Agent)

View File

@@ -3,7 +3,7 @@
This walkthrough demonstrates how to replicate the [MRKL](https://arxiv.org/pdf/2205.00445.pdf) system using agents.
This uses the example Chinook database.
To set it up follow the instructions on https://database.guide/2-sample-databases-sqlite/, placing the `.db` file in a notebooks folder at the root of this repository.
To set it up, follow the instructions on https://database.guide/2-sample-databases-sqlite/ and place the `.db` file in a "notebooks" folder at the root of this repository.
```python
from langchain.chains import LLMMathChain
@@ -127,7 +127,7 @@ mrkl.run("What is the full name of the artist who recently released an album cal
</CodeOutputBlock>
## With a chat model
## Using a Chat Model
```python
from langchain.chat_models import ChatOpenAI

View File

@@ -0,0 +1,670 @@
{
"cells": [
{
"cell_type": "raw",
"id": "97e00fdb-f771-473f-90fc-d6038e19fd9a",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 3\n",
"sidebar_class_name: hidden\n",
"title: Agents\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "f4c03f40-1328-412d-8a48-1db0cd481b77",
"metadata": {},
"source": [
"The core idea of agents is to use a language model to choose a sequence of actions to take.\n",
"In chains, a sequence of actions is hardcoded (in code).\n",
"In agents, a language model is used as a reasoning engine to determine which actions to take and in which order.\n",
"\n",
"## Concepts\n",
"There are several key components here:\n",
"\n",
"### Agent\n",
"\n",
"This is the chain responsible for deciding what step to take next.\n",
"This is powered by a language model and a prompt.\n",
"The inputs to this chain are:\n",
"\n",
"1. Tools: Descriptions of available tools\n",
"2. User input: The high level objective\n",
"3. Intermediate steps: Any (action, tool output) pairs previously executed in order to achieve the user input\n",
"\n",
"The output is the next action(s) to take or the final response to send to the user (`AgentAction`s or `AgentFinish`). An action specifies a tool and the input to that tool. \n",
"\n",
"Different agents have different prompting styles for reasoning, different ways of encoding inputs, and different ways of parsing the output.\n",
"For a full list of built-in agents see [agent types](/docs/modules/agents/agent_types/).\n",
"You can also **easily build custom agents**, which we show how to do in the Get started section below.\n",
"\n",
"### Tools\n",
"\n",
"Tools are functions that an agent can invoke.\n",
"There are two important design considerations around tools:\n",
"\n",
"1. Giving the agent access to the right tools\n",
"2. Describing the tools in a way that is most helpful to the agent\n",
"\n",
"Without thinking through both, you won't be able to build a working agent.\n",
"If you don't give the agent access to a correct set of tools, it will never be able to accomplish the objectives you give it.\n",
"If you don't describe the tools well, the agent won't know how to use them properly.\n",
"\n",
"LangChain provides a wide set of built-in tools, but also makes it easy to define your own (including custom descriptions).\n",
"For a full list of built-in tools, see the [tools integrations section](/docs/integrations/tools/)\n",
"\n",
"### Toolkits\n",
"\n",
"For many common tasks, an agent will need a set of related tools.\n",
"For this LangChain provides the concept of toolkits - groups of around 3-5 tools needed to accomplish specific objectives.\n",
"For example, the GitHub toolkit has a tool for searching through GitHub issues, a tool for reading a file, a tool for commenting, etc.\n",
"\n",
"LangChain provides a wide set of toolkits to get started.\n",
"For a full list of built-in toolkits, see the [toolkits integrations section](/docs/integrations/toolkits/)\n",
"\n",
"### AgentExecutor\n",
"\n",
"The agent executor is the runtime for an agent.\n",
"This is what actually calls the agent, executes the actions it chooses, passes the action outputs back to the agent, and repeats.\n",
"In pseudocode, this looks roughly like:\n",
"\n",
"```python\n",
"next_action = agent.get_action(...)\n",
"while next_action != AgentFinish:\n",
" observation = run(next_action)\n",
" next_action = agent.get_action(..., next_action, observation)\n",
"return next_action\n",
"```\n",
"\n",
"While this may seem simple, there are several complexities this runtime handles for you, including:\n",
"\n",
"1. Handling cases where the agent selects a non-existent tool\n",
"2. Handling cases where the tool errors\n",
"3. Handling cases where the agent produces output that cannot be parsed into a tool invocation\n",
"4. Logging and observability at all levels (agent decisions, tool calls) to stdout and/or to [LangSmith](/docs/langsmith).\n",
"\n",
"### Other types of agent runtimes\n",
"\n",
"The `AgentExecutor` class is the main agent runtime supported by LangChain.\n",
"However, there are other, more experimental runtimes we also support.\n",
"These include:\n",
"\n",
"- [Plan-and-execute Agent](/docs/use_cases/more/agents/autonomous_agents/plan_and_execute)\n",
"- [Baby AGI](/docs/use_cases/more/agents/autonomous_agents/baby_agi)\n",
"- [Auto GPT](/docs/use_cases/more/agents/autonomous_agents/autogpt)\n",
"\n",
"You can also always create your own custom execution logic, which we show how to do below.\n",
"\n",
"## Get started\n",
"\n",
"To best understand the agent framework, lets build an agent from scratch using LangChain Expression Language (LCEL).\n",
"We'll need to build the agent itself, define custom tools, and run the agent and tools in a custom loop. At the end we'll show how to use the standard LangChain `AgentExecutor` to make execution easier.\n",
"\n",
"Some important terminology (and schema) to know:\n",
"\n",
"1. `AgentAction`: This is a dataclass that represents the action an agent should take. It has a `tool` property (which is the name of the tool that should be invoked) and a `tool_input` property (the input to that tool)\n",
"2. `AgentFinish`: This is a dataclass that signifies that the agent has finished and should return to the user. It has a `return_values` parameter, which is a dictionary to return. It often only has one key - `output` - that is a string, and so often it is just this key that is returned.\n",
"3. `intermediate_steps`: These represent previous agent actions and corresponding outputs that are passed around. These are important to pass to future iteration so the agent knows what work it has already done. This is typed as a `List[Tuple[AgentAction, Any]]`. Note that observation is currently left as type `Any` to be maximally flexible. In practice, this is often a string.\n",
"\n",
"### Setup: LangSmith\n",
"\n",
"By definition, agents take a self-determined, input-dependent sequence of steps before returning a user-facing output. This makes debugging these systems particularly tricky, and observability particularly important. [LangSmith](/docs/langsmith) is especially useful for such cases.\n",
"\n",
"When building with LangChain, any built-in agent or custom agent built with LCEL will automatically be traced in LangSmith. And if we use the `AgentExecutor`, we'll get full tracing of not only the agent planning steps but also the tool inputs and outputs.\n",
"\n",
"To set up LangSmith we just need set the following environment variables:\n",
"\n",
"```bash\n",
"export LANGCHAIN_TRACING_V2=\"true\"\n",
"export LANGCHAIN_API_KEY=\"<your-api-key>\"\n",
"```\n",
"\n",
"### Define the agent\n",
"\n",
"We first need to create our agent.\n",
"This is the chain responsible for determining what action to take next.\n",
"\n",
"In this example, we will use OpenAI Function Calling to create this agent.\n",
"**This is generally the most reliable way to create agents.**\n",
"\n",
"For this guide, we will construct a custom agent that has access to a custom tool.\n",
"We are choosing this example because for most real world use cases you will NEED to customize either the agent or the tools. \n",
"We'll create a simple tool that computes the length of a word.\n",
"This is useful because it's actually something LLMs can mess up due to tokenization.\n",
"We will first create it WITHOUT memory, but we will then show how to add memory in.\n",
"Memory is needed to enable conversation.\n",
"\n",
"First, let's load the language model we're going to use to control the agent."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "89cf72b4-6046-4b47-8f27-5522d8cb8036",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo\", temperature=0)"
]
},
{
"cell_type": "markdown",
"id": "0afe32b4-5b67-49fd-9f05-e94c46fbcc08",
"metadata": {},
"source": [
"We can see that it struggles to count the letters in the string \"educa\"."
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "d8eafbad-4084-4f27-b880-308430c44bcf",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='There are 6 letters in the word \"educa\".')"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm.invoke(\"how many letters in the word educa?\")"
]
},
{
"cell_type": "markdown",
"id": "20f353a1-7b03-4692-ba6c-581d82de454b",
"metadata": {},
"source": [
"Next, let's define some tools to use.\n",
"Let's write a really simple Python function to calculate the length of a word that is passed in."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "6bf6c6a6-4aa2-44fc-9d90-5981de827c2f",
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import tool\n",
"\n",
"@tool\n",
"def get_word_length(word: str) -> int:\n",
" \"\"\"Returns the length of a word.\"\"\"\n",
" return len(word)\n",
"\n",
"\n",
"tools = [get_word_length]"
]
},
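{
"cell_type": "markdown",
"id": "9d4f3b8a-0c27-4e69-b1d5-8f2a6c03e741",
"metadata": {},
"source": [
"The `@tool` decorator derives the tool's name and description from the function itself. A minimal sketch of inspecting (and directly running) the resulting tool:\n",
"\n",
"```python\n",
"print(get_word_length.name)  # \"get_word_length\"\n",
"print(get_word_length.description)  # derived from the docstring\n",
"print(get_word_length.run(\"educa\"))  # 5\n",
"```"
]
},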
{
"cell_type": "markdown",
"id": "22dc3aeb-012f-4fe6-a980-2bd6d7612e1d",
"metadata": {},
"source": [
"Now let us create the prompt.\n",
"Because OpenAI Function Calling is finetuned for tool usage, we hardly need any instructions on how to reason, or how to output format.\n",
"We will just have two input variables: `input` and `agent_scratchpad`. `input` should be a string containing the user objective. `agent_scratchpad` should be a sequence of messages that contains the previous agent tool invocations and the corresponding tool outputs."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "62c98f77-d203-42cf-adcf-7da9ee93f7c8",
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are very powerful assistant, but bad at calculating lengths of words.\",\n",
" ),\n",
" (\"user\", \"{input}\"),\n",
" MessagesPlaceholder(variable_name=\"agent_scratchpad\"),\n",
" ]\n",
")"
]
},
{
"cell_type": "markdown",
"id": "be29b821-b988-4921-8a1f-f04ec87e2863",
"metadata": {},
"source": [
"How does the agent know what tools it can use?\n",
"In this case we're relying on OpenAI function calling LLMs, which take functions as a separate argument and have been specifically trained to know when to invoke those functions.\n",
"\n",
"To pass in our tools to the agent, we just need to format them to the OpenAI function format and pass them to our model. (By `bind`-ing the functions, we're making sure that they're passed in each time the model is invoked.)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "5231ffd7-a044-4ebd-8e31-d1fe334334c6",
"metadata": {},
"outputs": [],
"source": [
"from langchain.tools.render import format_tool_to_openai_function\n",
"\n",
"llm_with_tools = llm.bind(functions=[format_tool_to_openai_function(t) for t in tools])"
]
},
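{
"cell_type": "markdown",
"id": "e2a90c54-7d18-4b3f-a6c1-53e8b07f2d96",
"metadata": {},
"source": [
"To see the schema the model receives, you can print the converted function. A sketch, assuming the conversion yields a plain dict (exact fields may vary by version):\n",
"\n",
"```python\n",
"import json\n",
"\n",
"# Expect a name, a docstring-derived description, and a JSON-schema \"parameters\" block\n",
"print(json.dumps(format_tool_to_openai_function(get_word_length), indent=2))\n",
"```"
]
},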
{
"cell_type": "markdown",
"id": "6efbf02b-8686-4559-8b4c-c2be803cb475",
"metadata": {},
"source": [
"Putting those pieces together, we can now create the agent.\n",
"We will import two last utility functions: a component for formatting intermediate steps (agent action, tool output pairs) to input messages that can be sent to the model, and a component for converting the output message into an agent action/agent finish."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "b2f24d11-1133-48f3-ba70-fc3dd1da5f2c",
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents.format_scratchpad import format_to_openai_function_messages\n",
"from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser\n",
"\n",
"agent = (\n",
" {\n",
" \"input\": lambda x: x[\"input\"],\n",
" \"agent_scratchpad\": lambda x: format_to_openai_function_messages(\n",
" x[\"intermediate_steps\"]\n",
" ),\n",
" }\n",
" | prompt\n",
" | llm_with_tools\n",
" | OpenAIFunctionsAgentOutputParser()\n",
")"
]
},
{
"cell_type": "markdown",
"id": "7d55d2ad-6608-44ab-9949-b16ae8031f53",
"metadata": {},
"source": [
"Now that we have our agent, let's play around with it!\n",
"Let's pass in a simple question and empty intermediate steps and see what it returns:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "01cb7adc-97b6-4713-890e-5d1ddeba909c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AgentActionMessageLog(tool='get_word_length', tool_input={'word': 'educa'}, log=\"\\nInvoking: `get_word_length` with `{'word': 'educa'}`\\n\\n\\n\", message_log=[AIMessage(content='', additional_kwargs={'function_call': {'arguments': '{\\n \"word\": \"educa\"\\n}', 'name': 'get_word_length'}})])"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent.invoke({\"input\": \"how many letters in the word educa?\", \"intermediate_steps\": []})"
]
},
{
"cell_type": "markdown",
"id": "689ec562-3ec1-4b28-928b-c78c788aa097",
"metadata": {},
"source": [
"We can see that it responds with an `AgentAction` to take (it's actually an `AgentActionMessageLog` - a subclass of `AgentAction` which also tracks the full message log). \n",
"\n",
"If we've set up LangSmith, we'll see a trace that let's us inspect the input and output to each step in the sequence: https://smith.langchain.com/public/04110122-01a8-413c-8cd0-b4df6eefa4b7/r\n",
"\n",
"### Define the runtime\n",
"\n",
"So this is just the first step - now we need to write a runtime for this.\n",
"The simplest one is just one that continuously loops, calling the agent, then taking the action, and repeating until an `AgentFinish` is returned.\n",
"Let's code that up below:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "29bbf63b-f866-4b8c-aeea-2f9cffe70b78",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"TOOL NAME: get_word_length\n",
"TOOL INPUT: {'word': 'educa'}\n",
"There are 5 letters in the word \"educa\".\n"
]
}
],
"source": [
"from langchain.schema.agent import AgentFinish\n",
"\n",
"user_input = \"how many letters in the word educa?\"\n",
"intermediate_steps = []\n",
"while True:\n",
" output = agent.invoke(\n",
" {\n",
" \"input\": user_input,\n",
" \"intermediate_steps\": intermediate_steps,\n",
" }\n",
" )\n",
" if isinstance(output, AgentFinish):\n",
" final_result = output.return_values[\"output\"]\n",
" break\n",
" else:\n",
" print(f\"TOOL NAME: {output.tool}\")\n",
" print(f\"TOOL INPUT: {output.tool_input}\")\n",
" tool = {\"get_word_length\": get_word_length}[output.tool]\n",
" observation = tool.run(output.tool_input)\n",
" intermediate_steps.append((output, observation))\n",
"print(final_result)"
]
},
{
"cell_type": "markdown",
"id": "2de8e688-fed4-4efc-a2bc-8d3c504dd764",
"metadata": {},
"source": [
"Woo! It's working.\n",
"\n",
"### Using AgentExecutor\n",
"\n",
"To simplify this a bit, we can import and use the `AgentExecutor` class.\n",
"This bundles up all of the above and adds in error handling, early stopping, tracing, and other quality-of-life improvements that reduce safeguards you need to write."
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "9c94ee41-f146-403e-bd0a-5756a53d7842",
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import AgentExecutor\n",
"\n",
"agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)"
]
},
{
"cell_type": "markdown",
"id": "9cbd94a2-b456-45e6-835c-a33be3475119",
"metadata": {},
"source": [
"Now let's test it out!"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "6e1e64c7-627c-4713-82ca-8f6db3d9c8f5",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\n",
"Invoking: `get_word_length` with `{'word': 'educa'}`\n",
"\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3m5\u001b[0m\u001b[32;1m\u001b[1;3mThere are 5 letters in the word \"educa\".\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"{'input': 'how many letters in the word educa?',\n",
" 'output': 'There are 5 letters in the word \"educa\".'}"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_executor.invoke({\"input\": \"how many letters in the word educa?\"})"
]
},
{
"cell_type": "markdown",
"id": "1578aede-2ad2-4c15-832e-3e0a1660b342",
"metadata": {},
"source": [
"And looking at the trace, we can see that all of our agent calls and tool invocations are automatically logged: https://smith.langchain.com/public/957b7e26-bef8-4b5b-9ca3-4b4f1c96d501/r"
]
},
{
"cell_type": "markdown",
"id": "a29c0705-b9bc-419f-aae4-974fc092faab",
"metadata": {},
"source": [
"### Adding memory\n",
"\n",
"This is great - we have an agent!\n",
"However, this agent is stateless - it doesn't remember anything about previous interactions.\n",
"This means you can't ask follow up questions easily.\n",
"Let's fix that by adding in memory.\n",
"\n",
"In order to do this, we need to do two things:\n",
"\n",
"1. Add a place for memory variables to go in the prompt\n",
"2. Keep track of the chat history\n",
"\n",
"First, let's add a place for memory in the prompt.\n",
"We do this by adding a placeholder for messages with the key `\"chat_history\"`.\n",
"Notice that we put this ABOVE the new user input (to follow the conversation flow)."
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "ceef8c26-becc-4893-b55c-efcf52c4b9d9",
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts import MessagesPlaceholder\n",
"\n",
"MEMORY_KEY = \"chat_history\"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are very powerful assistant, but bad at calculating lengths of words.\",\n",
" ),\n",
" MessagesPlaceholder(variable_name=MEMORY_KEY),\n",
" (\"user\", \"{input}\"),\n",
" MessagesPlaceholder(variable_name=\"agent_scratchpad\"),\n",
" ]\n",
")"
]
},
{
"cell_type": "markdown",
"id": "fc4f1e1b-695d-4b25-88aa-d46c015e6342",
"metadata": {},
"source": [
"We can then set up a list to track the chat history"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "935abfee-ab5d-4e9a-b33c-6a40a6fa4777",
"metadata": {},
"outputs": [],
"source": [
"from langchain.schema.messages import HumanMessage, AIMessage\n",
"\n",
"chat_history = []"
]
},
{
"cell_type": "markdown",
"id": "c107b5dd-b934-48a0-a8c5-3b5bd76f2b98",
"metadata": {},
"source": [
"We can then put it all together!"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "24b094ff-bbea-45c4-8000-ed2b5de459a9",
"metadata": {},
"outputs": [],
"source": [
"agent = (\n",
" {\n",
" \"input\": lambda x: x[\"input\"],\n",
" \"agent_scratchpad\": lambda x: format_to_openai_function_messages(\n",
" x[\"intermediate_steps\"]\n",
" ),\n",
" \"chat_history\": lambda x: x[\"chat_history\"],\n",
" }\n",
" | prompt\n",
" | llm_with_tools\n",
" | OpenAIFunctionsAgentOutputParser()\n",
")\n",
"agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)"
]
},
{
"cell_type": "markdown",
"id": "e34ee9bd-20be-4ab7-b384-a5f0335e7611",
"metadata": {},
"source": [
"When running, we now need to track the inputs and outputs as chat history\n"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "f238022b-3348-45cd-bd6a-c6770b7dc600",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\n",
"Invoking: `get_word_length` with `{'word': 'educa'}`\n",
"\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3m5\u001b[0m\u001b[32;1m\u001b[1;3mThere are 5 letters in the word \"educa\".\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mNo, \"educa\" is not a real word in English.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"{'input': 'is that a real word?',\n",
" 'chat_history': [HumanMessage(content='how many letters in the word educa?'),\n",
" AIMessage(content='There are 5 letters in the word \"educa\".')],\n",
" 'output': 'No, \"educa\" is not a real word in English.'}"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"input1 = \"how many letters in the word educa?\"\n",
"result = agent_executor.invoke({\"input\": input1, \"chat_history\": chat_history})\n",
"chat_history.extend([\n",
" HumanMessage(content=input1),\n",
" AIMessage(content=result[\"output\"]),\n",
"])\n",
"agent_executor.invoke({\"input\": \"is that a real word?\", \"chat_history\": chat_history})"
]
},
{
"cell_type": "markdown",
"id": "6ba072cd-eb58-409d-83be-55c8110e37f0",
"metadata": {},
"source": [
"Here's the LangSmith trace: https://smith.langchain.com/public/1e1b7e07-3220-4a6c-8a1e-f04182a755b3/r"
]
},
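{
"cell_type": "markdown",
"id": "c5b1e8f3-9a42-4d07-bc6e-2f81d409a357",
"metadata": {},
"source": [
"Since this bookkeeping repeats on every turn, you could wrap it in a small helper. This is a hypothetical convenience function, not part of the guide itself:\n",
"\n",
"```python\n",
"def chat(user_input: str) -> str:\n",
"    # Run the agent, then record the exchange so follow-up questions have context\n",
"    result = agent_executor.invoke({\"input\": user_input, \"chat_history\": chat_history})\n",
"    chat_history.extend(\n",
"        [HumanMessage(content=user_input), AIMessage(content=result[\"output\"])]\n",
"    )\n",
"    return result[\"output\"]\n",
"```"
]
},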
{
"cell_type": "markdown",
"id": "9e8b9127-758b-4dab-b093-2e6357dca3e6",
"metadata": {},
"source": [
"## Next Steps\n",
"\n",
"Awesome! You've now run your first end-to-end agent.\n",
"To dive deeper, you can:\n",
"\n",
"- Check out all the different [agent types](/docs/modules/agents/agent_types/) supported\n",
"- Learn all the controls for [AgentExecutor](/docs/modules/agents/how_to/)\n",
"- Explore the how-to's of [tools](/docs/modules/agents/tools/) and all the [tool integrations](/docs/integrations/tools)\n",
"- See a full list of all the off-the-shelf [toolkits](/docs/integrations/toolkits/) we provide"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -1,312 +0,0 @@
---
sidebar_position: 4
---
# Agents
The core idea of agents is to use an LLM to choose a sequence of actions to take.
In chains, a sequence of actions is hardcoded (in code).
In agents, a language model is used as a reasoning engine to determine which actions to take and in which order.
Some important terminology (and schema) to know:
1. `AgentAction`: This is a dataclass that represents the action an agent should take. It has a `tool` property (which is the name of the tool that should be invoked) and a `tool_input` property (the input to that tool)
2. `AgentFinish`: This is a dataclass that signifies that the agent has finished and should return to the user. It has a `return_values` parameter, which is a dictionary to return. It often only has one key - `output` - that is a string, and so often it is just this key that is returned.
3. `intermediate_steps`: These represent previous agent actions and corresponding outputs that are passed around. These are important to pass to future iteration so the agent knows what work it has already done. This is typed as a `List[Tuple[AgentAction, Any]]`. Note that observation is currently left as type `Any` to be maximally flexible. In practice, this is often a string.
There are several key components here:
## Agent
This is the chain responsible for deciding what step to take next.
This is powered by a language model and a prompt.
The inputs to this chain are:
1. List of available tools
2. User input
3. Any previously executed steps (`intermediate_steps`)
This chain then returns either the next action to take or the final response to send to the user (`AgentAction` or `AgentFinish`).
Different agents have different prompting styles for reasoning, different ways of encoding input, and different ways of parsing the output.
For a full list of agent types see [agent types](/docs/modules/agents/agent_types/)
## Tools
Tools are functions that an agent calls.
There are two important considerations here:
1. Giving the agent access to the right tools
2. Describing the tools in a way that is most helpful to the agent
Without both, the agent you are trying to build will not work.
If you don't give the agent access to a correct set of tools, it will never be able to accomplish the objective.
If you don't describe the tools properly, the agent won't know how to properly use them.
LangChain provides a wide set of tools to get started, but also makes it easy to define your own (including custom descriptions).
For a full list of tools, see [here](/docs/modules/agents/tools/)
## Toolkits
Often the set of tools an agent has access to is more important than a single tool.
For this LangChain provides the concept of toolkits - groups of tools needed to accomplish specific objectives.
There are generally around 3-5 tools in a toolkit.
LangChain provides a wide set of toolkits to get started.
For a full list of toolkits, see [here](/docs/modules/agents/toolkits/)
## AgentExecutor
The agent executor is the runtime for an agent.
This is what actually calls the agent and executes the actions it chooses.
Pseudocode for this runtime is below:
```python
next_action = agent.get_action(...)
while next_action != AgentFinish:
observation = run(next_action)
next_action = agent.get_action(..., next_action, observation)
return next_action
```
While this may seem simple, there are several complexities this runtime handles for you, including:
1. Handling cases where the agent selects a non-existent tool
2. Handling cases where the tool errors
3. Handling cases where the agent produces output that cannot be parsed into a tool invocation
4. Logging and observability at all levels (agent decisions, tool calls) either to stdout or [LangSmith](https://smith.langchain.com).
## Other types of agent runtimes
The `AgentExecutor` class is the main agent runtime supported by LangChain.
However, there are other, more experimental runtimes we also support.
These include:
- [Plan-and-execute Agent](/docs/use_cases/more/agents/autonomous_agents/plan_and_execute)
- [Baby AGI](/docs/use_cases/more/agents/autonomous_agents/baby_agi)
- [Auto GPT](/docs/use_cases/more/agents/autonomous_agents/autogpt)
## Get started
This will go over how to get started building an agent.
We will create this agent from scratch, using LangChain Expression Language.
We will then define custom tools, and then run it in a custom loop (we will also show how to use the standard LangChain `AgentExecutor`).
### Set up the agent
We first need to create our agent.
This is the chain responsible for determining what action to take next.
In this example, we will use OpenAI Function Calling to create this agent.
This is generally the most reliable way to create agents.
In this example we will show what it is like to construct this agent from scratch, using LangChain Expression Language.
For this guide, we will construct a custom agent that has access to a custom tool.
We are choosing this example because we think for most use cases you will NEED to customize either the agent or the tools.
The tool we will give the agent is a tool to calculate the length of a word.
This is useful because this is actually something LLMs can mess up due to tokenization.
We will first create it WITHOUT memory, but we will then show how to add memory in.
Memory is needed to enable conversation.
First, let's load the language model we're going to use to control the agent.
```python
from langchain.chat_models import ChatOpenAI
llm = ChatOpenAI(temperature=0)
```
Next, let's define some tools to use.
Let's write a really simple Python function to calculate the length of a word that is passed in.
```python
from langchain.agents import tool
@tool
def get_word_length(word: str) -> int:
"""Returns the length of a word."""
return len(word)
tools = [get_word_length]
```
Now let us create the prompt.
Because OpenAI Function Calling is fine-tuned for tool usage, we hardly need any instructions on how to reason or how to format the output.
We will just have two input variables: `input` (for the user question) and `agent_scratchpad` (for any previous steps taken)
```python
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder
prompt = ChatPromptTemplate.from_messages([
("system", "You are very powerful assistant, but bad at calculating lengths of words."),
("user", "{input}"),
MessagesPlaceholder(variable_name="agent_scratchpad"),
])
```
How does the agent know what tools it can use?
Those are passed in as a separate argument, so we can bind those as keyword arguments to the LLM.
```python
from langchain.tools.render import format_tool_to_openai_function
llm_with_tools = llm.bind(
functions=[format_tool_to_openai_function(t) for t in tools]
)
```
Putting those pieces together, we can now create the agent.
We will import two last utility functions: a component for formatting intermediate steps to messages, and a component for converting the output message into an agent action/agent finish.
```python
from langchain.agents.format_scratchpad import format_to_openai_functions
from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser
agent = {
"input": lambda x: x["input"],
"agent_scratchpad": lambda x: format_to_openai_functions(x['intermediate_steps'])
} | prompt | llm_with_tools | OpenAIFunctionsAgentOutputParser()
```
Now that we have our agent, let's play around with it!
Let's pass in a simple question and empty intermediate steps and see what it returns:
```python
agent.invoke({
"input": "how many letters in the word educa?",
"intermediate_steps": []
})
```
We can see that it responds with an `AgentAction` to take (it's actually an `AgentActionMessageLog` - a subclass of `AgentAction` which also tracks the full message log).
So this is just the first step - now we need to write a runtime for this.
The simplest one is just one that continuously loops, calling the agent, then taking the action, and repeating until an `AgentFinish` is returned.
Let's code that up below:
```python
from langchain.schema.agent import AgentFinish
intermediate_steps = []
while True:
output = agent.invoke({
"input": "how many letters in the word educa?",
"intermediate_steps": intermediate_steps
})
if isinstance(output, AgentFinish):
final_result = output.return_values["output"]
break
else:
print(output.tool, output.tool_input)
tool = {
"get_word_length": get_word_length
}[output.tool]
observation = tool.run(output.tool_input)
intermediate_steps.append((output, observation))
print(final_result)
```
We can see this prints out the following:
<CodeOutputBlock lang="python">
```
get_word_length {'word': 'educa'}
There are 5 letters in the word "educa".
```
</CodeOutputBlock>
Woo! It's working.
To simplify this a bit, we can import and use the `AgentExecutor` class.
This bundles up all of the above and adds in error handling, early stopping, tracing, and other quality-of-life improvements that reduce safeguards you need to write.
```python
from langchain.agents import AgentExecutor
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
```
Now let's test it out!
```python
agent_executor.invoke({"input": "how many letters in the word educa?"})
```
<CodeOutputBlock lang="python">
```
> Entering new AgentExecutor chain...
Invoking: `get_word_length` with `{'word': 'educa'}`
5
There are 5 letters in the word "educa".
> Finished chain.
'There are 5 letters in the word "educa".'
```
</CodeOutputBlock>
This is great - we have an agent!
However, this agent is stateless - it doesn't remember anything about previous interactions.
This means you can't ask follow up questions easily.
Let's fix that by adding in memory.
In order to do this, we need to do two things:
1. Add a place for memory variables to go in the prompt
2. Keep track of the chat history
First, let's add a place for memory in the prompt.
We do this by adding a placeholder for messages with the key `"chat_history"`.
Notice that we put this ABOVE the new user input (to follow the conversation flow).
```python
from langchain.prompts import MessagesPlaceholder
MEMORY_KEY = "chat_history"
prompt = ChatPromptTemplate.from_messages([
("system", "You are very powerful assistant, but bad at calculating lengths of words."),
MessagesPlaceholder(variable_name=MEMORY_KEY),
("user", "{input}"),
MessagesPlaceholder(variable_name="agent_scratchpad"),
])
```
We can then set up a list to track the chat history
```
from langchain.schema.messages import HumanMessage, AIMessage
chat_history = []
```
We can then put it all together!
```python
agent = {
"input": lambda x: x["input"],
"agent_scratchpad": lambda x: format_to_openai_functions(x['intermediate_steps']),
"chat_history": lambda x: x["chat_history"]
} | prompt | llm_with_tools | OpenAIFunctionsAgentOutputParser()
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
```
When running, we now need to track the inputs and outputs as chat history
```
input1 = "how many letters in the word educa?"
result = agent_executor.invoke({"input": input1, "chat_history": chat_history})
chat_history.append(HumanMessage(content=input1))
chat_history.append(AIMessage(content=result['output']))
agent_executor.invoke({"input": "is that a real word?", "chat_history": chat_history})
```
## Next Steps
Awesome! You've now run your first end-to-end agent.
To dive deeper, you can:
- Check out all the different [agent types](/docs/modules/agents/agent_types/) supported
- Learn all the controls for [AgentExecutor](/docs/modules/agents/how_to/)
- See a full list of all the off-the-shelf [toolkits](/docs/modules/agents/toolkits/) we provide
- Explore all the individual [tools](/docs/modules/agents/tools/) supported

View File

@@ -1,10 +0,0 @@
---
sidebar_position: 3
---
# Toolkits
:::info
Head to [Integrations](/docs/integrations/toolkits/) for documentation on built-in toolkit integrations.
:::
Toolkits are collections of tools that are designed to be used together for specific tasks and have convenience loading methods.

View File

@@ -29,7 +29,8 @@
"outputs": [],
"source": [
"# Import things that are needed generically\n",
"from langchain.chains import LLMMathChain\nfrom langchain.utilities import SerpAPIWrapper\n",
"from langchain.chains import LLMMathChain\n",
"from langchain.utilities import SerpAPIWrapper\n",
"from langchain.agents import AgentType, initialize_agent\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.tools import BaseTool, StructuredTool, Tool, tool"
@@ -230,7 +231,7 @@
"id": "6f12eaf0",
"metadata": {},
"source": [
"### Subclassing the BaseTool class\n",
"### Subclassing the BaseTool\n",
"\n",
"You can also directly subclass `BaseTool`. This is useful if you want more control over the instance variables or if you want to propagate callbacks to nested chains or other tools."
]
@@ -367,7 +368,7 @@
"id": "824eaf74",
"metadata": {},
"source": [
"## Using the `tool` decorator\n",
"### Using the decorator\n",
"\n",
"To make it easier to define custom tools, a `@tool` decorator is provided. This decorator can be used to quickly create a `Tool` from a simple function. The decorator uses the function name as the tool name by default, but this can be overridden by passing a string as the first argument. Additionally, the decorator will use the function's docstring as the tool's description."
]
@@ -531,7 +532,7 @@
"id": "fb0a38eb",
"metadata": {},
"source": [
"## Subclassing the BaseTool\n",
"### Subclassing the BaseTool\n",
"\n",
"The BaseTool automatically infers the schema from the `_run` method's signature."
]
@@ -624,7 +625,7 @@
"id": "7d68b0ac",
"metadata": {},
"source": [
"## Using the decorator\n",
"### Using the decorator\n",
"\n",
"The `tool` decorator creates a structured tool automatically if the signature has multiple arguments."
]
@@ -774,7 +775,8 @@
"from langchain.agents import initialize_agent, Tool\n",
"from langchain.agents import AgentType\n",
"from langchain.llms import OpenAI\n",
"from langchain.chains import LLMMathChain\nfrom langchain.utilities import SerpAPIWrapper\n",
"from langchain.chains import LLMMathChain\n",
"from langchain.utilities import SerpAPIWrapper\n",
"\n",
"search = SerpAPIWrapper()\n",
"tools = [\n",

View File

@@ -4,17 +4,17 @@ sidebar_position: 2
# Tools
:::info
Head to [Integrations](/docs/integrations/tools/) for documentation on built-in tool integrations.
For documentation on built-in tool integrations, visit [Integrations](/docs/integrations/tools/).
:::
Tools are interfaces that an agent can use to interact with the world.
## Get started
## Getting Started
Tools are functions that agents can use to interact with the world.
These tools can be generic utilities (e.g. search), other chains, or even other agents.
Currently, tools can be loaded with the following snippet:
Currently, tools can be loaded using the following snippet:
```python
from langchain.agents import load_tools

View File

@@ -0,0 +1,10 @@
---
sidebar_position: 3
---
# Toolkits
:::info
For documentation on built-in toolkit integrations, visit [Integrations](/docs/integrations/toolkits/).
:::
Toolkits are collections of tools that are designed to be used together for specific tasks and have convenient loading methods.

View File

@@ -1,5 +1,6 @@
---
sidebar_position: 5
sidebar_class_name: hidden
---
# Callbacks

View File

@@ -8,6 +8,7 @@
"---\n",
"sidebar_position: 2\n",
"title: Chains\n",
"sidebar_class_name: hidden\n",
"---"
]
},

View File

@@ -1,5 +1,6 @@
---
sidebar_position: 1
sidebar_class_name: hidden
---
# Retrieval

View File

@@ -58,9 +58,9 @@
"1. Do not use with a store that has been pre-populated with content independently of the indexing API, as the record manager will not know that records have been inserted previously.\n",
"2. Only works with LangChain `vectorstore`'s that support:\n",
" * document addition by id (`add_documents` method with `ids` argument)\n",
" * delete by id (`delete` method with)\n",
" * delete by id (`delete` method with `ids` argument)\n",
"\n",
"Compatible Vectorstores: `AnalyticDB`, `AwaDB`, `Bagel`, `Cassandra`, `Chroma`, `DashVector`, `DeepLake`, `Dingo`, `ElasticVectorSearch`, `ElasticsearchStore`, `FAISS`, `MyScale`, `PGVector`, `Pinecone`, `Qdrant`, `Redis`, `ScaNN`, `SupabaseVectorStore`, `TimescaleVector`, `Vald`, `Vearch`, `VespaStore`, `Weaviate`, `ZepVectorStore`.\n",
"Compatible Vectorstores: `AnalyticDB`, `AstraDB`, `AwaDB`, `Bagel`, `Cassandra`, `Chroma`, `DashVector`, `DeepLake`, `Dingo`, `ElasticVectorSearch`, `ElasticsearchStore`, `FAISS`, `MyScale`, `PGVector`, `Pinecone`, `Qdrant`, `Redis`, `ScaNN`, `SupabaseVectorStore`, `TimescaleVector`, `Vald`, `Vearch`, `VespaStore`, `Weaviate`, `ZepVectorStore`.\n",
" \n",
"## Caution\n",
"\n",

View File

@@ -4,16 +4,18 @@ sidebar_class_name: hidden
# Modules
LangChain provides standard, extendable interfaces and external integrations for the following modules, listed from least to most complex:
LangChain provides standard, extendable interfaces and external integrations for the following main modules:
#### [Model I/O](/docs/modules/model_io/)
Interface with language models
#### [Retrieval](/docs/modules/data_connection/)
Interface with application-specific data
#### [Chains](/docs/modules/chains/)
Construct sequences of calls
#### [Agents](/docs/modules/agents/)
Let chains choose which tools to use given high-level directives
## Additional
#### [Chains](/docs/modules/chains/)
Common, building block compositions
#### [Memory](/docs/modules/memory/)
Persist application state between runs of a chain
#### [Callbacks](/docs/modules/callbacks/)

View File

@@ -1,5 +1,6 @@
---
sidebar_position: 3
sidebar_class_name: hidden
---
# Memory

View File

@@ -24,9 +24,7 @@
"While chat models use language models under the hood, the interface they use is a bit different.\n",
"Rather than using a \"text in, text out\" API, they use an interface where \"chat messages\" are the inputs and outputs.\n",
"\n",
"## Get started\n",
"\n",
"### Setup\n",
"## Setup\n",
"\n",
"For this example we'll need to install the OpenAI Python package:\n",
"\n",
@@ -79,7 +77,7 @@
"id": "4ca3a777-8641-42fb-9e02-a7770a633d29",
"metadata": {},
"source": [
"### Messages\n",
"## Messages\n",
"\n",
"The chat model interface is based around messages rather than raw text.\n",
"The types of messages currently supported in LangChain are `AIMessage`, `HumanMessage`, `SystemMessage`, `FunctionMessage` and `ChatMessage` -- `ChatMessage` takes in an arbitrary role parameter. Most of the time, you'll just be dealing with `HumanMessage`, `AIMessage`, and `SystemMessage`"
@@ -90,7 +88,7 @@
"id": "54e5088f-98dd-437e-bac8-99b750946b29",
"metadata": {},
"source": [
"### LCEL\n",
"## LCEL\n",
"\n",
"Chat models implement the [Runnable interface](/docs/expression_language/interface), the basic building block of the [LangChain Expression Language (LCEL)](/docs/expression_language/). This means they support `invoke`, `ainvoke`, `stream`, `astream`, `batch`, `abatch`, `astream_log` calls.\n",
"\n",
@@ -590,12 +588,30 @@
" print(chunk)"
]
},
{
"cell_type": "markdown",
"id": "a4a7d783-4ddf-42e7-b143-8050891663c2",
"metadata": {},
"source": [
"## [LangSmith](/docs/langsmith)\n",
"\n",
"All `ChatModel`s come with built-in LangSmith tracing. Just set the following environment variables:\n",
"```bash\n",
"export LANGCHAIN_TRACING_V2=\"true\"\n",
"export LANGCHAIN_API_KEY=<your-api-key>\n",
"```\n",
"\n",
"and any `ChatModel` invocation (whether it's nested in a chain or not) will automatically be traced. A trace will include inputs, outputs, latency, token usage, invocation params, environment params, and more. See an example here: https://smith.langchain.com/public/a54192ae-dd5c-4f7a-88d1-daa1eaba1af7/r.\n",
"\n",
"In LangSmith you can then provide feedback for any trace, compile annotated datasets for evals, debug performance in the playground, and more."
]
},
{
"cell_type": "markdown",
"id": "7b289727-3983-43f7-a8b2-dd5582d49b6a",
"metadata": {},
"source": [
"### `__call__`\n",
"## [Legacy] `__call__`\n",
"#### Messages in -> message out\n",
"\n",
"For convenience you can also treat chat models as callables. You can get chat completions by passing one or more messages to the chat model. The response will be a message."
@@ -670,7 +686,7 @@
"id": "2b996c69-fd5d-4889-af4a-19dfd2833021",
"metadata": {},
"source": [
"### `generate`\n",
"## [Legacy] `generate`\n",
"#### Batch calls, richer outputs\n",
"\n",
"You can go one step further and generate completions for multiple sets of messages using `generate`. This returns an `LLMResult` with an additional `message` parameter. This will include additional information about each generation beyond the returned message (e.g. the finish reason) and additional information about the full API call (e.g. total tokens used)."

View File

@@ -0,0 +1,88 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "d0df7646-b1e1-4014-a841-6dae9b3c50d9",
"metadata": {},
"source": [
"# Streaming\n",
"\n",
"All ChatModels implement the Runnable interface, which comes with default implementations of all methods, ie. ainvoke, batch, abatch, stream, astream. This gives all ChatModels basic support for streaming.\n",
"\n",
"Streaming support defaults to returning an Iterator (or AsyncIterator in the case of async streaming) of a single value, the final result returned by the underlying ChatModel provider. This obviously doesn't give you token-by-token streaming, which requires native support from the ChatModel provider, but ensures your code that expects an iterator of tokens can work for any of our ChatModel integrations.\n",
"\n",
"See which [integrations support token-by-token streaming here](/docs/integrations/chat/)."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "03080a2c-45e8-45b9-a367-62816eae54c4",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import ChatAnthropic"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "975c4f32-21f6-4a71-9091-f87b56347c33",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" Here's a song I just improvised about goldfish on the moon:\n",
"\n",
"Floating in space, looking for a place \n",
"To call their home, all alone\n",
"Swimming through stars, these goldfish from Mars\n",
"Left their fishbowl behind, a new life to find\n",
"On the moon, where the craters loom\n",
"Searching for food, maybe some lunar food\n",
"Out of their depth, close to death\n",
"How they wish, for just one small fish\n",
"To join them up here, their future unclear\n",
"On the moon, where the Earth looms\n",
"Dreaming of home, filled with foam\n",
"Their bodies adapt, continuing to last \n",
"On the moon, where they learn to swoon\n",
"Over cheese that astronauts tease\n",
"As they stare back at Earth, the planet of birth\n",
"These goldfish out of water, swim on and on\n",
"Lunar pioneers, conquering their fears\n",
"On the moon, where they happily swoon"
]
}
],
"source": [
"chat = ChatAnthropic(model=\"claude-2\")\n",
"for chunk in chat.stream(\"Write me a song about goldfish on the moon\"):\n",
" print(chunk.content, end=\"\", flush=True)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv",
"language": "python",
"name": "poetry-venv"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
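As a hedged companion to the notebook above, the async variant uses `astream` from the same Runnable interface (model name and prompt are illustrative):

```python
import asyncio

from langchain.chat_models import ChatAnthropic


async def main():
    chat = ChatAnthropic(model="claude-2")
    # astream yields message chunks as they arrive (or one final chunk
    # if the provider lacks native token streaming).
    async for chunk in chat.astream("Write me a haiku about goldfish"):
        print(chunk.content, end="", flush=True)


asyncio.run(main())
```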

View File

@@ -0,0 +1,181 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "e5715368",
"metadata": {},
"source": [
"# Tracking token usage\n",
"\n",
"This notebook goes over how to track your token usage for specific calls. It is currently only implemented for the OpenAI API.\n",
"\n",
"Let's first look at an extremely simple example of tracking token usage for a single Chat model call."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "9455db35",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.callbacks import get_openai_callback"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "d1c55cc9",
"metadata": {},
"outputs": [],
"source": [
"llm = ChatOpenAI(model_name=\"gpt-4\")"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "31667d54",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Tokens Used: 24\n",
"\tPrompt Tokens: 11\n",
"\tCompletion Tokens: 13\n",
"Successful Requests: 1\n",
"Total Cost (USD): $0.0011099999999999999\n"
]
}
],
"source": [
"with get_openai_callback() as cb:\n",
" result = llm.invoke(\"Tell me a joke\")\n",
" print(cb)"
]
},
{
"cell_type": "markdown",
"id": "c0ab6d27",
"metadata": {},
"source": [
"Anything inside the context manager will get tracked. Here's an example of using it to track multiple calls in sequence."
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "e09420f4",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"48\n"
]
}
],
"source": [
"with get_openai_callback() as cb:\n",
" result = llm.invoke(\"Tell me a joke\")\n",
" result2 = llm.invoke(\"Tell me a joke\")\n",
" print(cb.total_tokens)"
]
},
{
"cell_type": "markdown",
"id": "d8186e7b",
"metadata": {},
"source": [
"If a chain or agent with multiple steps in it is used, it will track all those steps."
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "5d1125c6",
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import load_tools\n",
"from langchain.agents import initialize_agent\n",
"from langchain.agents import AgentType\n",
"from langchain.llms import OpenAI\n",
"\n",
"tools = load_tools([\"serpapi\", \"llm-math\"], llm=llm)\n",
"agent = initialize_agent(tools, llm, agent=AgentType.OPENAI_FUNCTIONS, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "2f98c536",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\n",
"Invoking: `Search` with `Olivia Wilde's current boyfriend`\n",
"\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3m['Things are looking golden for Olivia Wilde, as the actress has jumped back into the dating pool following her split from Harry Styles — read ...', \"“I did not want service to take place at the home of Olivia's current partner because Otis and Daisy might be present,” Sudeikis wrote in his ...\", \"February 2021: Olivia Wilde praises Harry Styles' modesty. One month after the duo made headlines with their budding romance, Wilde gave her new beau major ...\", 'An insider revealed to People that the new couple had been dating for some time. \"They were in Montecito, California this weekend for a wedding, ...', 'A source told People last year that Wilde and Styles were still friends despite deciding to take a break. \"He\\'s still touring and is now going ...', \"... love life. “He's your typical average Joe.” The source adds, “She's not giving too much away right now and wants to keep the relationship ...\", \"Multiple sources said the two were “taking a break” from dating because of distance and different priorities. “He's still touring and is now ...\", 'Comments. Filed under. celebrity couples · celebrity dating · harry styles · jason sudeikis · olivia wilde ... Now Holds A Darker MeaningNYPost.', '... dating during filming. The 39-year-old did however look very cosy with the comedian, although his relationship status is unknown. Olivia ...']\u001b[0m\u001b[32;1m\u001b[1;3m\n",
"Invoking: `Search` with `Harry Styles current age`\n",
"responded: Olivia Wilde's current boyfriend is Harry Styles. Let me find out his age for you.\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3m29 years\u001b[0m\u001b[32;1m\u001b[1;3m\n",
"Invoking: `Calculator` with `29 ^ 0.23`\n",
"\n",
"\n",
"\u001b[0m\u001b[33;1m\u001b[1;3mAnswer: 2.169459462491557\u001b[0m\u001b[32;1m\u001b[1;3mHarry Styles' current age (29 years) raised to the 0.23 power is approximately 2.17.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"Total Tokens: 1929\n",
"Prompt Tokens: 1799\n",
"Completion Tokens: 130\n",
"Total Cost (USD): $0.06176999999999999\n"
]
}
],
"source": [
"with get_openai_callback() as cb:\n",
" response = agent.run(\n",
" \"Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?\"\n",
" )\n",
" print(f\"Total Tokens: {cb.total_tokens}\")\n",
" print(f\"Prompt Tokens: {cb.prompt_tokens}\")\n",
" print(f\"Completion Tokens: {cb.completion_tokens}\")\n",
" print(f\"Total Cost (USD): ${cb.total_cost}\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
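A small sketch (not part of the notebook) of wrapping the callback into a reusable helper; `with_usage` is a hypothetical name:

```python
from langchain.callbacks import get_openai_callback
from langchain.chat_models import ChatOpenAI


def with_usage(fn, *args, **kwargs):
    # Everything fn does inside the context manager is tracked.
    with get_openai_callback() as cb:
        result = fn(*args, **kwargs)
    print(f"{cb.total_tokens} tokens, ${cb.total_cost:.4f}")
    return result


llm = ChatOpenAI(model_name="gpt-4")
joke = with_usage(llm.invoke, "Tell me a joke")
```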

View File

@@ -2,6 +2,7 @@
sidebar_position: 0
sidebar_custom_props:
description: Interface with language models
sidebar_class_name: hidden
---
# Model I/O
@@ -9,8 +10,18 @@ sidebar_custom_props:
The core element of any language model application is...the model. LangChain gives you the building blocks to interface with any language model.
- [Prompts](/docs/modules/model_io/prompts/): Templatize, dynamically select, and manage model inputs
- [Language models](/docs/modules/model_io/models/): Make calls to language models through common interfaces
- [Chat models](/docs/modules/model_io/chat/): Models that are backed by a language model but take a list of Chat Messages as input and return a Chat Message
- [LLMs](/docs/modules/model_io/llms/): Models that take a text string as input and return a text string
- [Output parsers](/docs/modules/model_io/output_parsers/): Extract information from model outputs
![model_io_diagram](/img/model_io.jpg)
## LLMs vs Chat models
LLMs and chat models are subtly but importantly different. LLMs in LangChain refer to pure text completion models.
The APIs they wrap take a string prompt as input and output a string completion. OpenAI's GPT-3 is implemented as an LLM.
Chat models are often backed by LLMs but tuned specifically for having conversations.
And, crucially, their provider APIs use a different interface than pure text completion models. Instead of a single string,
they take a list of chat messages as input. These messages are usually labeled with the speaker (typically one of "System", "AI", and "Human"), and they return an AI chat message as output. GPT-4 and Anthropic's Claude-2 are both implemented as chat models.

View File

@@ -0,0 +1,121 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "f6574496-b360-4ffa-9523-7fd34a590164",
"metadata": {},
"source": [
"# Async API\n",
"\n",
"All `LLM`s implement the `Runnable` interface, which comes with default implementations of all methods, ie. ainvoke, batch, abatch, stream, astream. This gives all `LLM`s basic support for asynchronous calls.\n",
"\n",
"Async support defaults to calling the `LLM`'s respective sync method in asyncio's default thread pool executor. This lets other async functions in your application make progress while the `LLM` is being executed, by moving this call to a background thread. Where `LLM`s providers have native implementations for async, that is used instead of the default `LLM` implementation.\n",
"\n",
"See which [integrations provide native async support here](/docs/integrations/llms/).\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "5e49e96c-0f88-466d-b3d3-ea0966bdf19e",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[1mConcurrent executed in 1.03 seconds.\u001b[0m\n",
"\u001b[1mSerial executed in 6.80 seconds.\u001b[0m\n"
]
}
],
"source": [
"import time\n",
"import asyncio\n",
"\n",
"from langchain.llms import OpenAI\n",
"\n",
"llm = OpenAI(model=\"gpt-3.5-turbo-instruct\", temperature=0.9)\n",
"\n",
"\n",
"def invoke_serially():\n",
" for _ in range(10):\n",
" resp = llm.invoke(\"Hello, how are you?\")\n",
"\n",
"\n",
"async def async_invoke(llm):\n",
" resp = await llm.ainvoke(\"Hello, how are you?\")\n",
"\n",
"\n",
"async def invoke_concurrently():\n",
" tasks = [async_invoke(llm) for _ in range(10)]\n",
" await asyncio.gather(*tasks)\n",
"\n",
"\n",
"s = time.perf_counter()\n",
"# If running this outside of Jupyter, use asyncio.run(generate_concurrently())\n",
"await invoke_concurrently()\n",
"elapsed = time.perf_counter() - s\n",
"print(\"\\033[1m\" + f\"Concurrent executed in {elapsed:0.2f} seconds.\" + \"\\033[0m\")\n",
"\n",
"s = time.perf_counter()\n",
"invoke_serially()\n",
"elapsed = time.perf_counter() - s\n",
"print(\"\\033[1m\" + f\"Serial executed in {elapsed:0.2f} seconds.\" + \"\\033[0m\")"
]
},
{
"cell_type": "markdown",
"id": "e0b60caf-f99e-46a6-bdad-46b2cfea29ac",
"metadata": {},
"source": [
"To simplify things we could also just use `abatch` to run a batch concurrently:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "bd11000f-2232-491a-9f70-abcbb4611fbf",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[1mBatch executed in 1.31 seconds.\u001b[0m\n"
]
}
],
"source": [
"s = time.perf_counter()\n",
"# If running this outside of Jupyter, use asyncio.run(generate_concurrently())\n",
"await llm.abatch([\"Hello, how are you?\"] * 10)\n",
"elapsed = time.perf_counter() - s\n",
"print(\"\\033[1m\" + f\"Batch executed in {elapsed:0.2f} seconds.\" + \"\\033[0m\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
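Roughly how that default async fallback behaves, as a hedged sketch (LangChain's actual implementation differs in detail; the sleep stands in for a blocking provider call):

```python
import asyncio
import time


def slow_sync_invoke(prompt: str) -> str:
    time.sleep(1)  # stand-in for a blocking network call
    return f"echo: {prompt}"


async def ainvoke(prompt: str) -> str:
    # Run the sync method in the default thread pool so the event loop stays free.
    return await asyncio.to_thread(slow_sync_invoke, prompt)


async def main():
    results = await asyncio.gather(*(ainvoke("Hello") for _ in range(10)))
    print(len(results), "responses")  # ~1s total instead of ~10s


asyncio.run(main())
```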

View File

@@ -6,7 +6,7 @@
"metadata": {},
"source": [
"---\n",
"sidebar_position: 0\n",
"sidebar_position: 2\n",
"title: LLMs\n",
"---"
]
@@ -23,7 +23,6 @@
"Large Language Models (LLMs) are a core component of LangChain.\n",
"LangChain does not serve its own LLMs, but rather provides a standard interface for interacting with many different LLMs.\n",
"\n",
"## Get started\n",
"\n",
"There are lots of LLM providers (OpenAI, Cohere, Hugging Face, etc) - the `LLM` class is designed to provide a standard interface for all of them.\n",
"\n",
@@ -86,7 +85,7 @@
"id": "966b5d74-defd-4f89-8c37-a68ca4a161d9",
"metadata": {},
"source": [
"### LCEL\n",
"## LCEL\n",
"\n",
"LLMs implement the [Runnable interface](/docs/expression_language/interface), the basic building block of the [LangChain Expression Language (LCEL)](/docs/expression_language/). This means they support `invoke`, `ainvoke`, `stream`, `astream`, `batch`, `abatch`, `astream_log` calls.\n",
"\n",
@@ -455,12 +454,30 @@
" print(chunk)"
]
},
{
"cell_type": "markdown",
"id": "09108687-ed15-468b-9ac5-674e75785199",
"metadata": {},
"source": [
"## [LangSmith](/docs/langsmith)\n",
"\n",
"All `LLM`s come with built-in LangSmith tracing. Just set the following environment variables:\n",
"```bash\n",
"export LANGCHAIN_TRACING_V2=\"true\"\n",
"export LANGCHAIN_API_KEY=<your-api-key>\n",
"```\n",
"\n",
"and any `LLM` invocation (whether it's nested in a chain or not) will automatically be traced. A trace will include inputs, outputs, latency, token usage, invocation params, environment params, and more. See an example here: https://smith.langchain.com/public/7924621a-ff58-4b1c-a2a2-035a354ef434/r.\n",
"\n",
"In LangSmith you can then provide feedback for any trace, compile annotated datasets for evals, debug performance in the playground, and more."
]
},
{
"cell_type": "markdown",
"id": "20ef52be-6e51-43a3-be2a-b1a862d5fc80",
"metadata": {},
"source": [
"### `__call__`: string in -> string out\n",
"### [Legacy] `__call__`: string in -> string out\n",
"The simplest way to use an LLM is a callable: pass in a string, get a string completion."
]
},
@@ -490,7 +507,7 @@
"id": "7b4ad9e5-50ec-4031-bfaa-23a0130da3c6",
"metadata": {},
"source": [
"### `generate`: batch calls, richer outputs\n",
"### [Legacy] `generate`: batch calls, richer outputs\n",
"`generate` lets you call the model with a list of strings, getting back a more complete response than just the text. This complete response can include things like multiple top responses and other LLM provider-specific information:\n",
"\n"
]

View File

@@ -0,0 +1,179 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "73f9bf40",
"metadata": {},
"source": [
"# Serialization\n",
"\n",
"LangChain Python and LangChain JS share a serialization scheme. You can check if a LangChain class is serializable by running with the `is_lc_serializable` class method."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "9c9fb6ff",
"metadata": {},
"outputs": [],
"source": [
"from langchain.llms import OpenAI\n",
"from langchain.llms.loading import load_llm"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "441d28cb-e898-47fd-8f27-f620a9cd6c34",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"True"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"OpenAI.is_lc_serializable()"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "87b8a7c6-35b7-4fab-938b-4d05e9cc06f1",
"metadata": {},
"outputs": [],
"source": [
"llm = OpenAI(model=\"gpt-3.5-turbo-instruct\")"
]
},
{
"cell_type": "markdown",
"id": "88ce018b",
"metadata": {},
"source": [
"## Dump\n",
"\n",
"Any serializable object can be serialized to a dict or json string."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "f12b28f3",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'lc': 1,\n",
" 'type': 'constructor',\n",
" 'id': ['langchain', 'llms', 'openai', 'OpenAI'],\n",
" 'kwargs': {'model': 'gpt-3.5-turbo-instruct',\n",
" 'openai_api_key': {'lc': 1, 'type': 'secret', 'id': ['OPENAI_API_KEY']}}}"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.load import dumpd, dumps\n",
"\n",
"dumpd(llm)"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "095b1d56",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'{\"lc\": 1, \"type\": \"constructor\", \"id\": [\"langchain\", \"llms\", \"openai\", \"OpenAI\"], \"kwargs\": {\"model\": \"gpt-3.5-turbo-instruct\", \"openai_api_key\": {\"lc\": 1, \"type\": \"secret\", \"id\": [\"OPENAI_API_KEY\"]}}}'"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"dumps(llm)"
]
},
{
"cell_type": "markdown",
"id": "ab3e4223",
"metadata": {},
"source": [
"## Load\n",
"\n",
"Any serialized object can be loaded."
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "68e45b1c",
"metadata": {},
"outputs": [],
"source": [
"from langchain.load import loads\n",
"from langchain.load.load import load\n",
"\n",
"loaded_1 = load(dumpd(llm))\n",
"loaded_2 = loads(dumps(llm))"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "c9272667-7fe3-4e5f-a1cc-69e8829b9e8f",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"I am an AI and do not have the capability to experience emotions. But thank you for asking. Is there anything I can assist you with?\n"
]
}
],
"source": [
"print(loaded_1.invoke(\"How are you doing?\"))"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
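A hedged sketch of round-tripping the serialized form through a file (the path is illustrative; secrets are stored as references, so the API key must still be in the environment on load):

```python
import json

from langchain.llms import OpenAI
from langchain.load import dumpd
from langchain.load.load import load

llm = OpenAI(model="gpt-3.5-turbo-instruct")

# dumpd produces a plain dict, so it JSON-serializes directly.
with open("serialized_llm.json", "w") as f:
    json.dump(dumpd(llm), f)

with open("serialized_llm.json") as f:
    restored = load(json.load(f))
```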

View File

@@ -0,0 +1,112 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "fc37c39a-7406-4c13-a754-b8e95fd970a0",
"metadata": {},
"source": [
"# Streaming\n",
"\n",
"All `LLM`s implement the `Runnable` interface, which comes with default implementations of all methods, ie. ainvoke, batch, abatch, stream, astream. This gives all `LLM`s basic support for streaming.\n",
"\n",
"Streaming support defaults to returning an Iterator (or AsyncIterator in the case of async streaming) of a single value, the final result returned by the underlying `LLM` provider. This obviously doesn't give you token-by-token streaming, which requires native support from the `LLM` provider, but ensures your code that expects an iterator of tokens can work for any of our `LLM` integrations.\n",
"\n",
"See which [integrations support token-by-token streaming here](/docs/integrations/llms/)."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "9baa0527-b97d-41d3-babd-472ec5e59e3e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"Verse 1:\n",
"Bubbles dancing in my glass\n",
"Clear and crisp, it's such a blast\n",
"Refreshing taste, it's like a dream\n",
"Sparkling water, you make me beam\n",
"\n",
"Chorus:\n",
"Oh sparkling water, you're my delight\n",
"With every sip, you make me feel so right\n",
"You're like a party in my mouth\n",
"I can't get enough, I'm hooked no doubt\n",
"\n",
"Verse 2:\n",
"No sugar, no calories, just pure bliss\n",
"You're the perfect drink, I must confess\n",
"From lemon to lime, so many flavors to choose\n",
"Sparkling water, you never fail to amuse\n",
"\n",
"Chorus:\n",
"Oh sparkling water, you're my delight\n",
"With every sip, you make me feel so right\n",
"You're like a party in my mouth\n",
"I can't get enough, I'm hooked no doubt\n",
"\n",
"Bridge:\n",
"Some may say you're just plain water\n",
"But to me, you're so much more\n",
"You bring a sparkle to my day\n",
"In every single way\n",
"\n",
"Chorus:\n",
"Oh sparkling water, you're my delight\n",
"With every sip, you make me feel so right\n",
"You're like a party in my mouth\n",
"I can't get enough, I'm hooked no doubt\n",
"\n",
"Outro:\n",
"So here's to you, my dear sparkling water\n",
"You'll always be my go-to drink forever\n",
"With your effervescence and refreshing taste\n",
"You'll always have a special place."
]
}
],
"source": [
"from langchain.llms import OpenAI\n",
"\n",
"\n",
"llm = OpenAI(model=\"gpt-3.5-turbo-instruct\", temperature=0, max_tokens=512)\n",
"for chunk in llm.stream(\"Write me a song about sparkling water.\"):\n",
" print(chunk, end=\"\", flush=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d81140f2-384b-4470-bf93-957013c6620b",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
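A small companion sketch: since every integration yields at least one chunk, the stream can always be collected back into the full completion:

```python
from langchain.llms import OpenAI

llm = OpenAI(model="gpt-3.5-turbo-instruct", temperature=0)
# Works for token-by-token providers and single-chunk fallbacks alike.
completion = "".join(llm.stream("Tell me a joke"))
print(completion)
```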

View File

@@ -14,7 +14,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 1,
"id": "9455db35",
"metadata": {},
"outputs": [],
@@ -25,17 +25,17 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 2,
"id": "d1c55cc9",
"metadata": {},
"outputs": [],
"source": [
"llm = OpenAI(model_name=\"text-davinci-002\", n=2, best_of=2)"
"llm = OpenAI(model_name=\"gpt-3.5-turbo-instruct\", n=2, best_of=2)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 3,
"id": "31667d54",
"metadata": {},
"outputs": [
@@ -43,17 +43,17 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Tokens Used: 42\n",
"Tokens Used: 37\n",
"\tPrompt Tokens: 4\n",
"\tCompletion Tokens: 38\n",
"\tCompletion Tokens: 33\n",
"Successful Requests: 1\n",
"Total Cost (USD): $0.00084\n"
"Total Cost (USD): $7.2e-05\n"
]
}
],
"source": [
"with get_openai_callback() as cb:\n",
" result = llm(\"Tell me a joke\")\n",
" result = llm.invoke(\"Tell me a joke\")\n",
" print(cb)"
]
},
@@ -67,7 +67,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 4,
"id": "e09420f4",
"metadata": {},
"outputs": [
@@ -75,14 +75,14 @@
"name": "stdout",
"output_type": "stream",
"text": [
"91\n"
"72\n"
]
}
],
"source": [
"with get_openai_callback() as cb:\n",
" result = llm(\"Tell me a joke\")\n",
" result2 = llm(\"Tell me a joke\")\n",
" result = llm.invoke(\"Tell me a joke\")\n",
" result2 = llm.invoke(\"Tell me a joke\")\n",
" print(cb.total_tokens)"
]
},
@@ -96,7 +96,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 5,
"id": "5d1125c6",
"metadata": {},
"outputs": [],
@@ -115,7 +115,7 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 6,
"id": "2f98c536",
"metadata": {},
"outputs": [
@@ -129,24 +129,23 @@
"\u001b[32;1m\u001b[1;3m I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised to the 0.23 power.\n",
"Action: Search\n",
"Action Input: \"Olivia Wilde boyfriend\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mSudeikis and Wilde's relationship ended in November 2020. Wilde was publicly served with court documents regarding child custody while she was presenting Don't Worry Darling at CinemaCon 2022. In January 2021, Wilde began dating singer Harry Styles after meeting during the filming of Don't Worry Darling.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to find out Harry Styles' age.\n",
"Observation: \u001b[36;1m\u001b[1;3m[\"Olivia Wilde and Harry Styles took fans by surprise with their whirlwind romance, which began when they met on the set of Don't Worry Darling.\", 'Olivia Wilde started dating Harry Styles after ending her years-long engagement to Jason Sudeikis — see their relationship timeline.', 'Olivia Wilde and Harry Styles were spotted early on in their relationship walking around London. (. Image ...', \"Looks like Olivia Wilde and Jason Sudeikis are starting 2023 on good terms. Amid their highly publicized custody battle and the actress' ...\", 'The two started dating after Wilde split up with actor Jason Sudeikisin 2020. However, their relationship came to an end last November.', \"Olivia Wilde and Harry Styles started dating during the filming of Don't Worry Darling. While the movie got a lot of backlash because of the ...\", \"Here's what we know so far about Harry Styles and Olivia Wilde's relationship.\", 'Olivia and the Grammy winner kept their romance out of the spotlight as their relationship began just two months after her split from ex-fiancé ...', \"Harry Styles and Olivia Wilde first met on the set of Don't Worry Darling and stepped out as a couple in January 2021. Relive all their biggest relationship ...\"]\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m Harry Styles is Olivia Wilde's boyfriend.\n",
"Action: Search\n",
"Action Input: \"Harry Styles age\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m29 years\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to calculate 29 raised to the 0.23 power.\n",
"Action: Calculator\n",
"Action Input: 29^0.23\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3mAnswer: 2.169459462491557\n",
"\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3mAnswer: 2.169459462491557\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer.\n",
"Final Answer: Harry Styles, Olivia Wilde's boyfriend, is 29 years old and his age raised to the 0.23 power is 2.169459462491557.\u001b[0m\n",
"Final Answer: Harry Styles is Olivia Wilde's boyfriend and his current age raised to the 0.23 power is 2.169459462491557.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"Total Tokens: 1506\n",
"Prompt Tokens: 1350\n",
"Completion Tokens: 156\n",
"Total Cost (USD): $0.03012\n"
"Total Tokens: 2205\n",
"Prompt Tokens: 2053\n",
"Completion Tokens: 152\n",
"Total Cost (USD): $0.0441\n"
]
}
],
@@ -163,7 +162,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": null,
"id": "80ca77a3",
"metadata": {},
"outputs": [],
@@ -186,7 +185,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
"version": "3.9.1"
}
},
"nbformat": 4,

View File

@@ -1,20 +0,0 @@
# LLMChain
You can use the existing LLMChain in a very similar way to before - provide a prompt and a model.
```python
chain = LLMChain(llm=chat, prompt=chat_prompt)
```
```python
chain.run(input_language="English", output_language="French", text="I love programming.")
```
<CodeOutputBlock lang="python">
```
"J'adore la programmation."
```
</CodeOutputBlock>
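For context, a self-contained version of the snippet above might look like this (the prompt template is an assumption; the original page defined `chat` and `chat_prompt` earlier):

```python
from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

chat = ChatOpenAI(temperature=0)
chat_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You translate {input_language} to {output_language}."),
        ("human", "{text}"),
    ]
)
chain = LLMChain(llm=chat, prompt=chat_prompt)
chain.run(input_language="English", output_language="French", text="I love programming.")
```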

View File

@@ -1,63 +0,0 @@
# Streaming
Some chat models provide a streaming response. This means that instead of waiting for the entire response to be returned, you can start processing it as soon as it's available. This is useful if you want to display the response to the user as it's being generated, or process it on the fly.
```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import (
HumanMessage,
)
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
chat = ChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], temperature=0)
resp = chat([HumanMessage(content="Write me a song about sparkling water.")])
```
<CodeOutputBlock lang="python">
```
Verse 1:
Bubbles rising to the top
A refreshing drink that never stops
Clear and crisp, it's pure delight
A taste that's sure to excite
Chorus:
Sparkling water, oh so fine
A drink that's always on my mind
With every sip, I feel alive
Sparkling water, you're my vibe
Verse 2:
No sugar, no calories, just pure bliss
A drink that's hard to resist
It's the perfect way to quench my thirst
A drink that always comes first
Chorus:
Sparkling water, oh so fine
A drink that's always on my mind
With every sip, I feel alive
Sparkling water, you're my vibe
Bridge:
From the mountains to the sea
Sparkling water, you're the key
To a healthy life, a happy soul
A drink that makes me feel whole
Chorus:
Sparkling water, oh so fine
A drink that's always on my mind
With every sip, I feel alive
Sparkling water, you're my vibe
Outro:
Sparkling water, you're the one
A drink that's always so much fun
I'll never let you go, my friend
Sparkling
```
</CodeOutputBlock>

View File

@@ -1,23 +0,0 @@
---
sidebar_position: 1
---
# Language models
LangChain provides interfaces and integrations for two types of models:
- [LLMs](/docs/modules/model_io/models/llms/): Models that take a text string as input and return a text string
- [Chat models](/docs/modules/model_io/models/chat/): Models that are backed by a language model but take a list of Chat Messages as input and return a Chat Message
## LLMs vs chat models
LLMs and chat models are subtly but importantly different. LLMs in LangChain refer to pure text completion models.
The APIs they wrap take a string prompt as input and output a string completion. OpenAI's GPT-3 is implemented as an LLM.
Chat models are often backed by LLMs but tuned specifically for having conversations.
And, crucially, their provider APIs use a different interface than pure text completion models. Instead of a single string,
they take a list of chat messages as input. These messages are usually labeled with the speaker (typically one of "System", "AI", and "Human"), and they return an AI chat message as output. GPT-4 and Anthropic's Claude are both implemented as chat models.
To make it possible to swap LLMs and chat models, both implement the Base Language Model interface. This includes common
methods "predict", which takes a string and returns a string, and "predict messages", which takes messages and returns a message.
If you are using a specific model it's recommended you use the methods specific to that model class (i.e., "predict" for LLMs and "predict messages" for chat models),
but if you're creating an application that should work with different types of models the shared interface can be helpful.
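A minimal sketch of that shared interface (legacy `predict` methods, assuming OpenAI-backed models):

```python
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
from langchain.schema import HumanMessage

llm = OpenAI()
chat = ChatOpenAI()

llm.predict("Say hi")                                   # str -> str
chat.predict("Say hi")                                  # same call works on a chat model
llm.predict_messages([HumanMessage(content="Say hi")])  # messages -> message
chat.predict_messages([HumanMessage(content="Say hi")])
```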

View File

@@ -1,160 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "f6574496-b360-4ffa-9523-7fd34a590164",
"metadata": {},
"source": [
"# Async API\n",
"\n",
"LangChain provides async support for LLMs by leveraging the [asyncio](https://docs.python.org/3/library/asyncio.html) library.\n",
"\n",
"Async support is particularly useful for calling multiple LLMs concurrently, as these calls are network-bound. Currently, `OpenAI`, `PromptLayerOpenAI`, `ChatOpenAI`, `Anthropic` and `Cohere` are supported, but async support for other LLMs is on the roadmap.\n",
"\n",
"You can use the `agenerate` method to call an OpenAI LLM asynchronously."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "5e49e96c-0f88-466d-b3d3-ea0966bdf19e",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"I'm doing well, thank you. How about you?\n",
"\n",
"\n",
"I'm doing well, thank you. How about you?\n",
"\n",
"\n",
"I'm doing well, how about you?\n",
"\n",
"\n",
"I'm doing well, thank you. How about you?\n",
"\n",
"\n",
"I'm doing well, thank you. How about you?\n",
"\n",
"\n",
"I'm doing well, thank you. How about yourself?\n",
"\n",
"\n",
"I'm doing well, thank you! How about you?\n",
"\n",
"\n",
"I'm doing well, thank you. How about you?\n",
"\n",
"\n",
"I'm doing well, thank you! How about you?\n",
"\n",
"\n",
"I'm doing well, thank you. How about you?\n",
"\u001B[1mConcurrent executed in 1.39 seconds.\u001B[0m\n",
"\n",
"\n",
"I'm doing well, thank you. How about you?\n",
"\n",
"\n",
"I'm doing well, thank you. How about you?\n",
"\n",
"I'm doing well, thank you. How about you?\n",
"\n",
"\n",
"I'm doing well, thank you. How about you?\n",
"\n",
"\n",
"I'm doing well, thank you. How about yourself?\n",
"\n",
"\n",
"I'm doing well, thanks for asking. How about you?\n",
"\n",
"\n",
"I'm doing well, thanks! How about you?\n",
"\n",
"\n",
"I'm doing well, thank you. How about you?\n",
"\n",
"\n",
"I'm doing well, thank you. How about yourself?\n",
"\n",
"\n",
"I'm doing well, thanks for asking. How about you?\n",
"\u001B[1mSerial executed in 5.77 seconds.\u001B[0m\n"
]
}
],
"source": [
"import time\n",
"import asyncio\n",
"\n",
"from langchain.llms import OpenAI\n",
"\n",
"\n",
"def generate_serially():\n",
" llm = OpenAI(temperature=0.9)\n",
" for _ in range(10):\n",
" resp = llm.generate([\"Hello, how are you?\"])\n",
" print(resp.generations[0][0].text)\n",
"\n",
"\n",
"async def async_generate(llm):\n",
" resp = await llm.agenerate([\"Hello, how are you?\"])\n",
" print(resp.generations[0][0].text)\n",
"\n",
"\n",
"async def generate_concurrently():\n",
" llm = OpenAI(temperature=0.9)\n",
" tasks = [async_generate(llm) for _ in range(10)]\n",
" await asyncio.gather(*tasks)\n",
"\n",
"\n",
"s = time.perf_counter()\n",
"# If running this outside of Jupyter, use asyncio.run(generate_concurrently())\n",
"await generate_concurrently()\n",
"elapsed = time.perf_counter() - s\n",
"print(\"\\033[1m\" + f\"Concurrent executed in {elapsed:0.2f} seconds.\" + \"\\033[0m\")\n",
"\n",
"s = time.perf_counter()\n",
"generate_serially()\n",
"elapsed = time.perf_counter() - s\n",
"print(\"\\033[1m\" + f\"Serial executed in {elapsed:0.2f} seconds.\" + \"\\033[0m\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e1d3a966-3a27-44e8-9441-ed72f01b86f4",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -1,12 +0,0 @@
{
"model_name": "text-davinci-003",
"temperature": 0.7,
"max_tokens": 256,
"top_p": 1.0,
"frequency_penalty": 0.0,
"presence_penalty": 0.0,
"n": 1,
"best_of": 1,
"request_timeout": null,
"_type": "openai"
}
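This is the kind of legacy config that `load_llm` (imported in the serialization notebook above) reads back; a minimal sketch, assuming the file is saved as `llm.json`:

```python
from langchain.llms.loading import load_llm

# Reconstructs an OpenAI LLM from the legacy JSON config above.
llm = load_llm("llm.json")
```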

Some files were not shown because too many files have changed in this diff.