Compare commits

..

450 Commits

Author SHA1 Message Date
Jacob Lee
187ddca02f docs: Make deprecation docs bar bigger (#28674)
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-12-11 17:19:47 -08:00
Bagatur
f28e339129 infra: test release skip attestations (#27853) 2024-11-01 17:22:00 -07:00
Bagatur
974cf1f330 infra: fix test release permissions (#27852) 2024-11-01 17:09:47 -07:00
Bagatur
e711034713 core[patch]: rm image prompt file loading (#27848) 2024-11-01 23:42:14 +00:00
Erick Friis
bc6600d86f docs: canonical url fix (#27554)
canonical wrong on e.g.
https://python.langchain.com/v0.1/docs/modules/agents/concepts/
2024-10-22 12:46:18 -07:00
Erick Friis
3b0e993e20 docs: v0.1 canonical (#26515) 2024-09-16 13:32:04 -07:00
Erick Friis
5b5a782196 docs: v0.1 docs deprecation show (#26510) 2024-09-16 02:21:35 +00:00
Erick Friis
86b6d9df9a docs: v0.1 link to latest in dropdown (#26503) 2024-09-15 18:36:00 +00:00
Jacob Lee
d6e9d6fee1 Try removing v0.2 rewrite (#25918) 2024-09-01 07:33:22 -07:00
ccurme
4b7a7ed4b1 v0.1 docs: add callout to OllamaFunctions docs (#25579) 2024-08-20 08:59:11 -04:00
Erick Friis
be7a5aa726 docs: fix mdx fences (#25184) 2024-08-08 17:07:24 +00:00
Bagatur
27170b08da docs: nvidia v0.2 warning (#24831)
Add explicit warning and link to v0.2 docs. cc @dglogo
2024-07-30 12:46:46 -07:00
Jacob Lee
6ed6c7f409 docs[patch]: Increase v0.1 banner size (#23072)
CC @hwchase17
2024-06-18 02:34:35 -07:00
Erick Friis
cd17444711 multiple: fix codespell (v0.1) (#22785) 2024-06-11 13:31:06 -07:00
Jacob Lee
3a5523bf82 docs[patch]: Redirect LangGraph docs link (#22784)
FYI @efriis
2024-06-11 12:28:02 -07:00
Jacob Lee
f883981446 docs[patch]: Add canonical URLs pointing at 0.2 docs for popular pages (#22250)
@efriis
2024-05-31 10:55:10 -04:00
Erick Friis
b8550e7d3a infra: fix make docs_build (0.1) (#22247) 2024-05-28 13:54:49 -07:00
Nelson Auner
946a0a38bd experimental[patch]: Fix import errors and docs of experimental prompt injection identifier (#21586)
**Description**

I noticed that code in experimental was not functional due to small
errors, and users were complaining at
https://stackoverflow.com/questions/78272521/hugging-face-prompt-injection-identifier
and https://github.com/langchain-ai/langchain/discussions/19995.

I know folks are busy with `v2.0` now, but I believe this is a worthwhile fix
because it is isolated, minor, backwards compatible, and fixes an error in the
current v0.1 code.

**Issue**
https://github.com/langchain-ai/langchain/discussions/19995

**Dependencies**
None

**Twitter handle**
https://twitter.com/nelsonauner


cc @asofter

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-21 16:29:25 -07:00
junefish
5a40413bfd docs: add Pinecone tab to vector stores page (#21969)
Thank you for contributing to LangChain!

- [x] **PR title**: docs: add Pinecone tab to [vector stores
page](https://python.langchain.com/v0.1/docs/modules/data_connection/vectorstores/).


- [x] **PR message**: Recreation of
https://github.com/langchain-ai/langchain/pull/21721.
Adds information about PineconeVectorStore to the LangChain vector
stores page. Although this page is deprecated, it still shows up
prominently in Google search results, so it will still be very helpful
to users to have correct information.
![search
results](https://github.com/langchain-ai/langchain/assets/19216250/e05d8d74-03da-44a1-b87f-0f8087d3c13a)


- [x] **Add tests and docs**: N/A


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional
ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in
langchain.

If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, ccurme, vbarda, hwchase17.
2024-05-21 09:35:20 -07:00
Bagatur
cb45caa02e docs: link to v0.1 api ref (#21945) 2024-05-20 19:01:55 -07:00
SN
bc84bc44b6 docs: Update pairwise evaluations page to link to LangSmith evaluate_compar… (#21686)
…ative function
2024-05-20 16:22:24 -07:00
Erick Friis
b199db9767 docs: update announcement bar (v0.1) (#21855) 2024-05-18 00:35:42 +00:00
Erick Friis
91c6117fd5 docs: align vercel core dep (v0.1) (#21838) 2024-05-17 14:21:28 -07:00
Erick Friis
65e6239f14 docs: cookbook redirect (v0.1) (#21823) 2024-05-17 10:02:00 -07:00
Erick Friis
4bf7add9b8 docs: version dropdown (v0.1) (#21786) 2024-05-16 17:01:37 -07:00
Erick Friis
c9ec8c5d07 docs: dont rewrite ipynb links that have double slash (v0.1) (#21776) 2024-05-16 13:26:09 -07:00
Erick Friis
d53de65aca docs: anthropic forced tool calling (#21774) 2024-05-16 18:34:02 +00:00
Mohammad Mohtashim
f02e27b664 Docs[patch]: v0.1 UseCases Notebooks' Google Colab Links fix (#21722)
- **Description:** The v0.1 use-case notebooks' Google Colab links were
pointing to master, but they should point to v0.1; this was causing the
issue below.
  - **Issue:** #21690
2024-05-15 15:44:35 -04:00
Bagatur
a79c8b3834 docs: openai bind tools nit (#21693) 2024-05-15 01:06:02 +00:00
Erick Friis
4ea6f882d4 infra: remove prints from notebook build (v0.1) (#21689) 2024-05-14 16:27:44 -07:00
Erick Friis
03bd0f51c9 docs: ignore nb echo:false blocks (v0.1) (#21642) 2024-05-14 00:22:53 +00:00
Erick Friis
be3d5904da infra: fix api ref link generation (v0.1) (#21632) 2024-05-13 15:16:54 -07:00
ccurme
9d850054bc docs: update heading in v0.1 extraction docs (#21605) 2024-05-13 07:07:43 -07:00
ccurme
5c07a96de1 langchain: release 0.1.20 (#21549) 2024-05-10 21:15:56 +00:00
ccurme
7440ce0cb4 langchain: cherry-pick moderation fix into v0.1 (#21544)
```
git checkout v0.1
git pull
git checkout -b cc/cherry_pick_into_v01
git cherry-pick d3ca2cc8c3
```

Co-authored-by: Matt Florence <matt@mattflo.com>
Co-authored-by: Emilia Katari <emilia@outpace.com>
Co-authored-by: Erick Friis <erickfriis@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-05-10 21:05:33 +00:00
Erick Friis
73a5b3dd98 infra: codespell in v1 (#21547) 2024-05-10 13:57:27 -07:00
Erick Friis
cfd827b574 docs: announcement bar (#21511) 2024-05-10 10:10:52 -07:00
Erick Friis
a71c4e22d1 docs: 404 page top level (#21510) 2024-05-09 17:14:56 -07:00
Erick Friis
29be604023 Revert "docs: redirect base slug (0.1)" (#21500)
Reverts langchain-ai/langchain#21458
2024-05-09 12:21:25 -07:00
Erick Friis
25f8467ffb docs: redirect base slug (0.1) (#21458) 2024-05-09 10:52:17 -07:00
Erick Friis
0c84aea1d4 docs: spell correction (#21459) 2024-05-09 10:51:04 -07:00
Erick Friis
d0f043a765 docs: baseUrl for ganalytics (0.1) (#21456) 2024-05-08 18:07:10 -07:00
Erick Friis
264f677528 langchain: bump core community (#21453) 2024-05-08 15:58:01 -07:00
Erick Friis
3b99f428a0 community: fix core version str (#21452) 2024-05-08 15:36:12 -07:00
Erick Friis
6eab0a4709 community: release 0.0.38 (#21451) 2024-05-08 15:33:08 -07:00
Erick Friis
be8006fa9c langchain: fix core version dep (#21449) 2024-05-08 15:08:55 -07:00
Erick Friis
e316d30dcb langchain: release 0.1.19 (#21447) 2024-05-08 14:52:03 -07:00
Erick Friis
61b5159ed0 docs: v0.1 build semantics [BEGIN v0.1 BRANCH] (#21444) 2024-05-08 14:29:49 -07:00
Tommi Holmgren
ee35b9ba56 langchain-robocorp: remove toolkit return content max length (#21436)
The Robocorp (Action Server) toolkit had a limitation: content returned by
the tool was always cut to a maximum of 5000 characters. This dated from the
time when context windows were much more limited.

This PR removes the limitation. Whatever the underlying tool provides gets
sent back to the agent.

As the Robocorp toolkit no longer restricts the content, either the Action
(tool) developer or the agent developer needs to be aware of potentially
oversized tool responses. Our view is that this should be the agent
developer's responsibility, since they control the use case and know the
LLM's context window.
2024-05-08 15:05:55 -04:00
JuHyung Son
710e57d779 upstage: deprecate UPSTAGE_DOCUMENT_AI_API_KEY (#21363)
Description: We are merging UPSTAGE_DOCUMENT_AI_API_KEY and
UPSTAGE_API_KEY into one; only UPSTAGE_API_KEY will be used going
forward. We also changed the base class of ChatUpstage to BaseChatOpenAI.
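
A minimal sketch of the consolidated configuration this implies (assuming the `langchain-upstage` package; the key and model name are placeholders):

```python
import os

from langchain_upstage import ChatUpstage

# Only UPSTAGE_API_KEY is read going forward.
os.environ["UPSTAGE_API_KEY"] = "your-api-key"

chat = ChatUpstage(model="solar-1-mini-chat")  # placeholder model name
print(chat.invoke("Hello!").content)
```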

---------

Co-authored-by: Sean <chosh0615@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-05-08 18:02:26 +00:00
Erick Friis
6a295d1ec0 upstage: release 0.1.4 (#21432) 2024-05-08 17:57:40 +00:00
Mateusz Szewczyk
7926cc1929 ibm: Fix llm and embeddings "verify" attribute default value (#21429)
Thank you for contributing to LangChain!

- [x] **PR title**: "langchain-ibm: Fix llm and embeddings 'verify'
attribute default value"


- [x] **PR message**: 
    - **Description:** fix default value of "verify" attribute
    - **Dependencies:** `ibm_watsonx_ai`


- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-05-08 17:23:14 +00:00
Kevin Zhang
0715545378 docs: fix typo in text (#21393)
**Description:** The previous text had an unclosed parenthesis, this fix
adds the closing parenthesis
2024-05-08 15:58:15 +00:00
Dobiichi-Origami
5b00885b49 community: add bind_tools and with_structured_output support to QianfanChatEndpoint (#21412)
- **Description:** add `bind_tools` and `with_structured_output` support
to `QianfanChatEndpoint`


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
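
A rough usage sketch of the structured-output support this adds (the Qianfan model name and schema are illustrative):

```python
from pydantic import BaseModel

from langchain_community.chat_models import QianfanChatEndpoint


class Joke(BaseModel):
    setup: str
    punchline: str


llm = QianfanChatEndpoint(model="ERNIE-3.5-8K")  # placeholder model name
structured_llm = llm.with_structured_output(Joke)
joke = structured_llm.invoke("Tell me a joke about pandas")
```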
2024-05-08 11:35:10 -04:00
Silas Xu
aafaf3e193 The predict_and_parse is deprecated, instead pass an output parser directly to LLMChain. (#20130)
The `predict_and_parse` method is deprecated; instead, pass an output
parser directly to `LLMChain`.
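
A hedged sketch of the recommended pattern (the prompt and parser are illustrative):

```python
from langchain.chains import LLMChain
from langchain_core.output_parsers import CommaSeparatedListOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI

prompt = PromptTemplate.from_template("List three {topic}, comma separated.")

# Instead of chain.predict_and_parse(...), attach the parser to the chain itself.
chain = LLMChain(
    llm=ChatOpenAI(),
    prompt=prompt,
    output_parser=CommaSeparatedListOutputParser(),
)
result = chain.predict(topic="fruits")  # returns a parsed list of strings
```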

- [x] **PR title**: "langchain: update chain_extract.py"


![image](https://github.com/langchain-ai/langchain/assets/40889019/e950d79f-5a0f-4086-86e9-89f627990fe5)

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-05-08 09:32:17 -04:00
ccurme
3c31bd0ed0 langchain: update use of predict_and_parse in LLMChainFilter (#21389)
Following https://github.com/langchain-ai/langchain/pull/20130

Removes deprecation warnings in docs here:
https://python.langchain.com/docs/modules/data_connection/retrievers/contextual_compression/

Tested using the same docs notebook + existing integration test.
2024-05-08 09:31:33 -04:00
Tomaz Bratanic
dd70f2f473 Update graph docs (#21414)
Updated the deprecated docs and added node properties to graph
construction.
2024-05-08 09:05:39 -04:00
Erick Friis
bbdf0f8801 experimental[patch]: core and langchain dep (#21402) 2024-05-07 21:39:34 -07:00
Erick Friis
e4aca0d052 experimental[patch]: release 0.0.58 (#21397) 2024-05-08 03:52:44 +00:00
Erick Friis
893f06b5de infra: rewrite ipynb links to md (#21392) 2024-05-07 23:16:52 +00:00
Hassan El Mghari
225ceedcb6 docs: Add together docs in chat models & update provider docs (#21391)
- Added Together docs in chat models section
- Update Together provider docs to match the LLM & chat models sections

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-05-07 22:40:57 +00:00
Heidi Steen
af97d58c9e docs: update docs/integrations/retrievers/azure_ai_search.ipynb (#21160)
This is a doc update. It fixes up formatting and product name
references. The example code is updated to use a local built-in text
file.

@mmhangami Please take a look

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-05-07 22:33:46 +00:00
snova-jamesv
ca753e7c15 community: updated performance limitation wording in sambanova.ipynb (#21390)
- **Description:** updated performance limitation wording in
sambanova.ipynb
    - **Issue:** NA
    - **Dependencies:** NA
    - **Twitter handle:** NA
2024-05-07 22:21:46 +00:00
Leonid Ganeline
791d59a2c8 community: callbacks guard_imports (#21173)
Issue: we have several helper functions to import third-party libraries,
like import_uptrain in
[community.callbacks](https://api.python.langchain.com/en/latest/callbacks/langchain_community.callbacks.uptrain_callback.import_uptrain.html#langchain_community.callbacks.uptrain_callback.import_uptrain),
and we already have core.utils.utils.guard_import that serves exactly this
purpose. The import_<package> functions behave inconsistently and should
rather be private functions.
Change: replaced these functions with the guard_import function.

Related to #21133
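
A minimal sketch of the `guard_import` pattern these helpers were replaced with (the package name is just an example):

```python
from langchain_core.utils import guard_import

# Returns the imported module, or raises an ImportError with install
# instructions if the package is not available.
uptrain = guard_import("uptrain")
```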
2024-05-07 15:04:54 -07:00
Hassan El Mghari
416549bed2 docs: Updated Together integration docs (#21388)
**Description:** Updated the together integration docs by leading with
the streaming example, explicitly specifying a model to show users how
to do that, and updating the sections to more closely match other
integrations.
2024-05-07 21:51:42 +00:00
Rahul Triptahi
7994cba18d [Community][Minor]: Fetch loader_source of GoogleDriveLoader in PebbloSafeLoader. (#21314)
Description: This PR includes fix for loader_source to be fetched from
metadata in case of GdriveLoaders.
Documentation: NA
Unit Test: NA

Signed-off-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
Co-authored-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
2024-05-07 14:45:58 -07:00
Leonid Ganeline
7cbf1c31aa docs: table legend updated (#21351)
Compacted the table column legends. Added links. Similar to #21259
2024-05-07 14:45:04 -07:00
Erick Friis
d5bde4fa91 infra: use nbconvert for docs build (#21135)
todo

- [x] remove quarto build semantics
- [x] remove quarto download/install
- [x] make `uv` not verbose
2024-05-07 12:30:17 -07:00
Nuno Campos
ad0f3c14c2 core: allow mermaid node labels to have any characters (#21385)
- it's only node ids that are limited

2024-05-07 12:16:49 -07:00
Eugene Yurtsev
6a1d61dbf1 community[patch]: Fix in memory vectorstore to take into account ids when adding docs (#21384)
Should respect `ids` if passed
2024-05-07 15:05:16 -04:00
Ikko Eltociear Ashimine
80170da6c5 docs: update cassandra_database.ipynb (#21145)
Enviroment -> Environment
2024-05-07 15:00:24 -04:00
Miroslav
04e2611fea Added additional headers for HuggingFaceInferenceAPIEmbeddings endpoint. (#21282)
Thank you for contributing to LangChain!

- **HuggingFaceInferenceAPIEmbeddings**: additional headers
  - Where: community embeddings, `huggingface.py`.
- Community: add additional headers when needed by custom HuggingFace
TEI embedding endpoints (`HuggingFaceInferenceAPIEmbeddings`).


- [ ] **PR message**: ***Delete this entire checklist*** and replace
with
- **Description:** Adding the `additional_headers` to be passed to
requests library if needed
    - **Dependencies:** none
 

- [ ] **Add tests and docs**: If you're adding a new integration, please
include
1. Tested with locally available TEI endpoints with and without
`additional_headers`
  2. Example  Usage
  
```python
embeddings = HuggingFaceInferenceAPIEmbeddings(
    api_key=MY_CUSTOM_API_KEY,
    api_url=MY_CUSTOM_TEI_URL,
    additional_headers={"Content-Type": "application/json"},
)
```

 


---------

Co-authored-by: Massimiliano Pronesti <massimiliano.pronesti@gmail.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-05-07 14:17:53 -04:00
Ikko Eltociear Ashimine
c34419e200 docs: update quick_start.ipynb (#21358)
initalize -> initialize



- [x] **PR title**: "package: description"
2024-05-07 08:44:48 -07:00
Guangdong Liu
1fe66f5d39 community(patch) fix MoonshotChat moonshot_api_key is invaild for api key (#21361)
Description: close
https://github.com/langchain-ai/langchain/issues/21237
@baskaryan, @eyurtsev
2024-05-07 08:44:30 -07:00
snova-jamesv
c2ed484653 community: add Sambaverse rate limitation info to sambanova.ipynb (#21379)
- **Description:** add Sambaverse rate limitation info to
sambanova.ipynb
    - **Issue:** NA
    - **Dependencies:** NA
2024-05-07 15:42:44 +00:00
Tomaz Bratanic
0bf7596839 Add simple node properties to llm graph transformer (#21369)
Add support for simple node properties in llm graph transformer.

Linter and dynamic pydantic classes aren't friends, hence I added two
ignores
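
A rough sketch of how the new option is used (the LLM choice, property names, and document are illustrative):

```python
from langchain_core.documents import Document
from langchain_experimental.graph_transformers import LLMGraphTransformer
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(temperature=0)  # any tool-calling chat model

# node_properties asks the LLM to also extract the listed attributes
# for each node, in addition to ids, types, and relationships.
transformer = LLMGraphTransformer(
    llm=llm,
    allowed_nodes=["Person", "Organization"],
    node_properties=["born_year", "role"],
)

docs = [Document(page_content="Marie Curie, born 1867, worked at the Sorbonne.")]
graph_documents = transformer.convert_to_graph_documents(docs)
```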
2024-05-07 08:41:09 -07:00
ccurme
080af0ec53 langchain: sync -> async methods in OpenAI assistants (#21378) 2024-05-07 10:25:55 -04:00
Tomaz Bratanic
ad3fd44a7f experimental: Fix llm graph transformer bug (#21362) 2024-05-06 23:59:55 -07:00
Erick Friis
bb81ae5c8c together: fix chat model and embedding classes (#21353) 2024-05-06 18:26:03 -07:00
Hassan El Mghari
d6ef5fe86a together: add chat models, use openai base (#21337)
**Description:** Adding chat completions to the Together AI package,
which is our most popular API. Also staying backwards compatible with
the old API so folks can continue to use the completions API as well.
Also moved the embedding API to use the OpenAI library to standardize it
further.

**Twitter handle:** @nutlope

- [x] **Add tests and docs**: If you're adding a new integration, please
include
- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, hwchase17.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-05-06 17:47:06 -07:00
Jacob Lee
a2d31307bb Adds confirmation logs after creating a new project (#12618)
@efriis @hwchase17

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-05-06 23:28:12 +00:00
Erick Friis
0fb93cd740 core: release 0.1.52 (#21350) 2024-05-06 22:20:35 +00:00
Wu Enze
32c61b3ece community[patch]: chat message history mypy fixes #17048 (#20114)
Relates [#17048]
Description : Applied fix to redis and neo4j file.

The error was: `Cannot override writeable attribute with read-only
property`.

Fixed with the same solution as
[[langchain/libs/community/langchain_community/chat_message_histories/elasticsearch.py](d5c412b0a9/libs/community/langchain_community/chat_message_histories/elasticsearch.py (L170-L175))]

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-05-06 22:17:45 +00:00
nrpd25
95cc8e3fc3 premai[patch]:Standardized model init args (#21308)
[Standardized model init args
#20085](https://github.com/langchain-ai/langchain/issues/20085)
- Enable premai chat model to be initialized with `model_name` as an
alias for `model`, `api_key` as an alias for `premai_api_key`.
- Add initialization test `test_premai_initialization`
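
A hedged sketch of the two equivalent spellings this enables (assuming the `ChatPremAI` class in `langchain_community`; the project id, model, and key are placeholders):

```python
from langchain_community.chat_models import ChatPremAI

# Original parameter names still work...
chat = ChatPremAI(project_id=8, model="gpt-4o", premai_api_key="...")

# ...and the new aliases map onto the same underlying fields.
chat = ChatPremAI(project_id=8, model_name="gpt-4o", api_key="...")
```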

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-05-06 18:12:29 -04:00
Nuno Campos
6f17158606 fix: core: Include in json output also fields set outside the constructor (#21342) 2024-05-06 14:37:36 -07:00
Tomaz Bratanic
ac14f171ac Add indexed properties to neo4j enhanced schema (#21335) 2024-05-06 14:28:34 -07:00
scaserini
a6cdf6572f community: add Kendra DocumentRelevanceOverrideConfigurations request parameter (#20695)
- **Description:** add **DocumentRelevanceOverrideConfigurations**
request parameter to Kendra retriever

Co-authored-by: Simone Caserini <simone.caserini@klarna.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-05-06 14:26:36 -07:00
Nuno Campos
0345bcf4ef Fix failing test for serialization (#21344)
2024-05-06 21:19:54 +00:00
Trayan Azarov
93226b1945 community: Updated Chroma version range to include 0.5.0 release (#21224)
- Updated Chroma version range to allow releases in 0.5.x.
- Bumped mypy version as linting was failing
2024-05-06 13:31:40 -07:00
Jorge Piedrahita Ortiz
e65652c3e8 community: add SambaNova embeddings integration (#21227)
- **Description:**  SambaNova hosted embeddings integration
2024-05-06 13:29:59 -07:00
Jorge Piedrahita Ortiz
df1c10260c community: minor changes sambanova integration (#21231)
- **Description:** fix variable names in the root validator that prevented
passing credentials as named parameters when instantiating the LLMs; also
added SambaNova's Sambaverse and SambaStudio LLMs to __init__.py for module
import
2024-05-06 13:28:35 -07:00
Jan Soubusta
d9a61c0fa9 fix: respect table_name argument when calling from_texts (#21252)
valid for from_documents() as well

fixes #21251
2024-05-06 20:28:22 +00:00
Pedro Lima
bebf46c4a2 community: added args_schema to YahooFinanceNewsTool (#21232)
Description: this change adds args_schema (pydantic BaseModel) to
YahooFinanceNewsTool for correct schema formatting on LLM function calls

Issue: currently using YahooFinanceNewsTool with OpenAI function calling
returns the following error "TypeError("YahooFinanceNewsTool._run() got
an unexpected keyword argument '__arg1'")". This happens because the
schema sent to the LLM is "input: "{'__arg1': 'MSFT'}"" while the method
should be called with the "query" parameter.
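
Roughly the shape of the schema this change attaches (the field description is illustrative), so function-calling LLMs send `{"query": "MSFT"}` rather than `{"__arg1": "MSFT"}`:

```python
from pydantic import BaseModel, Field

from langchain_community.tools.yahoo_finance_news import YahooFinanceNewsTool


class YahooFinanceNewsInput(BaseModel):
    """Input schema of the kind now exposed via args_schema."""

    query: str = Field(description="Company ticker to search news for")


tool = YahooFinanceNewsTool()
print(tool.args)  # now reports a named "query" field instead of "__arg1"
```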

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-05-06 13:27:54 -07:00
Mark Cusack
060987d755 community[minor]: Add indexing via locality sensitive hashing to the Yellowbrick vector store (#20856)
- **Description:** Add LSH-based indexing to the Yellowbrick vector
store module
- **Twitter handle:** @markcusack

---------

Co-authored-by: markcusack <markcusack@markcusacksmac.lan>
Co-authored-by: markcusack <markcusack@Mark-Cusack-sMac.local>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-05-06 20:18:02 +00:00
Rashmi Pawar
a2fdabdad2 mark NemoEmbeddings as deprecated (#21239)
NemoEmbeddings is deprecated; use the NVIDIAEmbeddings interface from
langchain-nvidia-ai-endpoints instead.
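
A minimal migration sketch (the model name is a placeholder):

```python
# Before (deprecated):
# from langchain_community.embeddings import NeMoEmbeddings

# After:
from langchain_nvidia_ai_endpoints import NVIDIAEmbeddings

embeddings = NVIDIAEmbeddings(model="NV-Embed-QA")  # placeholder model name
vectors = embeddings.embed_documents(["NVIDIA NIM makes embeddings easy."])
```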

cc: @mattf

---------

Co-authored-by: Daniel Glogowski <167348611+dglogo@users.noreply.github.com>
Co-authored-by: andyjessen <62343929+andyjessen@users.noreply.github.com>
Co-authored-by: Chris Germann <88305668+TAAGECH9@users.noreply.github.com>
Co-authored-by: gere <gere@kapo.zh.ch>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-05-06 19:44:58 +00:00
Erick Friis
9e4b24a2d6 langchain: release 0.1.18 (#21338) 2024-05-06 19:39:46 +00:00
Erick Friis
5c000f8d79 community: release 0.0.37 (#21332) 2024-05-06 12:17:42 -07:00
Leonid Ganeline
8c13e8a79b langchain: qa_chain fix (#21279)
Issue: `load_qa_chain` is defined in the __init__.py file. As a result,
it is not listed in the API Reference docs, even though `load_qa_chain`
features heavily in the doc examples.
Change: moved the code from __init__.py into a new file. Related: #21266
2024-05-06 14:45:51 -04:00
Erick Friis
7ecf9996f1 community: Revert "community: langkit dependency" (#21333)
Reverts langchain-ai/langchain#21174

Hey team - going to revert this because it doesn't seem necessary for
testing. We should only be adding optional + extended_testing
dependencies for deps that have extended tests.

otherwise it just increases probability of dependency conflicts in the
community lockfile.
2024-05-06 18:44:41 +00:00
Param Singh
fee91d43b7 baichuan[patch]:standardize chat init args (#21298)
Thank you for contributing to LangChain!

community:baichuan[patch]: standardize init args

updated `baichuan_api_key` so that it is aliased to `api_key`. Added a test
that it continues to set the same underlying attribute; the test checks for
`SecretStr`.

updated `temperature` with Pydantic Field, added unit test. 

Related to https://github.com/langchain-ai/langchain/issues/20085
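
A hedged sketch of the two equivalent initializations (the key value is a placeholder):

```python
from langchain_community.chat_models import ChatBaichuan

# Original name still works...
chat = ChatBaichuan(baichuan_api_key="my-key", temperature=0.3)

# ...and the new alias sets the same SecretStr-backed field.
chat = ChatBaichuan(api_key="my-key", temperature=0.3)
```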
2024-05-06 18:33:57 +00:00
Leonid Ganeline
62559b20b3 docs: chains page format (#21259)
Compacted the table column descriptions.
2024-05-06 11:33:38 -07:00
Christophe Bornet
484a009012 community[minor]: Relax constraints on Cassandra VectorStore constructors (#21209)
If Session and/or keyspace are not provided, they are resolved from
cassio's context. So they are not required.
This change is fully backward compatible.
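
A rough sketch of what the relaxed constructors enable (Astra credentials and the embedding model are placeholders; session and keyspace come from cassio's context):

```python
import cassio
from langchain_community.vectorstores import Cassandra
from langchain_openai import OpenAIEmbeddings

# Register the session/keyspace once in cassio's global context...
cassio.init(token="AstraCS:...", database_id="01234567-...")  # placeholders

# ...then the vector store no longer needs explicit session= or keyspace=.
store = Cassandra(embedding=OpenAIEmbeddings(), table_name="my_table")
```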
2024-05-06 14:32:32 -04:00
Daniel Glogowski
27e73ebe57 docs: update nvidia docs v2 (#21288)
More doc updates por favor @baskaryan!
2024-05-06 11:29:02 -07:00
Leonid Ganeline
6feddfae88 community: langkit dependency (#21174)
Issue: the `langkit` package is not present in `pyproject.toml`, but it is
a requirement for the `WhyLabsCallbackHandler`.
Change: added `langkit`

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-05-06 18:09:31 +00:00
Erick Friis
811e9cee8b core: release 0.1.51 (#21328) 2024-05-06 10:40:19 -07:00
Pengcheng Liu
144f2821af docs: add example for loading data from LarkSuite wiki. (#21311)
**Description:** Update LarkSuite loader doc to give an example for
loading data from LarkSuite wiki.
**Issue:** None
**Dependencies:** None
**Twitter handle:** None
2024-05-06 09:56:12 -07:00
Mateusz Szewczyk
682d21c3de ibm: Add support for ibm-watsonx-ai new major version (#21313)
Thank you for contributing to LangChain!

- [x] **PR title**: "langchain-ibm: Add support for ibm-watsonx-ai new
major version"


- [x] **PR message**: 
    - **Description:** Add support for ibm-watsonx-ai new major version
    - **Dependencies:** `ibm_watsonx_ai`


- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-05-06 16:48:26 +00:00
Chris Papademetrious
ee6c922c91 langchain[minor]: enhance LocalFileStore to offer update_atime parameter that updates access times on read (#20951)
**Description:**
The `LocalFileStore` class can be used to create an on-disk
`CacheBackedEmbeddings` cache. The number of files in these embeddings
caches can grow to be quite large over time (hundreds of thousands) as
embeddings are computed for new versions of content, but the embeddings
for old/deprecated content are not removed.

A *least-recently-used* (LRU) cache policy could be applied to the
`LocalFileStore` directory to delete cache entries that have not been
referenced for some time:

```bash
# delete files that have not been accessed in the last 90 days
find embeddings_cache_dir/ -atime +90 -print0 | xargs -0 rm
```

However, most filesystems in enterprise environments disable access time
modification on read to improve performance. As a result, the access
times of these cache entry files are not updated when their values are
read.

To resolve this, this pull request updates the `LocalFileStore`
constructor to offer an `update_atime` parameter that causes access
times to be updated when a cache entry is read.

For example,

```python
file_store = LocalFileStore(temp_dir, update_atime=True)
```

The default is `False`, which retains the original behavior.

**Testing:**
I updated the LocalFileStore unit tests to test the access time update.
2024-05-06 11:52:29 -04:00
Tomaz Bratanic
5b6d1a907d Add the extract types to diffbot graph transformer (#21315)
Previously you could only extract triples (Diffbot calls them facts) from
Diffbot, to avoid isolated nodes. However, isolated nodes can sometimes
still be useful, for example for prefiltering, so we want to allow users to
extract them if they want. Default behaviour is unchanged.
2024-05-06 09:19:52 -04:00
Jagadish Krishnamoorthy
c038991590 docs: Update pandas.ipynb (#21289)
Remove the redundant comment.
2024-05-05 20:22:17 +00:00
aditya thomas
b868c78a12 partners[anthropic]: update unit test for key passed in from the environment (#21290)
**Description:** Update unit test for ChatAnthropic
**Issue:** Test for key passed in from the environment should not have
the key initialized in the constructor
**Dependencies:** None
2024-05-05 16:19:10 -04:00
tanersekmen
d310f9c71e docs:update code structure (#21302)
update the structure of llm_chain variables

Co-authored-by: tanersemenn <0418>
2024-05-05 17:18:15 +00:00
Christophe Bornet
ba9dc04ffa docs: Add doc for hybrid search (#21245)
See
[preview](https://langchain-git-fork-cbornet-doc-hybrid-search-langchain.vercel.app/docs/use_cases/question_answering/hybrid/)

In the model of [per user
retrieval](https://python.langchain.com/docs/use_cases/question_answering/per_user/)
2024-05-04 08:22:56 -04:00
Rohan Aggarwal
8021d2a2ab community[minor]: Oraclevs integration (#21123)
Thank you for contributing to LangChain!

- Oracle AI Vector Search
Oracle AI Vector Search is designed for Artificial Intelligence (AI)
workloads and allows you to query data based on semantics rather than
keywords. One of its biggest benefits is that semantic search on
unstructured data can be combined with relational search on business data
in one single system. This is not only powerful but also significantly more
effective, because you don't need to add a specialized vector database,
eliminating the pain of data fragmentation between multiple systems.
This pull request adds the following functionality:
- Oracle AI Vector Search: Vector Store
- Oracle AI Vector Search: Document Loader
- Oracle AI Vector Search: Document Splitter
- Oracle AI Vector Search: Summary
- Oracle AI Vector Search: Oracle Embeddings


- We have added unit tests and have our own local unit test suite which
verifies all the code is correct. We have made sure to add guides for
each of the components and one end to end guide that shows how the
entire thing runs.


- We have made sure that make format and make lint run clean.


---------

Co-authored-by: skmishraoracle <shailendra.mishra@oracle.com>
Co-authored-by: hroyofc <harichandan.roy@oracle.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-04 03:15:35 +00:00
ccurme
c9e9470c5a langchain: fix deprecation decorators on extraction chains (#21276)
Calling any of these raises
```
ValueError: A pending deprecation cannot have a scheduled removal
```
2024-05-03 18:29:40 -04:00
Wickes Wong
ee1adaacaa langchain[patch]: Fix summary buffer memory with return message flag (#21115)
## Description
The memory return type can be set to `str` or messages via the
`return_messages` flag, as described in
https://python.langchain.com/docs/modules/memory/#whether-memory-is-a-string-or-a-list-of-messages,
but `langchain.chains.conversation.memory.ConversationSummaryBufferMemory`
did not implement that.
This commit adds the `buffer_as_str` and `buffer_as_messages` functions, and
`buffer` is now affected by the `return_messages` flag.

## Example Test Code and Output

```python
# Fix: ConversationSummaryBufferMemory with return_messages flag function
# Test code
from langchain.chains.conversation.memory import ConversationSummaryBufferMemory
from langchain_community.llms.ollama import Ollama

llm = Ollama()

# Create an instance of ConversationSummaryBufferMemory with return_messages set to True
memory = ConversationSummaryBufferMemory(return_messages=True, llm=llm)

# Add user and AI messages to the chat memory
memory.chat_memory.add_user_message("hi!")
memory.chat_memory.add_ai_message("what's up?")

# Print the buffer
print("Buffer:")
print(*map(type, memory.buffer), sep="\n")
print(memory.buffer, "\n")

# Print the buffer as a string
print("Buffer as String:")
print(type(memory.buffer_as_str))
print(memory.buffer_as_str, "\n")

# Print the buffer as messages
print("Buffer as Messages:")
print(*map(type, memory.buffer_as_messages), sep="\n")
print(memory.buffer_as_messages, "\n")

# Print the buffer after setting return_messages to False
memory.return_messages = False
print("Buffer after setting return_messages to False:")
print(type(memory.buffer))
print(memory.buffer, "\n")
```

```plaintext
Buffer:
<class 'langchain_core.messages.human.HumanMessage'>
<class 'langchain_core.messages.ai.AIMessage'>
[HumanMessage(content='hi!'), AIMessage(content="what's up?")] 

Buffer as String:
<class 'str'>
Human: hi!
AI: what's up? 

Buffer as Messages:
<class 'langchain_core.messages.human.HumanMessage'>
<class 'langchain_core.messages.ai.AIMessage'>
[HumanMessage(content='hi!'), AIMessage(content="what's up?")] 

Buffer after setting return_messages to False:
<class 'str'>
Human: hi!
AI: what's up? 
```

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-05-03 17:25:09 -04:00
Leonid Ganeline
9639457222 community[patch]: tools imports (#21156)
Issue: we have several helper functions to import third-party libraries
like tools.gmail.utils.import_google in
[community.tools](https://api.python.langchain.com/en/latest/community_api_reference.html#id37).
And we have core.utils.utils.guard_import that works exactly for this
purpose.
The import_<package> functions behave inconsistently and should rather be
private functions.
Change: replaced these functions with the guard_import function.

Related to #21133
2024-05-03 17:22:45 -04:00
Leonid Ganeline
3ef8b24277 core[patch]: utils.guard_import fix (#21133)
Issues (nit):
1. `utils.guard_import` prints the wrong error message when there is an
import error. It prints the whole `module_name`, but it should print only
the first part as the pip package name, e.g. for `langchain_core.utils` it
should print `langchain-core`, not `langchain_core.utils`. It should also
replace '_' with '-' in the pip package name.
2. It does not handle the `ModuleNotFoundError` that is raised by
`guard_import("wrong_module")`.

Fixed the issues and added unit tests. Controversial: I've reraised
`ModuleNotFoundError` as `ImportError`, since in either case the proposed
action is the same: install the missing package.
2024-05-03 17:21:36 -04:00
Erick Friis
36c2ca3c8b mistralai: relax tokenizers dep (#21277) 2024-05-03 14:16:22 -07:00
Nuno Campos
6e1e0c7d5c fix: core: draw_mermaid() would create subgroup for edges with same src and tgt (#21275)
2024-05-03 13:51:08 -07:00
Eugene Yurtsev
26a37dce0a langchain[patch]: Remove jsonpatch from poetry file (#21272)
jsonpatch is only used in langchain-core not in langchain
2024-05-03 15:46:05 -04:00
Eugene Yurtsev
335bd01e45 langchain[patch]: Update deprecation warning (#21268)
Update deprecation warning
2024-05-03 15:31:29 -04:00
Leonid Ganeline
23a05c3986 langchain: summarize chain fix (#21266)
Issue: `load_summarize_chain` is placed in the __init__.py file. As a
result, it isn't listed in the API Reference docs.
Change: moved the code from __init__.py into a new file.
2024-05-03 14:44:39 -04:00
ccurme
6da3d92b42 (all): update removal in deprecation warnings from 0.2 to 0.3 (#21265)
We are pushing out the removal of these to 0.3.

`find . -type f -name "*.py" -exec sed -i ''
's/removal="0\.2/removal="0.3/g' {} +`
2024-05-03 14:29:36 -04:00
Eugene Yurtsev
d6e34f9ee5 langchain[patch]: Improve deprecation warnings (#21262)
* Remove spurious deprecation warning
* Make deprecation warnings consistent with 0.1 namespaces that were announced as deprecated
2024-05-03 13:40:16 -04:00
Eugene Yurtsev
487aff7e46 langchain[patch]: Revert 20794 until 0.2 release (#21257)
PR #20794 was already released as part of 0.1.17rc.


Issue for 0.2 release:
https://github.com/langchain-ai/langchain/issues/21080
2024-05-03 17:02:48 +00:00
Eugene Yurtsev
ba4a309d98 langchain[patch]: Revert breaking change until 0.2 release (#21256)
Reverts a minor breaking change until 0.2 release
2024-05-03 09:42:27 -07:00
Eugene Yurtsev
66a1e3f083 langchain[patch]: Fix flaky unit test (#21258)
Should sort the results of the import test since it depends on import order
2024-05-03 15:55:46 +00:00
Eugene Yurtsev
0989c48028 langchain[minor]: Re-add deleted ainetwork tool (#21254)
* Adding __init__.py to turn it into a package in community
* Adding proxy imports that assume that langchain_community is optional
2024-05-03 11:39:40 -04:00
Christophe Bornet
2fbe82f5e6 community[minor]: Relax constraints on CassandraChatMessageHistory constructor (#21241) 2024-05-03 10:20:39 -04:00
Chris Germann
3a8d1d8838 Hotfix RetrievalQA Docs: docs: Fix formatting (#21183)
# Newline Characters breaking formatting 

**Description**: 
As you can see in the image below, the formatting in the documentation
is broken. As far as I can see, the two added `\n` characters are
breaking it, so I propose removing them.

![image](https://github.com/langchain-ai/langchain/assets/88305668/23b6e726-71b2-4812-91ea-3e8600683733)

**Dependencies**:
None

**Twitter Handle**
- epu9byj

---------

Co-authored-by: gere <gere@kapo.zh.ch>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-05-03 12:46:29 +00:00
andyjessen
64e17bd793 docs: Fix comment within "handle long text" example (#21248)
The current doc-string comment is referring to the wrong schema.
2024-05-03 12:36:53 +00:00
Daniel Glogowski
c3d169ab00 docs: Update Nvidia documentation (#21240)
Updating Nvidia docs ahead for 5/15 competition. 

Thanks!
2024-05-03 12:29:03 +00:00
Bagatur
70bde15480 docs: add tool choice to tool calling (#21229) 2024-05-03 03:10:22 -04:00
Bagatur
67a5cc34c6 openai[patch]: Release 0.1.6 (#21236) 2024-05-03 04:10:39 +00:00
Erick Friis
c1eb95b967 core: release 0.1.50 (#21230) 2024-05-02 22:44:18 +00:00
Nuno Campos
47ce8d5a57 core: tracer: remove numeric execution order (#21220)
- This hasn't been used in a long time and requires some additional
bookkeeping that I'm going to streamline in the next PR.
2024-05-02 15:38:55 -07:00
Bagatur
6ac6158a07 openai[patch]: support tool_choice="required" (#21216)
Co-authored-by: ccurme <chester.curme@gmail.com>
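
A hedged usage sketch of what this enables (the tool and model name are illustrative):

```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI


@tool
def get_weather(city: str) -> str:
    """Look up the current weather for a city."""
    return f"Sunny in {city}"


llm = ChatOpenAI(model="gpt-4o")  # placeholder model name
# tool_choice="required" forces the model to call at least one tool
# instead of replying with plain text.
forced = llm.bind_tools([get_weather], tool_choice="required")
print(forced.invoke("What's the weather in Paris?").tool_calls)
```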
2024-05-02 18:33:25 -04:00
Erick Friis
aa9faa8512 docs: model table keywords, remove tool calling from llm (#21225) 2024-05-02 21:04:29 +00:00
xindoo
c1aa237bc2 langchain: fix syntax error in code comment for create_tool_calling_agent (#21205)
**PR message**:
- **Description:** Corrected a syntax error in the code comments within
the `create_tool_calling_agent` function in the langchain package.
- **Issue:** N/A
- **Dependencies:** No additional dependencies required.
- **Twitter handle:** N/A
2024-05-02 19:17:23 +00:00
ccurme
eb0a2fd53a mistral: release 0.1.6 (#21214) 2024-05-02 13:59:19 -04:00
ccurme
2d77e5e3a1 (standard tests): add test for basic conversation sequence (#21213) 2024-05-02 13:47:10 -04:00
Maxime Perrin
1ebb5a70ad partners(mistralai): Removing unused variable in completion request (using tool_calls or content) (#21201)
This PR fixes #21196.

The error was occurring when calling the chat completion API with a chat
history. Indeed, the Mistral API does not accept both `content` and
`tool_calls` in the same body.

This PR removes one of these variables, depending on which is needed.

---------

Co-authored-by: Maxime Perrin <mperrin@doing.fr>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-05-02 13:20:14 -04:00
Christophe Bornet
683fb45c6b community[patch]: Refactor CassandraDatabase wrapper (#21075)
* Introduce individual `fetch_` methods for easier typing.
* Rework some docstrings to google style
* Move some logic to the tool
* Merge the 2 cassandra utility files
2024-05-02 13:13:08 -04:00
Bagatur
b00fd1dbde infra: Undo gh cache removal (#21210)
Co-authored-by: Nuno Campos <nuno@langchain.dev>
2024-05-02 17:12:32 +00:00
Aditya
ee2c55ca09 docs: Added documentation on Anthropic models on vertex (#21070)
Description:Added documentation on Anthropic models on Vertex
@lkuligin for review

---------

Co-authored-by: adityarane@google.com <adityarane@google.com>
2024-05-02 13:12:01 -04:00
Raghav Dixit
7d451d0041 community[patch]: Update lancedb.py (#21192)
very minor update in LanceDB integration, 'metric' argument was missing.
2024-05-02 17:06:39 +00:00
Bagatur
d297d90ad9 core[patch]: Release 0.1.49 (#21211) 2024-05-02 17:06:27 +00:00
Nuno Campos
663747b730 core[patch]: Fixes for convert_messages (#21207)
- support two-tuples of any sequence type (eg. json.loads never produces
tuples)
- support type alias for role key
- if id is passed in in dict form use it
- if tool_calls passed in in dict form use them
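
A small sketch of the inputs this now accepts (assuming `convert_to_messages` from `langchain_core.messages`):

```python
from langchain_core.messages import convert_to_messages

messages = convert_to_messages(
    [
        # Two-item sequences of any type now work (json.loads produces lists, not tuples).
        ["human", "hi there"],
        # Role aliases and ids supplied in dict form are preserved.
        {"role": "assistant", "content": "hello", "id": "msg-1"},
    ]
)
print([type(m).__name__ for m in messages])  # ['HumanMessage', 'AIMessage']
```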

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-02 16:55:42 +00:00
Eugene Yurtsev
df49404794 langchain[patch]: Make more memory code handle community dependency as optional (#21199) 2024-05-02 11:05:26 -04:00
ccurme
bd5d2c2674 langchain: import InMemoryChatMessageHistory from core (#21198) 2024-05-02 14:53:07 +00:00
Eugene Yurtsev
3cd7fced5f langchain[patch],community[minor]: Migrate memory implementations to community (#20845)
Migrates memory implementations to community
2024-05-02 10:46:50 -04:00
Eugene Yurtsev
b5c3a04e4b langchain[patch]: chat histories to handle optional community dependence (#21194) 2024-05-02 10:36:08 -04:00
Eugene Yurtsev
c9119b0e75 langchain[patch],community[minor]: Move some unit tests from langchain to community, use core for fake models (#21190) 2024-05-02 09:57:52 -04:00
Eugene Yurtsev
c306364b06 langchain[patch]: Update more code to use langchain community as an optional dependency (#21170)
More code to use langchain community as an optional dependency
2024-05-02 09:05:48 -04:00
Erick Friis
cd4c54282a infra: cleanup docs build (#21134)
Refactors the docs build in order to:
- run the same `make build` command in both vercel and local build
- incrementally build artifacts in 2 distinct steps, instead of building
all docs in-place (in vercel) or in a _dist dir (locally)

Highlights:
- introduces `make build` in order to build the docs
- collects and generates all files for the build in
`docs/build/intermediate`
- renders those jupyter notebook + markdown files into
`docs/build/outputs`

And now the outputs to host are in `docs/build/outputs`, which will need
a vercel settings change.

Todo:
- [ ] figure out how to point the right directory (right now deleting
and moving docs dir in vercel_build.sh isn't great)
2024-05-01 17:34:05 -07:00
Bagatur
6fa8626e2f openai[patch]: fix azure open lc serialization, release 0.1.5 (#21159) 2024-05-01 18:03:29 -04:00
Eugene Yurtsev
94a838740e langchain[patch]: Migrate more code in utils to use optional langchain import (#21166)
Moving the is-interactive util to avoid circular deps
2024-05-01 17:18:42 -04:00
Eugene Yurtsev
23fdd320bc langchain[patch]: Migrate more code to use optional community in agents namespace (#21167) 2024-05-01 16:25:44 -04:00
Tomaz Bratanic
9e53fa7d2e Some more fixes to neo4j enhanced schema (#21139) 2024-05-01 13:12:43 -07:00
Erick Friis
0694538c39 ai21: fix core version (#21168) 2024-05-01 13:10:22 -07:00
Eugene Yurtsev
44602bdc20 langchain[patch],community[minor]: Move load_tools to community (#21158)
Move load tools to community
2024-05-01 16:05:41 -04:00
Eugene Yurtsev
9932f49b3e langchain[patch]: Migrate llms to use optional community imports (#21101) 2024-05-01 16:04:45 -04:00
Eugene Yurtsev
57e8e70daa langchain[patch]: Migrate chat models to optional community imports (#21090)
Migrate chat models to optional community imports
2024-05-01 16:04:12 -04:00
Eugene Yurtsev
2914abd747 langchain[patch]: Fix how the serializable test identifies serializable objects (#21165)
dir() will not work if we're using optional imports. The only way to do this is by using contents of __all__
2024-05-01 15:56:11 -04:00
Eugene Yurtsev
23c5d87311 langchain[patch]: Migrate utils to use optional langchain_community (#21163)
Migrate utils to use optional imports from langchain community
2024-05-01 15:24:02 -04:00
Eugene Yurtsev
bec3eee3fa langchain[patch]: Migrate retrievers to use optional langchain community imports (#21155) 2024-05-01 14:44:44 -04:00
Eugene Yurtsev
43110daea5 langchain[patch]: Update some agent tool kits to handle community import as optional (#21157)
A few things that were not caught by the migration script
2024-05-01 14:22:54 -04:00
Eugene Yurtsev
59f10ab3e0 langchain[patch]: Migrate embeddings to optional imports (#21099) 2024-05-01 13:47:37 -04:00
Eugene Yurtsev
2f709d94d7 langchain[patch]: Migrate vectorstores to use optional langchain community imports (#21150) 2024-05-01 13:33:37 -04:00
Eugene Yurtsev
7230e430db langchain[patch]: Migrate top level files to use optional langchain community (#21152)
Migrate a few top level files to treat langchain community as an optional dependency
2024-05-01 13:23:03 -04:00
Erick Friis
daab9789a8 ai21: release 0.1.4 (#21151) 2024-05-01 17:16:27 +00:00
Asaf Joseph Gardin
642975dd9f partners: AI21 Labs Jamba Support (#20815)
Description: Added support for AI21 new model - Jamba
Twitter handle: https://github.com/AI21Labs

---------

Co-authored-by: Asaf Gardin <asafg@ai21.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-05-01 10:12:44 -07:00
Eugene Yurtsev
7a39fe60da langchain[patch]: Migrate utilities to handle langchain community as optional (#21149) 2024-05-01 13:09:34 -04:00
Eugene Yurtsev
b879184595 langchain[patch]: embedddings distance move import of openai embeddings into local scope (#21148) 2024-05-01 12:51:51 -04:00
Bagatur
8b4b75e543 docs: standardize vertexai params (#20167)
Related to #20085

Requires https://github.com/langchain-ai/langchain-google/pull/121
2024-05-01 11:42:18 -04:00
Eugene Yurtsev
0e5bf16d00 langchain[patch]: Migrate document loaders to use optional langchain community imports (#21095) 2024-05-01 11:26:25 -04:00
Jacob Lee
bd38073d76 👥 Update LangChain people data (#21143)
👥 Update LangChain people data

Co-authored-by: github-actions <github-actions@github.com>
2024-05-01 11:01:43 -04:00
Harrison Chase
4d1c21d97d community[patch]: Fix alternative name in deprecation notice for sql_database (#21144) 2024-05-01 10:59:42 -04:00
East Agile
2a6f78a53f community[minor]: Rememberizer retriever (#20052)
**Description:**
This pull request introduces a new feature for LangChain: the
integration with the Rememberizer API through a custom retriever.
This enables LangChain applications to let users load and sync their data
from Dropbox, Google Drive, Slack, or their hard drive into a vector
database that LangChain can query. Queries involve sending text chunks
generated within LangChain and retrieving a collection of semantically
relevant user data for inclusion in LLM prompts.
User knowledge dramatically improves AI applications.
The Rememberizer integration will also allow users to access general
purpose vectorized data such as Reddit channel discussions and US
patents.

**Issue:**
N/A

**Dependencies:**
N/A

**Twitter handle:**
https://twitter.com/Rememberizer
2024-05-01 10:41:44 -04:00
Eugene Yurtsev
1ce1a10f2b langchain[patch],community[minor]: Move graph index creator (#20795)
Move graph index creator to community
2024-05-01 10:04:30 -04:00
Eugene Yurtsev
aa0bc7467c langchain[patch]: Migrate agents module into optional imports for community (#21088) 2024-05-01 09:36:03 -04:00
Eugene Yurtsev
86ff8a3fb4 langchain[patch]: Update docstore module to use optional imports from community (#21091) 2024-05-01 09:35:05 -04:00
Eugene Yurtsev
d640605694 langchain[patch]: Migrate chat loaders to optional community imports (#21089)
Migrate chat loaders to optional community imports
2024-05-01 09:34:44 -04:00
Charlie Marsh
2b10c4dd52 ci: Use ruff check in Makefile (#21138)
## Summary

`ruff /path/to/file.py` works but is deprecated, and we now recommend
`ruff check /path/to/file.py` (to match `ruff format /path/to/file.py`).
2024-05-01 09:34:15 -04:00
Eugene Yurtsev
2fcab9acd9 langchain[patch]: Upgrade storage to treat langchain community as optional (#21105) 2024-05-01 09:33:31 -04:00
William FH
ab55f6996d [Core] Tracing: update parent run_tree's child_runs (#21049) 2024-05-01 06:33:08 -07:00
Abhishek Bhagwat
86fe484e24 docs: Docs (sample notebook) for Vertex DIY RAG Ranking API (#21054)
Vertex DIY RAG APIs helps to build complex RAG systems and provide more
granular control, and are suited for custom use cases.

The Ranking API takes in a list of documents and reranks those documents
based on how relevant the documents are to a given query. Compared to
embeddings that look purely at the semantic similarity of a document and
a query, the ranking API can give you a more precise score for how well
a document answers a given query.


[Reference](https://cloud.google.com/generative-ai-app-builder/docs/ranking)

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-01 05:39:39 +00:00
Stuart Leeks
8a01760a0f infra: Sync devcontainer.json and compose file mount location (#20461)
**Sync the config in `devcontainer.json` and `docker-compose.yml`**

Issue: when opening the current `master` branch in a dev container in VS
Code, I get the following message as VS Code cannot find the mounted
source folder:


![image](https://github.com/langchain-ai/langchain/assets/1824461/41cf20c0-d1e0-4648-9578-edf80b99c2db)

Opening in a GitHub Codespace works (it seems to ignore the mounts in
the `docker-compose.yml`.

This PR updates the mount in `docker-compose.yml` and the config in
`devcontainer.json` so that the two align.

I have tested these changes in GitHub Codespaces and a VS Code dev
container and both loaded successfully.

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-05-01 01:32:12 -04:00
aditya thomas
12b1caf295 openai[patch]: add tests for secret_str for keys (#20982)
**Description:** Add tests to check API keys and Active Directory tokens
are masked
**Issue:** Resolves #12165 for OpenAI and Azure OpenAI models
**Dependencies:** None

Also resolves #12473 which may be closed.

Additional contributors @alex4321 (#12473) and @onesolpark (#12542)
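
A minimal sketch of the kind of masking check these tests perform (the field
name follows the description above; treat the snippet as an illustrative
sketch rather than a copy of the added tests):

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(openai_api_key="my-secret-key")
assert "my-secret-key" not in repr(llm)  # raw key never appears in repr/str
assert llm.openai_api_key.get_secret_value() == "my-secret-key"  # still usable internally
```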
2024-05-01 01:26:20 -04:00
Noah
45ddf4d26f community[patch]: Update comments for lazy_load method (#21063)
- [ ] **PR message**: 
- **Description:** Refactored the lazy_load method to use asynchronous
execution for improved performance. The method now initiates scraping of
all URLs simultaneously using asyncio.gather, enhancing data fetching
efficiency. Each Document object is yielded immediately once its content
becomes available, streamlining the entire process.
    - **Issue:** N/A
- **Dependencies:** Requires the asyncio library for handling
asynchronous tasks, which should already be part of standard Python
libraries in Python 3.7 and above.
    - **Email:** [r73327118@gmail.com](mailto:r73327118@gmail.com)

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-01 01:20:57 -04:00
Liu Xiaodong
3b473d10f2 experimental: clean python repl input(experimental:Added code for PythonREPL) (#20930)
Update python.py(experimental:Added code for PythonREPL)

Added code for PythonREPL, defining a static method `sanitize_input`
that takes the string `query` as input and returns a sanitized string.
The purpose of this method is to remove unwanted characters from the
input string. Specifically:

1. Strip the whitespace at the beginning and end of the string.
2. Remove the backticks (`` ` ``) at the beginning and end of the string.
3. Remove the keyword "python" at the beginning of the string (case
insensitive), because the user may have typed it.

This method uses regular expressions (regex) to implement the sanitizing.
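
A minimal sketch of such a sanitizer (an illustration of the three steps
above, not the exact implementation):

```python
import re


def sanitize_input(query: str) -> str:
    """Strip surrounding whitespace, backtick fences, and a leading 'python' keyword."""
    query = query.strip()                                 # 1. leading/trailing whitespace
    query = re.sub(r"^`+|`+$", "", query)                 # 2. backticks around the snippet
    query = re.sub(r"^python\s*", "", query, flags=re.I)  # 3. optional leading "python"
    return query.strip()
```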

It all started with this code:
from langchain.agents import Tool
from langchain_experimental.utilities import PythonREPL

python_repl = PythonREPL()
repl_tool = Tool(
    name="python_repl",
    description=(
        "Remove redundant formatting marks at the beginning and end of "
        "source code from input. Use a Python shell to execute python commands. "
        "If you want to see the output of a value, you should print it out "
        "with `print(...)`."
    ),
    func=python_repl.run,
)

When I asked the agent to write a piece of code and execute it with the
tool defined above, I would consistently get an error: SyntaxError('invalid
syntax', ('<string>', 1, 1, 'In', 1, 2))

After checking, I found that PythonREPL does less formatting of the input
code than the soon-to-be-deprecated PythonREPL tool, so I added this step
so that whatever code I ask the agent to write for me can be executed
smoothly and produce its output. I had tried modifying the prompt to solve
this problem before, but it did not work; adding a simple format check
resolves the problem well.
<img width="1271" alt="image"
src="https://github.com/langchain-ai/langchain/assets/164149097/c49a685f-d246-4b11-b655-fd952fc2f04c">

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-01 05:19:09 +00:00
Ismail Hossain Polas
1fdf63fa6c community[patch]: update package name to bagelML (#19948)
**Description**
This pull request updates the Bagel Network package name from
"betabageldb" to "bagelML" to align with the latest changes made by the
Bagel Network team.

The following modifications have been made:

- Replaced all references to the old package name ("betabageldb") with
the new package name ("bagelML") throughout the codebase.
- Modified the documentation, and any relevant scripts to reflect the
package name change.
- Tested the changes to ensure that the functionality remains intact and
no breaking changes were introduced.

By merging this pull request, our project will stay up to date with the
latest Bagel Network package naming convention, ensuring compatibility
and smooth integration with their updated library.

Please review the changes and provide any feedback or suggestions. Thank
you!
2024-05-01 01:17:33 -04:00
Tomaz Bratanic
7860e4c649 experimental[patch]: Add support for non-function calling LLMs in llm graph transformers (#21014) 2024-05-01 01:16:07 -04:00
Erick Friis
67e6744e0f docs: fix some notebook formatting (#21136) 2024-04-30 21:39:03 -07:00
tianzedavid
5a8909440b docs: remove repetitive words (#21058)
remove repetitive words

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-05-01 01:10:42 +00:00
Leonid Kuligin
a36935b520 docs: updated docs on langchain_google_community (#21064)
- [ ] **PR title**: "docs: updated docs on langchain_google_community"


- [ ] **PR message**:
    - **Description:** updated docs on langchain_google_community
2024-04-30 20:20:49 -04:00
Tomaz Bratanic
c9e96bb5e2 community[patch]: Fix neo4j enhanced schema bugs (#21072) 2024-04-30 20:16:26 -04:00
junkeon
8d2909ee25 upstage[minor]: Update few codes and add upstage loader in pdf section (#21085)
**Description:** Update UpstageLayoutAnalysisParser and Loader and add
upstage loader example in pdf section
**Dependencies:** langchain_community
**Twitter handle:** [@upstageai](https://twitter.com/upstageai)

2024-04-30 20:15:49 -04:00
Bagatur
bef50ded63 openai[patch]: fix special token default behavior (#21131)
By default handle special sequences as regular text
2024-04-30 20:08:24 -04:00
MacanPN
0f7f448603 community[patch]: add delete() method to AzureSearch vector store (#21127)
**Issue:**
Currently `AzureSearch` vector store does not implement `delete` method.
This PR implements it. This also makes it compatible with LangChain
indexer.
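
A hedged sketch of calling the new method (it follows the standard
`VectorStore.delete` signature; the ids are illustrative):

```python
# assumes an already-configured AzureSearch instance named `vector_store`
vector_store.delete(ids=["doc-id-1", "doc-id-2"])  # remove documents by id
```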

**Dependencies:**
None

**Twitter handle:**
@martintriska1

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-30 23:46:18 +00:00
Jorge Piedrahita Ortiz
3441a11b21 docs: minor changes in sambanova community integration docs (#21129)
- **Description:** minor changes in sambanova community integration
notebook docs

---------

Co-authored-by: Renate Kempf <165940384+renate-snova@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-30 23:44:26 +00:00
Bagatur
6d3e9eaf84 docs: format (#21132) 2024-04-30 23:32:41 +00:00
Erick Friis
14422a4220 langchain: fix core dep (#21128) 2024-04-30 14:55:12 -07:00
Erick Friis
6c938da302 langchain: release 0.1.17 (#21125) 2024-04-30 14:43:59 -07:00
Erick Friis
5f8a307565 infra: same tagging for langchain (#21126) 2024-04-30 14:43:45 -07:00
Eugene Yurtsev
bf95414758 langchain[minor]: enhance unit test to test imports recursively (#21122) 2024-04-30 17:05:53 -04:00
Eugene Yurtsev
e4f51f59a2 langchain[patch]: Migrate tools to treat community imports as optional (#21117)
Migrate tools to treat community imports as optional
2024-04-30 16:26:18 -04:00
Eugene Yurtsev
9e788f09c6 langchain[patch]: Migrate output parsers to support optional community imports (#21103)
Migrate output parsers
2024-04-30 16:24:29 -04:00
Eugene Yurtsev
3853fe9f64 langchain[patch]: Migrate graphs to use optional community imports (#21100)
Migrate graphs to use optional community imports.
2024-04-30 16:24:06 -04:00
Eugene Yurtsev
8658d52587 langchain[patch]: Upgrade prompts to optional imports (#21078)
Upgrades prompts module to use optional imports.

This code was generated with a migration script, but had to be adjusted
manually a bit.

Testing in preparation for applying this code modification across the
rest of the modules in langchain package to reverse the dependency
between langchain community and langchain.
2024-04-30 16:23:39 -04:00
Eugene Yurtsev
9b6d04a187 langchain[patch]: Migrate document transformers (#21098)
Migrate document transformers
2024-04-30 16:20:02 -04:00
Eugene Yurtsev
aec13a6123 langchain[patch]: Migrate callbacks module to use optional imports for community (#21086) 2024-04-30 16:19:13 -04:00
Erick Friis
8a62fb0570 community: release 0.0.36 (#21118) 2024-04-30 13:18:44 -07:00
Erick Friis
2407c353be core: release 0.1.48 (#21113) 2024-04-30 19:52:36 +00:00
Erick Friis
dbdfa3d34e infra: fix minimum version install to force pypi install (#21112) 2024-04-30 12:41:26 -07:00
Charlie Marsh
fd94aa8366 partner[patch]: Upgrade to Ruff v0.4.2 (#21108)
## Summary

No new diagnostics (given that the set of enabled rules hasn't changed),
but gains access to our new parser (much faster) and reduced false
positives all around.
2024-04-30 15:06:42 -04:00
Jamsheed Mistri
3e749369ef community[minor]: bump version of LayerupSecurity, add support for untrusted_input parameter (#19985)
**Description:** update version of LayerupSecurity package for the
Layerup Security integration. Add untrusted_input parameter.
2024-04-30 14:55:26 -04:00
fubuki8087
f1c3687aa5 community[patch]: Using the right encoding to parse the web page in RecursiveUrlLoader (#20632)
As shown in #13749 , `RecursiveUrlLoader` has encoding issue. This PR is
to solve this.

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-30 18:41:36 +00:00
Jakub Pawłowski
b0b1a67771 community[patch]: Skip unexpected 404 HTTP Error in Arxiv download (#21042)
### Description:
When attempting to download PDF files from arXiv, an unexpected 404
error frequently occurs. This error halts the operation, regardless of
whether there are additional documents to process. As a solution, I
suggest implementing a mechanism to ignore and communicate this error
and continue processing the next document from the list.

Proposed Solution: To address the issue of unexpected 404 errors during
PDF downloads from arXiv, I propose implementing the following solution:

- Error Handling: Implement error handling mechanisms to catch and
handle 404 errors gracefully.
- Communication: Inform the user or logging system about the occurrence
of the 404 error.
- Continued Processing: After encountering a 404 error, continue
processing the remaining documents from the list without interruption.

This solution ensures that the application can handle unexpected errors
without terminating the entire operation. It promotes resilience and
robustness in the face of intermittent issues encountered during PDF
downloads from arXiv.

### Issue:
#20909 
### Dependencies:
none

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-30 18:29:22 +00:00
Erick Friis
b9c53e95b7 community: release 0.0.35 (#21104) 2024-04-30 17:48:56 +00:00
Eugene Yurtsev
3c064a757f core[minor],langchain[patch],community[patch]: Move storage interfaces to core (#20750)
* Move storage interface to core
* Move in memory and file system implementation to core
2024-04-30 13:14:26 -04:00
Charlie Marsh
8f38b7a725 multiple: Remove unnecessary Ruff suppression comments (#21050)
## Summary

I ran `ruff check --extend-select RUF100 -n` to identify `# noqa`
comments that weren't having any effect in Ruff, and then `ruff check
--extend-select RUF100 -n --fix` on select files to remove all of the
unnecessary `# noqa: F401` violations. It's possible that these were
needed at some point in the past, but they're not necessary in Ruff
v0.1.15 (used by LangChain) or in the latest release.

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-04-30 17:13:48 +00:00
Erick Friis
748f2ba9ea core: release 0.1.47 (#21094) 2024-04-30 09:22:05 -07:00
Erick Friis
efe27ef849 infra: tag non-langchain releases (#20805) 2024-04-30 16:15:46 +00:00
Eugene Yurtsev
c8f18a2524 langchain[patch]: Update import handling in adapters (#21079) 2024-04-30 10:55:29 -04:00
William FH
5c63ac3dd7 [Patch] Dedent docstring (#20959)
Technically a slight prompt-breaking change, but I think it has positive EV
in that it saves tokens and results in more sane / in-distribution prompts
2024-04-30 07:40:57 -07:00
Eugene Yurtsev
845d8e0025 langchain[patch]: Update handling of deprecation warnings (#21083)
Chains should not be emitting deprecation warnings.
2024-04-30 10:30:23 -04:00
Christophe Bornet
5c77f45b06 community[minor]: Add async methods to CassandraCache and CassandraSemanticCache (#20654) 2024-04-30 10:27:44 -04:00
Christophe Bornet
d6e9bd3011 docs: Bump cassio min version in docs (#21081)
Cassio 0.6+ is recommended for async vector store (not blocking on
getting the embedding dimension) and for hybrid search support.
2024-04-30 10:25:37 -04:00
William FH
db14d4326d [Core] Feat Pretty Print Tool calls (#20997)
Right now, `tool_calls` are not included in the `pretty_print()` output.
Would be nice to show!


![image](https://github.com/langchain-ai/langchain/assets/13333726/6a0ffca3-d02f-4e18-bc76-513eeca2e964)
2024-04-30 07:14:43 -07:00
Kuro Denjiro
fa4124b821 community[minor]: add mintbase loader to langchain (#20089)
- [x] **Add NEAR NFT loader**: "community: load NFTs from the NEAR
blockchain using the Mintbase graph API"

- [x] **PR message**:
    - **Twitter handle:** Kurodenjiro

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-30 04:11:56 +00:00
Alexander Dicke
d7e12750df community[patch]: allows using text-generation-inference /generate route with HuggingFaceEndpoint (#20100)
- **Description:** allows to use the /generate route of
`text-generation-inference` with the `HuggingFaceEndpoint`
2024-04-29 23:09:55 -04:00
Jonathan Evans
ea43c669f2 community[patch]: Fix Bedrock Mistral stop sequence request key (#20115)
- **Description:** Change Bedrock's Mistral stop sequence key mapping to
"stop" rather than "stop_sequences"; "stop" is the correct key per the
[Bedrock
docs](https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-mistral.html):
`{
    "prompt": string,
    "max_tokens" : int,
    "stop" : [string],    
    "temperature": float,
    "top_p": float,
    "top_k": int
}`
- **Issue:** #20053 
- **Dependencies:** N/A
- **Twitter handle:** N/a
2024-04-29 20:14:36 -04:00
davidkgp
28b0b0d863 community[patch]: Fix for github issue #17690 (#20117)
…/17690

- [x] **Fix Google Lens knowledge graph issue**: "langchain: community"
- Fix for [No "knowledge_graph" property in Google Lens API call from
SerpAPI](https://github.com/langchain-ai/langchain/issues/17690)


- [x] **PR message**:
- **Description:** handled the existence of keys in the json response of
Google Lens
- **Issue:** [No "knowledge_graph" property in Google Lens API call from
SerpAPI](https://github.com/langchain-ai/langchain/issues/17690)




Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-30 00:10:08 +00:00
高远
a7a4630bf4 community[patch]: Modify the text field type and add new exception handling (#20116)
Co-authored-by: gaoyuan <gaoyuan.20001218@bytedance.com>
2024-04-29 20:06:00 -04:00
Rahul Triptahi
c172611647 community[patch]: Add classifier_url argument in PebbloSafeLoader and documentation update. (#21030)
Description: Add classifier_url argument in PebbloSafeLoader.
Documentation: Updated PebbloSafeLoader documentation with above change
and new links for pebblo github pages.

---------

Signed-off-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
Co-authored-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
2024-04-29 17:41:09 -04:00
Leonid Ganeline
08d08d7c83 docs: langchain docstrings updates (#21032)
Added missed docstings. Formatted docstrings into a consistent format.
2024-04-29 17:40:44 -04:00
Leonid Ganeline
85094cbb3a docs: community docstring updates (#21040)
Added missed docstrings. Updated docstrings to consistent format.
2024-04-29 17:40:23 -04:00
Rodrigo Nogueira
90f19028e5 community[patch]: Add maritalk streaming (sync and async) (#19203)
Co-authored-by: RosevalJr <rdmalajr@gmail.com>
Co-authored-by: Roseval Donisete Malaquias Junior <roseval@maritaca.ai>
2024-04-29 21:31:14 +00:00
Cahid Arda Öz
cc6191cb90 community[minor]: Add support for Upstash Vector (#20824)
## Description

Adding `UpstashVectorStore` to utilize [Upstash
Vector](https://upstash.com/docs/vector/overall/getstarted)!

#17012 was opened to add Upstash Vector to LangChain but was closed to
wait for filtering. Now that filtering has been added to Upstash Vector, we
are opening a new PR. Additionally, an [embedding
feature](https://upstash.com/docs/vector/features/embeddingmodels) was
added and we add it to our vector store as well.

## Dependencies

[upstash-vector](https://pypi.org/project/upstash-vector/) should be
installed to use `UpstashVectorStore`. Didn't update dependencies
because of [this comment in the previous
PR](https://github.com/langchain-ai/langchain/pull/17012#pullrequestreview-1876522450).

## Tests

Tests are added and they pass. Tests are naturally network bound since
Upstash Vector is offered through an API.

There was [a discussion in the previous PR about mocking the
unittests](https://github.com/langchain-ai/langchain/pull/17012#pullrequestreview-1891820567).
We didn't make changes to this end yet. We can update the tests if you
can explain how the tests should be mocked.

---------

Co-authored-by: ytkimirti <yusuftaha9@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-29 17:25:01 -04:00
Leonid Ganeline
1a2ff56cd8 core[patch]: docstring update (#21036)
Added missed docstrings. Updated docstrings to consistent format.
2024-04-29 15:35:34 -04:00
Eugene Yurtsev
f479a337cc langchain[patch]: replace deprecated imports with imports from langchain_core (#21033)
* Output of running the migration script.
* Ran only against langchain code itself and not the unit tests.
2024-04-29 15:34:31 -04:00
Eugene Yurtsev
82d4afcac0 langchain[minor]: Code to handle dynamic imports (#20893)
Proposing to centralize code for handling dynamic imports. This allows treating langchain-community as an optional dependency.

---

The proposal is to scan the code base and to replace all existing imports with dynamic imports using this functionality.
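
As a rough illustration (not the actual helper added in this PR), a module
can defer a community import until first attribute access using PEP 562's
module-level `__getattr__`; the re-exported class below is hypothetical:

```python
# hypothetical langchain module that lazily re-exports a community class
from typing import Any


def __getattr__(name: str) -> Any:
    if name == "FAISS":
        # imported only when the attribute is first accessed,
        # so langchain-community stays an optional dependency
        from langchain_community.vectorstores import FAISS
        return FAISS
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
```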
2024-04-29 15:34:03 -04:00
Erick Friis
854ae3e1de mistralai: release 0.1.5, allow client passing in (#21034) 2024-04-29 17:14:26 +00:00
chyroc
3e241956d3 community[minor]: add coze chat model (#20770)
add coze chat model, to call coze.com apis
2024-04-29 12:26:16 -04:00
Eugene Yurtsev
29493bb598 cli[minor]: improve confirmation message with more details (#21027)
Improve confirmation message with more details
2024-04-29 12:20:42 -04:00
Eugene Yurtsev
aab78a37f3 cli[patch]: Ignore imports that change the name of the class (#21026)
Not currently handled by the migration script
2024-04-29 12:20:30 -04:00
Massimiliano Pronesti
ce89b34fc0 community[patch]: support hybrid search with threshold in Azure AI Search Retriever (#20907)
Support hybrid search with a score threshold -- similar to what we do
for similarity search.
2024-04-29 12:11:44 -04:00
Andrei Panferov
b3efa38cc0 community[patch]: GigaChat model selection fix (#20988)
Fixed a bug where the model name was never actually put into the GigaChat
request payload, so requests always defaulted to `GigaChat-Lite`.

With this fix, model selection through
```python
import os
from langchain.chat_models.gigachat import GigaChat

chat = GigaChat(
    name="GigaChat-Pro", # <- HERE!!!!!
    ...
)
```
should actually work, as intended in
[here](804390ba4b/libs/community/langchain_community/llms/gigachat.py (L36)).

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-29 16:08:26 +00:00
Patrick McFadin
3331865f6b community[minor]: add Cassandra Database Toolkit (#20246)
**Description**: ToolKit and Tools for accessing data in a Cassandra
Database primarily for Agent integration. Initially, this includes the
following tools:
- `cassandra_db_schema` Gathers all schema information for the connected
database or a specific schema. Critical for the agent when determining
actions.
- `cassandra_db_select_table_data` Selects data from a specific keyspace
and table. The agent can pass parameters for a predicate and limits on
the number of returned records.
- `cassandra_db_query` Experimental alternative to
`cassandra_db_select_table_data` which takes a query string completely
formed by the agent instead of parameters. May be removed in future
versions.

Includes unit test and two notebooks to demonstrate usage. 

**Dependencies**: cassio
**Twitter handle**: @PatrickMcFadin

---------

Co-authored-by: Phil Miesle <phil.miesle@datastax.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-29 15:51:43 +00:00
Igor Brai
b3e74f2b98 community[minor]: add mojeek search util (#20922)
**Description:** This pull request introduces a new feature to community
tools, enhancing their search capabilities by integrating the Mojeek
search engine.
**Dependencies:** None

---------

Co-authored-by: Igor Brai <igor@mojeek.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: ccurme <chester.curme@gmail.com>
2024-04-29 15:49:53 +00:00
hmn falahi
4822beb298 Ignore self/cls from required args of class functions in convert_to_openai_tool (#20691)
Removed redundant self/cls from required args of class functions in
_get_python_function_required_args:

```python
class MemberTool:
    def search_member(
            self,
            keyword: str,
            *args,
            **kwargs,
    ):
        """Search on members with any keyword like first_name, last_name, email

        Args:
            keyword: Any keyword of member
        """

        headers = dict(authorization=kwargs['token'])
        members = []
        try:
            members = request_(
                method='SEARCH',
                url=f'{service_url}/apiv1/members',
                headers=headers,
                json=dict(query=keyword),
            )

        except Exception as e:
            logger.info(e.__doc__)

        return members

convert_to_openai_tool(MemberTool.search_member)
```
expected result:
```
{'type': 'function', 'function': {'name': 'search_member', 'description': 'Search on members with any keyword like first_name, last_name, username, email', 'parameters': {'type': 'object', 'properties': {'keyword': {'type': 'string', 'description': 'Any keyword of member'}}, 'required': ['keyword']}}}
```

#20685

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-29 11:46:26 -04:00
Rahul Triptahi
a64a1943fd docs: Document update for load_extended_matadata in GoogleDriveLoader (#20950)
Document: Updated google_drive.ipynb for loading the following extended
metadata:
 - full_path - full path of the file(s) in Google Drive.
 - owner - owner of the file(s).
 - size - size of the file(s).

Code changes:
[langchain-google/pull/179.](https://github.com/langchain-ai/langchain-google/pull/179)

Signed-off-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
Co-authored-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-29 11:41:57 -04:00
Eugene Yurtsev
4f4ee8e2cf cli[patch]: Update migrations file manually (#21021)
We need to replace occurrences of RunnableMap in the code, not just the
import, so for now we don't replace RunnableMap.
2024-04-29 10:53:31 -04:00
Tomaz Bratanic
67428c4052 community[patch]: Neo4j enhanced schema (#20983)
Scan the database for example values and provide them to an LLM for
better inference of Text2cypher
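
A hedged sketch of enabling this (the `enhanced_schema` flag is an
assumption based on the feature name, not a verified parameter):

```python
from langchain_community.graphs import Neo4jGraph

graph = Neo4jGraph(
    url="bolt://localhost:7687",
    username="neo4j",
    password="password",
    enhanced_schema=True,  # assumption: scan example values to enrich the schema shown to the LLM
)
print(graph.schema)
```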
2024-04-29 10:45:55 -04:00
Leonid Kuligin
dc70c23a11 docs: switched GCSLoaders docs to langchain-google-community (#20985)
- [ ] **PR title**: "docs: switched GCSLoaders docs to
langchain-google-community"

- [ ] **PR message**:
- **Description:** switched GCSLoaders docs to
langchain-google-community
2024-04-29 10:45:11 -04:00
aditya thomas
8b59bddc03 anthropic[patch]: add tests for secret_str for api key (#20986)
**Description:** Add tests to check API keys are masked
**Issue:** Resolves
https://github.com/langchain-ai/langchain/issues/12165 for Anthropic
models
**Dependencies:** None
2024-04-29 10:39:14 -04:00
Pengcheng Liu
1fad39be1c community[minor]: Add LarkSuite wiki document loader. (#21016)
**Description:** Add LarkSuite wiki document loader. Refer to the [LarkSuite
API
document](https://open.feishu.cn/document/server-docs/docs/wiki-v2/space-node/list) for
details.
**Issue:** None
**Dependencies:** None
**Twitter handle:** None
2024-04-29 10:37:50 -04:00
Tomaz Bratanic
d36332476c docs: Add neo4j relationship vector index docs (#20990)
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-29 14:36:47 +00:00
Leonid Ganeline
dc7c06bc07 community[minor]: import fix (#20995)
Issue: When a third-party package is not installed and we need to tell the
user to `pip install <package>`, an ImportError should be raised. But
sometimes a `ValueError` or `ModuleNotFoundError` is raised instead, which
is bad for consistency.
Change: replaced the `ValueError` or `ModuleNotFoundError` with
`ImportError` whenever we raise an error with the `pip install <package>`
message.
Note: Ideally, we would replace all `try: import ... except ... raise ...` with
helper functions like `import_aim`, or just use the existing
[langchain_core.utils.utils.guard_import](https://api.python.langchain.com/en/latest/utils/langchain_core.utils.utils.guard_import.html#langchain_core.utils.utils.guard_import),
but that would be a much bigger refactoring. @baskaryan please advise on
this.
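
For illustration, the consistent pattern described above looks roughly like
this (the package name is just an example):

```python
try:
    import aim  # optional third-party dependency
except ImportError as e:
    raise ImportError(
        "Could not import aim python package. "
        "Please install it with `pip install aim`."
    ) from e
```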
2024-04-29 10:32:50 -04:00
Karim Lalani
2ddac9a7c3 experimental[minor]: Add bind_tools and with_structured_output functions to OllamaFunctions (#20881)
Implemented `bind_tools` for OllamaFunctions.
Made OllamaFunctions a subclass of ChatOllama.
Implemented `with_structured_output` for OllamaFunctions.

The integration unit test has been updated.
The notebook has been updated.
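
A hedged usage sketch of the new `with_structured_output` API (the model
name and schema are illustrative, and a running Ollama server is assumed):

```python
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_experimental.llms.ollama_functions import OllamaFunctions


class Person(BaseModel):
    name: str = Field(description="The person's name")
    age: int = Field(description="The person's age")


llm = OllamaFunctions(model="llama3", format="json")
structured_llm = llm.with_structured_output(Person)
structured_llm.invoke("Erick is 27 years old")  # -> Person(name="Erick", age=27)
```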

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-29 14:13:33 +00:00
Eugene Yurtsev
d781560722 cli[minor]: Add ipynb support, add text_splitters (#20963) 2024-04-29 10:11:21 -04:00
Vadym Barda
5e0b6b3e75 docs: update langserve link in LCEL docs (#20992) 2024-04-29 09:06:10 -04:00
Aditya
07ce39bfe7 docs: updated tutorials for Image generation and Vector Search (#21000)
Description: docs: updated tutorials for Image generation and Vector
Search

@lkuligin for review

---------

Co-authored-by: adityarane@google.com <adityarane@google.com>
2024-04-29 09:04:11 -04:00
Aditya
17bbb7d2a5 docs: updated tutorial for Gemini versions, included safety attribute updates (#21006)
Description:updated tutorial for Gemini versions, included safety
attribute updates

@lkuligin For review

---------

Co-authored-by: adityarane@google.com <adityarane@google.com>
2024-04-29 09:01:54 -04:00
WilliamEspegren
804390ba4b community: Spider integration (#20937)
Added the [Spider.cloud](https://spider.cloud) document loader.
[Spider](https://github.com/spider-rs/spider) is the
[fastest](https://github.com/spider-rs/spider/blob/main/benches/BENCHMARKS.md)
and cheapest crawler that returns LLM-ready data.

```
- **Description:** Adds Spider data loader
- **Dependencies:** spider-client
- **Twitter handle:** @WilliamEspegren 
```

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: = <=>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-04-27 21:45:03 +00:00
Jamie Lemon
6342217b93 docs: Moves "Using PyMuPDF" to higher up the page. (#20832)
**Description:**
This PR moves the **PyMuPDF** PDF loader solution to sit underneath
**PyPDF**, because it is the 2nd most popular PyPI package
after **PyPDF**.

Please refer to these numbers, at the time of writing as follows:

PyPDF
https://www.pepy.tech/projects/PyPDF2
160 million

PyMuPDF
https://www.pepy.tech/projects/pymupdf
60 million

PDFPlumber
https://www.pepy.tech/projects/pdfplumber
23 million

PDFMiner
https://www.pepy.tech/projects/pdfminer
16 million

PyPDFium2
https://www.pepy.tech/projects/pypdfium2
8 million

Unstructured
https://www.pepy.tech/projects/unstructured
8 million


Please note I am an active contributor to
https://github.com/pymupdf/PyMuPDF

Many thanks!

----

**Twitter handle:**
@artifex
2024-04-27 20:40:20 +00:00
Chouaieb Nemri
8097bec472 Added LogEntry, Any, Dict, List, Optional, TypedDict imports (#20970)
- [ ] **PR title**: "package: docs"

- [ ] **PR message**:
- **Description:** Updated docs: RAG streaming use-cases notebook with
LogEntry, Any, Dict, List, Optional, TypedDict imports
    - **Twitter handle:** c_nemri

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-04-27 20:13:54 +00:00
ccurme
9ec7151317 fireworks: fix integration tests (#20973) 2024-04-27 19:49:46 +00:00
William FH
9fa9f05e5d Catch System Error in ast parse (#20961)
I can't seem to reproduce it, but I got this:

```
SystemError: AST constructor recursion depth mismatch (before=102, after=37)
```

The operation isn't critical for the actual forward pass, so it seems
preferable to expand our caught exceptions
2024-04-26 19:31:55 -07:00
YH
2aca7fcdcf core[patch]: Enhance link extraction with query parameters (#20259)
**Description**: This update enhances the `extract_sub_links` function
within the `langchain_core/utils/html.py` module to include query
parameters in the extracted URLs.

**Issue**: N/A

**Dependencies**: No additional dependencies required for this change.

**Twitter handle**: N/A
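
A small hedged example of the new behavior (argument values are
illustrative):

```python
from langchain_core.utils.html import extract_sub_links

html = '<a href="/docs/page?lang=en">docs</a>'
links = extract_sub_links(html, "https://example.com/")
# with this change, the query string is preserved:
# ['https://example.com/docs/page?lang=en']
print(links)
```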

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-27 02:22:36 +00:00
CT
0e917e319b docs: Add langchainhub to pip install (#20185)
Added the langchainhub package to the pip install statement, which is
required for "from langchain import hub" to work.

Added sample code to set the OpenAI key.

Co-authored-by: Chi Yan Tang <100466443+poochiekittie@users.noreply.github.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-27 02:21:40 +00:00
Pamela Fox
45092a36a2 docs: Fix langgraph link (#20244)
Just a simple PR to fix a broken link. Apparently having backticks
outside a link makes it render as code.

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-27 02:18:52 +00:00
Chip Davis
e818c75f8a infra: test directory loader multithreaded (#20281)
This is a unit test for #20230 which was a fix for using multithreaded
mode with directory loader @eyurtsev
2024-04-26 19:16:47 -07:00
Guilherme Zanotelli
f931a9ce60 community[patch]: Pass kwargs to SPARQLStore from RdfGraph (#20385)
This introduces `store_kwargs` which behaves similarly to `graph_kwargs`
on the `RdfGraph` object, which will enable users to pass `headers` and
other arguments to the underlying `SPARQLStore` object. I have also made
a [PR in `rdflib` to support passing
`default_graph`](https://github.com/RDFLib/rdflib/pull/2761).

Example usage:
```python
from langchain_community.graphs import RdfGraph

graph = RdfGraph(
    query_endpoint="http://localhost/sparql",
    standard="rdf",
    store_kwargs=dict(
        default_graph="http://example.com/mygraph"
    )
)
```


---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-27 01:38:29 +00:00
Chandre Van Der Westhuizen
e57cf73cf5 docs: Added MindsDB provider (#20322)
MindsDB integrates with LangChain, enabling users to deploy, serve, and
fine-tune models available via LangChain within MindsDB, making them
accessible to numerous data sources.

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-27 01:36:08 +00:00
Jorge Piedrahita Ortiz
40b2e2916b community[minor]: Sambanova llm integration (#20955)
- **Description:** Added [Sambanova systems](https://sambanova.ai/)
integration, including sambaverse and sambastudio LLMs
- **Dependencies:**   sseclient-py  (optional)

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-27 01:05:13 +00:00
Rahul Triptahi
955cf186d2 community[patch]: Ingest source, owner and full_path if present in Document's metadata. (#20949)
Description: The PebbloSafeLoader should first check for owner,
full_path and size in metadata before implementing its own logic.
Dependencies: None
Documentation: NA.

Signed-off-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
Co-authored-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
2024-04-26 17:50:57 -07:00
Amine Djeghri
790ea75cf7 community[minor]: add exllamav2 library for GPTQ & EXL2 models (#17817)
Added 3 files : 
- Library : ExLlamaV2 
- Test integration
- Notebook

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-27 00:44:43 +00:00
Naveen Tatikonda
8bbdb4f6a0 community[patch]: Add OpenSearch as semantic cache (#20254)
### Description
Use OpenSearch vector store as Semantic Cache.
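
A hedged usage sketch (the class and constructor arguments follow the
description's intent and should be treated as assumptions rather than a
verified API):

```python
from langchain.globals import set_llm_cache
from langchain_community.cache import OpenSearchSemanticCache
from langchain_openai import OpenAIEmbeddings

set_llm_cache(
    OpenSearchSemanticCache(
        opensearch_url="http://localhost:9200",
        embedding=OpenAIEmbeddings(),
    )
)
```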

### Twitter Handle
**@OpenSearchProj**

---------

Signed-off-by: Naveen Tatikonda <navtat@amazon.com>
Co-authored-by: Harish Tatikonda <harishtatikonda@Harishs-MacBook-Air.local>
Co-authored-by: EC2 Default User <ec2-user@ip-172-31-31-155.ec2.internal>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-27 00:20:24 +00:00
Giacomo Berardi
61f14f00d7 docs: ElasticsearchCache in cache integrations documentation (#20790)
The package for LangChain integrations with Elasticsearch,
https://github.com/langchain-ai/langchain-elastic, is going to contain an
LLM cache integration in the next release (see
https://github.com/langchain-ai/langchain-elastic/pull/14). This is the
documentation contribution on the page dedicated to cache integrations.
2024-04-26 15:43:58 -07:00
Mayank Solanki
8c085fc697 community[patch]: Added a function from_existing_collection in Qdrant vector database. (#20779)
Issue: #20514
The current implementation of `construct_instance` expects a `texts:
List[str]` on which it will call the embedding function. This might not be
needed when we already have a client with a collection and `path`; you
don't want to add any text.

This PR adds a class method that returns a Qdrant instance with an
existing client.

Here, every time
cb6e5e56c2/libs/community/langchain_community/vectorstores/qdrant.py (L1592)
`construct_instance` is called, this line sends some text for embedding
generation.
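
A hedged sketch of using the new class method (argument names are
assumptions based on the description; a local on-disk collection is used as
the example):

```python
from langchain_community.vectorstores import Qdrant
from langchain_openai import OpenAIEmbeddings

# reuse an existing local collection without sending any text for embedding
qdrant = Qdrant.from_existing_collection(
    embedding=OpenAIEmbeddings(),
    path="/tmp/local_qdrant",
    collection_name="my_documents",
)
```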

---------

Co-authored-by: Anush <anushshetty90@gmail.com>
2024-04-26 15:34:09 -07:00
Leonid Kuligin
893a924b90 core[minor], community[patch], langchain[patch]: move BaseChatLoader to core (#19607)
- [ ] **PR title**: "core: move BaseChatLoader and BaseToolkit from
community"


- [ ] **PR message**: move BaseChatLoader and BaseToolkit

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-26 21:45:51 +00:00
Erick Friis
d4befd0cfb core: fix batch ordering test (#20952) 2024-04-26 21:17:26 +00:00
Eugene Yurtsev
8ed150b2fe cli[minor]: Fix bug to account for name changes (#20948)
* Fix bug to account for name changes / aliases
* Generate migration list from langchain to langchain_core
2024-04-26 15:45:11 -04:00
ccurme
989e4a92c2 (infra) pass input to test-release (#20947) 2024-04-26 15:17:40 -04:00
Eugene Yurtsev
2fa0ff1a2d cli[minor]: update code to generate migrations from langchain to community (#20946)
Updates code that generates migrations from langchain to community
2024-04-26 15:11:32 -04:00
Erick Friis
078c5d9bc6 infra: nonmaster release checkbox (#20945)
Co-authored-by: ccurme <chester.curme@gmail.com>
2024-04-26 14:50:07 -04:00
Leonid Kuligin
d4aec8fc8f docs: adding langchain_google_community to the docs (#20665)
- [ ] **PR title**: "docs: step1. adjusting langchain_community ->
langchain_google_community"


- **Description:** step1. adjusting langchain_community ->
langchain_google_community
2024-04-26 18:49:03 +00:00
ccurme
bf16cefd18 langchain: deprecate create_structured_output_runnable (#20933) 2024-04-26 14:00:40 -04:00
Erick Friis
38eccab3ae upstage: release 0.1.3 (#20941) 2024-04-26 10:36:11 -07:00
Sean
e1c2e2fdfa upstage: Upstage Groundedness Check parameter update (#20914)
* Groundedness Check takes `str` or `list[Document]` as input.

* Deprecate `GroundednessCheck` due to its naming.
* Added `UpstageGroundednessCheck`. 

* Hotfix for Groundedness Check parameter. 
  The name `query` was misleading and it should be `answer` instead.
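
A hedged usage sketch of the renamed tool (input keys follow the parameter
rename described above; an `UPSTAGE_API_KEY` is assumed to be set):

```python
from langchain_upstage import UpstageGroundednessCheck

groundedness_check = UpstageGroundednessCheck()
result = groundedness_check.invoke(
    {
        "context": "Mauna Kea is an inactive volcano on the island of Hawai'i.",
        "answer": "Mauna Kea is 5,207.3 meters tall.",
    }
)
print(result)  # e.g. "grounded" / "notGrounded" / "notSure"
```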

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-04-26 17:34:05 +00:00
ccurme
84b8e67c9c mistral: release 0.1.4 (#20940) 2024-04-26 13:06:02 -04:00
ccurme
465fbaa30b openai: release 0.1.4 (#20939) 2024-04-26 09:56:49 -07:00
Eugene Yurtsev
12c906f6ce cli[minor]: Improve partner migrations (#20938)
This auto generates partner migrations.

At the moment the migration is from community -> partner.

So one would need to run the migration script twice to go from langchain to partner.
2024-04-26 12:30:15 -04:00
Eugene Yurtsev
5653f36adc cli[minor]: Add script to generate migrations for partner packages (#20932)
Add script to help generate migrations.

This works well for partner packages. Migrations are generated based on run time rather than static analysis (much simpler to get the correct migrations implemented).

The script for generating migrations from langchain to community still needs work.
2024-04-26 11:17:20 -04:00
ccurme
fe1304afc4 openai: add unit test (#20931)
Test a helper function that was added earlier.
2024-04-26 15:02:19 +00:00
Eugene Yurtsev
6598757037 cli[minor]: Add first version of migrate (#20902)
Adds a first version of the migrate script.
2024-04-26 10:50:21 -04:00
Pengcheng Liu
d95e9fb67f docs: add tool calling example in Tongyi chat model integration. (#20925)
**Description:** add tool calling example in Tongyi chat model
integration.
  **Issue:** None
  **Dependencies:** None
2024-04-26 10:18:54 -04:00
Lei Zhang
9281841cfe community[patch]: fix integrated test case test_recursive_url_loader.py assertions (issue-20919) (#20920)
**Description:**
Fix integration test case assertions in test_recursive_url_loader.py.

Local testing was successful:

```shell
(venv) lei@LeideMacBook-Pro community % poetry run pytest tests/integration_tests/document_loaders/test_recursive_url_loader.py
================================================================================ test session starts ================================================================================
platform darwin -- Python 3.11.4, pytest-7.4.4, pluggy-1.4.0 -- /Users/zhanglei/Work/github/langchain/venv/bin/python
cachedir: .pytest_cache
rootdir: /Users/zhanglei/Work/github/langchain/libs/community
configfile: pyproject.toml
plugins: syrupy-4.6.1, asyncio-0.20.3, cov-4.1.0, vcr-1.0.2, mock-3.12.0, anyio-3.7.1, dotenv-0.5.2, requests-mock-1.11.0, socket-0.6.0
asyncio: mode=Mode.AUTO
collected 6 items                                                                                                                                                                   

tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_async_recursive_url_loader PASSED                                                                 [ 16%]
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_async_recursive_url_loader_deterministic PASSED                                                   [ 33%]
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_recursive_url_loader FAILED                                                                  [ 50%]
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_async_equivalent PASSED                                                                      [ 66%]
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_loading_invalid_url PASSED                                                                        [ 83%]
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_async_metadata_necessary_properties PASSED                                                   [100%]

===================================================================================== FAILURES ======================================================================================
__________________________________________________________________________ test_sync_recursive_url_loader ___________________________________________________________________________

    def test_sync_recursive_url_loader() -> None:
        url = "https://docs.python.org/3.9/"
        loader = RecursiveUrlLoader(
            url, extractor=lambda _: "placeholder", use_async=False, max_depth=2
        )
        docs = loader.load()
>       assert len(docs) == 23
E       AssertionError: assert 24 == 23
E        +  where 24 = len([Document(page_content='placeholder', metadata={'source': 'https://docs.python.org/3.9/', 'content_type': 'text/html', 'title': '3.9.18 Documentation', 'language': None}), Document(page_content='placeholder', metadata={'source': 'https://docs.python.org/3.9/py-modindex.html', 'content_type': 'text/html', 'title': 'Python Module Index — Python 3.9.18 documentation', 'language': None}), Document(page_content='placeholder', metadata={'source': 'https://docs.python.org/3.9/download.html', 'content_type': 'text/html', 'title': 'Download — Python 3.9.18 documentation', 'language': None}), Document(page_content='placeholder', metadata={'source': 'https://docs.python.org/3.9/howto/index.html', 'content_type': 'text/html', 'title': 'Python HOWTOs — Python 3.9.18 documentation', 'language': None}), Document(page_content='placeholder', metadata={'source': 'https://docs.python.org/3.9/whatsnew/index.html', 'content_type': 'text/html', 'title': 'Whatâ\x80\x99s New in Python — Python 3.9.18 documentation', 'language': None}), Document(page_content='placeholder', metadata={'source': 'https://docs.python.org/3.9/c-api/index.html', 'content_type': 'text/html', 'title': 'Python/C API Reference Manual — Python 3.9.18 documentation', 'language': None}), ...])

tests/integration_tests/document_loaders/test_recursive_url_loader.py:38: AssertionError
================================================================================= warnings summary ==================================================================================
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_async_recursive_url_loader
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_async_recursive_url_loader_deterministic
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_recursive_url_loader
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_async_equivalent
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_async_metadata_necessary_properties
  /Users/zhanglei/.pyenv/versions/3.11.4/lib/python3.11/html/parser.py:170: XMLParsedAsHTMLWarning: It looks like you're parsing an XML document using an HTML parser. If this really is an HTML document (maybe it's XHTML?), you can ignore or filter this warning. If it's XML, you should know that using an XML parser will be more reliable. To parse this document as XML, make sure you have the lxml package installed, and pass the keyword argument `features="xml"` into the BeautifulSoup constructor.
    k = self.parse_starttag(i)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
================================================================================ slowest 5 durations ================================================================================
56.75s call     tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_async_recursive_url_loader_deterministic
38.99s call     tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_async_recursive_url_loader
31.20s call     tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_async_metadata_necessary_properties
30.37s call     tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_async_equivalent
15.44s call     tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_recursive_url_loader
============================================================================== short test summary info ==============================================================================
FAILED tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_recursive_url_loader - AssertionError: assert 24 == 23
================================================================ 1 failed, 5 passed, 5 warnings in 172.97s (0:02:52) ================================================================
(venv) zhanglei@LeideMacBook-Pro community % poetry run pytest tests/integration_tests/document_loaders/test_recursive_url_loader.py
================================================================================ test session starts ================================================================================
platform darwin -- Python 3.11.4, pytest-7.4.4, pluggy-1.4.0 -- /Users/zhanglei/Work/github/langchain/venv/bin/python
cachedir: .pytest_cache
rootdir: /Users/zhanglei/Work/github/langchain/libs/community
configfile: pyproject.toml
plugins: syrupy-4.6.1, asyncio-0.20.3, cov-4.1.0, vcr-1.0.2, mock-3.12.0, anyio-3.7.1, dotenv-0.5.2, requests-mock-1.11.0, socket-0.6.0
asyncio: mode=Mode.AUTO
collected 6 items                                                                                                                                                                   

tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_async_recursive_url_loader PASSED                                                                 [ 16%]
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_async_recursive_url_loader_deterministic PASSED                                                   [ 33%]
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_recursive_url_loader PASSED                                                                  [ 50%]
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_async_equivalent PASSED                                                                      [ 66%]
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_loading_invalid_url PASSED                                                                        [ 83%]
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_async_metadata_necessary_properties PASSED                                                   [100%]

================================================================================= warnings summary ==================================================================================
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_async_recursive_url_loader
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_async_recursive_url_loader_deterministic
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_recursive_url_loader
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_async_equivalent
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_async_metadata_necessary_properties
  /Users/zhanglei/.pyenv/versions/3.11.4/lib/python3.11/html/parser.py:170: XMLParsedAsHTMLWarning: It looks like you're parsing an XML document using an HTML parser. If this really is an HTML document (maybe it's XHTML?), you can ignore or filter this warning. If it's XML, you should know that using an XML parser will be more reliable. To parse this document as XML, make sure you have the lxml package installed, and pass the keyword argument `features="xml"` into the BeautifulSoup constructor.
    k = self.parse_starttag(i)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
================================================================================ slowest 5 durations ================================================================================
46.99s call     tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_async_recursive_url_loader_deterministic
32.43s call     tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_async_recursive_url_loader
31.23s call     tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_async_equivalent
30.75s call     tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_async_metadata_necessary_properties
15.89s call     tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_recursive_url_loader
===================================================================== 6 passed, 5 warnings in 157.42s (0:02:37) =====================================================================
(venv) lei@LeideMacBook-Pro community % 
```

**Issue:** https://github.com/langchain-ai/langchain/issues/20919

**Twitter handle:** @coolbeevip
2024-04-26 10:00:08 -04:00
ccurme
7d8d0229fa remove placeholder error message (#20340) 2024-04-26 13:48:48 +00:00
William FH
4c437ebb9c Use lstv2 (#20747) 2024-04-25 16:51:42 -07:00
ccurme
891ae37437 langchain: support PineconeVectorStore in self query retriever (#20905)
`langchain_pinecone.Pinecone` is deprecated in favor of
`PineconeVectorStore`, and is currently a subclass of
`PineconeVectorStore`.
```python
@deprecated(since="0.0.3", removal="0.2.0", alternative="PineconeVectorStore")
class Pinecone(PineconeVectorStore):
    """Deprecated. Use PineconeVectorStore instead."""

    pass
```
2024-04-25 20:54:58 +00:00
Matt
28df4750ef community[patch]: Add initial tests for AzureSearch vector store (#17663)
**Description:** AzureSearch vector store has no tests. This PR adds
initial tests to validate the code can be imported and used.
**Issue:** N/A
**Dependencies:** azure-search-documents and azure-identity are added as
optional dependencies for testing

---------

Co-authored-by: Matt Gotteiner <[email protected]>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-25 20:42:01 +00:00
Dristy Srivastava
5f1d1666e3 community[patch]: Add support for pebblo server and client version (#20269)
**Description**:
_PebbloSafeLoader_: Add support for pebblo server and client version


**Documentation:** NA
**Unit test:** NA
**Issue:** NA
**Dependencies:**  None

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-25 20:39:17 +00:00
am-kinetica
b54b19ba1c community[minor]: Implemented Kinetica Document Loader and added notebooks (#20002)
- [ ] **Kinetica Document Loader**: "community: a class to load
Documents from Kinetica"
    - **Description:** implemented KineticaLoader in `kinetica_loader.py`
    - **Dependencies:** install the Kinetica API using `pip install
gpudb==7.2.0.1`
2024-04-25 13:39:00 -07:00
Michael Schock
5e60d65917 experimental[patch]: return from HuggingGPT task executor task.run() exception (#20219)
**Description:** Fixes a bug in the HuggingGPT task execution logic
here:

      except Exception as e:
          self.status = "failed"
          self.message = str(e)
      self.status = "completed"
      self.save_product()

where a caught exception effectively just sets `self.message` and can
then throw an exception if, e.g., `self.product` is not defined.

**Issue:** None that I'm aware of.
**Dependencies:** None
**Twitter handle:** https://twitter.com/michaeljschock

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-25 20:16:39 +00:00
Anish Chakraborty
898362de81 core[patch]: improve comma separated list output parser to handle non-space separated list (#20434)
- **Description:** Changes
`langchain_core.output_parsers.CommaSeparatedListOutputParser` to handle
`,` as a delimiter alongside the previous implementation, which used `, `
as the delimiter (a minimal sketch follows this list).
- **Issue:** Started noticing that some results returned by LLMs were
not getting parsed correctly when the output contained `,` instead of
`, `.
  - **Dependencies:** No
  - **Twitter handle:** not active on twitter.
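
A minimal sketch of the behavior change described above:

```python
from langchain_core.output_parsers import CommaSeparatedListOutputParser

parser = CommaSeparatedListOutputParser()
# previously only ", " was split on; now a bare "," also works
print(parser.parse("red,green, blue"))  # -> ['red', 'green', 'blue']
```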


2024-04-25 20:10:56 +00:00
Michael Schock
63a07f52df experimental[patch]: remove \n from AutoGPT feedback_tool exit check (#20132) 2024-04-25 20:10:33 +00:00
Shengsheng Huang
fd1061e7bf community[patch]: add more data types support to ipex-llm llm integration (#20833)
- **Description**:  
- **add support for more data types**: by default `IpexLLM` will load
the model in int4 format. This PR adds support for more data types such as
`sym_int5`, `sym_int8`, etc. Data formats like NF3, NF4, FP4 and FP8 are
only supported on GPU and will be added in a future PR.
    - Fix a small issue in saving/loading, update api docs
- **Dependencies**: `ipex-llm` library
- **Document**: In `docs/docs/integrations/llms/ipex_llm.ipynb`, added
instructions for saving/loading low-bit model.
- **Tests**: added new test cases to
`libs/community/tests/integration_tests/llms/test_ipex_llm.py`, added
config params.
- **Contribution maintainer**: @shane-huang
2024-04-25 12:58:18 -07:00
Rahul Triptahi
dc921f0823 community[patch]: Add semantic info to metadata, classified by pebblo-server. (#20468)
Description: Add support for Semantic topics and entities.
Classification done by pebblo-server is now used to enhance the metadata of
Documents loaded by document loaders.
Dependencies: None
Documentation: Updated.

Signed-off-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
Co-authored-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
2024-04-25 12:55:33 -07:00
Eugene Yurtsev
a5028b6356 cli[minor]: Add __version__ (#20903)
Add __version__ to cli
2024-04-25 15:51:33 -04:00
Jingpan Xiong
1202017c56 community[minor]: Add relyt vector database (#20316)
Co-authored-by: kaka <kaka@zbyte-inc.cloud>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: jingsi <jingsi@leadincloud.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-25 19:49:29 +00:00
davidefantiniIntel
f386f71bb3 community: fix tqdm import (#20263)
Description: Fix tqdm import in QuantizedBiEncoderEmbeddings
2024-04-25 19:44:53 +00:00
Andres Algaba
05ae8ca7d4 community[patch]: deprecate persist method in Chroma (#20855)
- [x] **PR title**

- [x] **PR message**:
- **Description:** Deprecate the persist method in Chroma, as it no longer
exists in Chroma 0.4.x
    - **Issue:** #20851 
    - **Dependencies:** None
    - **Twitter handle:** AndresAlgaba1


---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-25 19:42:03 +00:00
ccurme
fdabd3cdf5 mistral, openai: support custom tokenizers in chat models (#20901) 2024-04-25 15:23:29 -04:00
ccurme
6986e44959 docs: update chat model feature table (#20899) 2024-04-25 15:05:43 -04:00
ccurme
b8db73233c core, community: deprecate tool.__call__ (#20900)
Does not update docs.
2024-04-25 14:50:39 -04:00
merdan
52896258ee docs: hide model import in multiple_tools.ipynb (#20883)
**Description:** 
This PR removes an unnecessary code snippet from the documentation. The
snippet in question is not relevant to the content and does not
contribute to the overall understanding of the topic. It contained
redundant imports and unused code, potentially causing confusion for
readers.

**Issue:** 
There is no specific issue number associated with this change.

**Dependencies:** 
No additional dependencies are required for this change.

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-25 18:47:22 +00:00
Tomaz Bratanic
520972fd0f community[patch]: Support passing graph object to Neo4j integrations (#20876)
For driver connection reusage, we introduce passing the graph object to
neo4j integrations
2024-04-25 11:30:22 -07:00
Lei Zhang
748a6ae609 community[patch]: add HTTP response headers Content-Type to metadata of RecursiveUrlLoader document (#20875)
**Description:** 
The RecursiveUrlLoader offers a link_regex parameter that can
filter out URLs. However, this filtering capability is limited, and if
the site's internal links change, unexpected resources may be
loaded. These resources, such as font files, can cause problems in
subsequent embedding processing.

>
https://blog.langchain.dev/assets/fonts/source-sans-pro-v21-latin-ext_latin-regular.woff2?v=0312715cbf

We can add the Content-Type from the HTTP response headers to the document
metadata so developers can choose which resources to use (as sketched
below).

For example, the following may be a good choice for text knowledge.

- text/plain - simple text file
- text/html - HTML web page
- text/xml - XML format file
- text/json - JSON format data
- application/pdf - PDF file
- application/msword - Word document

and ignore the following

- text/css - CSS stylesheet
- text/javascript - JavaScript script
- application/octet-stream - binary data
- image/jpeg - JPEG image
- image/png - PNG image
- image/gif - GIF image
- image/svg+xml - SVG image
- audio/mpeg - MPEG audio files
- video/mp4 - MP4 video file
- application/font-woff - WOFF font file
- application/font-ttf - TTF font file
- application/zip - ZIP compressed file
- application/octet-stream - binary data

**Twitter handle:** @coolbeevip
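A minimal sketch of the kind of filtering this enables; the `content_type` metadata key is an assumption about how the response header is surfaced:

```python
from langchain_community.document_loaders import RecursiveUrlLoader

loader = RecursiveUrlLoader("https://docs.python.org/3.9/", max_depth=2)
docs = loader.load()

# Keep only textual resources and PDFs; skip fonts, images, scripts, etc.
wanted = ("text/plain", "text/html", "text/xml", "application/pdf")
text_docs = [
    d for d in docs
    if d.metadata.get("content_type", "").startswith(wanted)
]
print(len(text_docs), "of", len(docs), "documents kept")
```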

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-25 11:29:41 -07:00
samanhappy
37cbbc00a9 docs: Fix broken link in agents.ipynb (#20872) 2024-04-25 10:42:06 -07:00
fzowl
a6b8ff23bd docs: Use voyage-law-2 in the examples (#20784)
Thank you for contributing to LangChain!

- [x] **PR title**: "package: description"
- Where "package" is whichever of langchain, community, core,
experimental, etc. is being modified. Use "docs: ..." for purely docs
changes, "templates: ..." for template changes, "infra: ..." for CI
changes.
  - Example: "community: add foobar LLM"


**Description:** In VoyageAI text-embedding examples use voyage-law-2
model


- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

2024-04-25 10:41:36 -07:00
Erick Friis
eca3640af7 upstage: release 0.1.2 (#20898) 2024-04-25 10:41:19 -07:00
Pavlo Paliychuk
82b5bdc7a1 docs: Fix misplaced zep cloud example links (#20867)
Thank you for contributing to LangChain!

- [x] **PR title**: Fix misplaced zep cloud example links
- [x] **PR message**: 
- **Description:** Fixes misplaced links for vector store and memory zep
cloud examples

- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

2024-04-25 10:41:08 -07:00
Joan Fontanals
baefbfb14e community[minor]: add Jina Reranker in retrievers module (#19406)
- **Description:** Adapt JinaEmbeddings to run with the new Jina AI
Rerank API
- **Twitter handle:** https://twitter.com/JinaAI_


- [ ] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [ ] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-25 10:27:10 -07:00
Erick Friis
92969d49cb multiple: remove external repo mds (#20896)
api docs build doesn't tolerate them
2024-04-25 17:18:29 +00:00
Jason_Chen
53bb7dbd29 community[patch]: add BeautifulSoupTransformer remove_unwanted_classnames method (#20467)
Add the remove_unwanted_classnames method to the
BeautifulSoupTransformer class, which can filter more effectively.

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-25 17:04:04 +00:00
YISH
ed26149a29 openai[patch]: Allow disabling safe_len_embeddings (OpenAIEmbeddings) (#19743)
An OpenAI-API-compatible server may not support `safe_len_embedding`;
use `disable_safe_len_embeddings=True` to disable it.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-25 09:45:52 -07:00
Bagatur
5b83130855 core[minor], langchain[patch], community[patch]: mv StructuredQuery (#20849)
mv StructuredQuery to core
2024-04-25 09:40:26 -07:00
Sean
540f384197 partner: Upstage quick documentation update (#20869)
* Updating the provider docs page. 
The RAG example was meant to be moved to cookbook, but was merged by
mistake.

* Fix bug in Groundedness Check

---------

Co-authored-by: JuHyung-Son <sonju0427@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-04-25 16:36:54 +00:00
Bagatur
ffad3985a1 core[patch]: Release 0.1.46 (#20891) 2024-04-25 15:40:17 +00:00
Mish Ushakov
6ccecf2363 community[minor]: added Browserbase loader (#20478) 2024-04-25 01:11:03 +00:00
aditya thomas
9e694963a4 docs: custom callback handlers page (#20494)
**Description:** Update to the Callbacks page on custom callback
handlers
**Issue:** #20493 
**Dependencies:** None

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-25 01:08:36 +00:00
Erick Friis
5da9dd1195 mistral: comment batching param (#20868)
Addresses #20523
2024-04-25 00:38:21 +00:00
Ivaylo Bratoev
7c5063ef60 infra: fix how Poetry is installed in the dev container (#20521)
Currently, when a new dev container is created, poetry does not work in
it with the error "No module named 'rapidfuzz'".

Install Poetry outside the project venv so that poetry and project
dependencies do not get mixed. Use pipx to install poetry securely in
its own isolated environment.

Issue: #12237

Twitter handle: https://twitter.com/ibratoev

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-24 17:33:25 -07:00
GustavoSept
c2d09a5186 experimental[patch]: Makes regex customizable in text_splitter.py (SemanticChunker class) (#20485)
- **Description:** Currently, the regex is static (`r"(?<=[.?!])\s+"`),
which is only useful for certain use cases. This change moves it to be a
parameter of split_text(), which adds flexibility without making the API
more complex (the default regex is still the same); see the sketch below.
- **Issue:** Not applicable (I searched, no one seems to have created
this issue yet).
  - **Dependencies:** None.
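A sketch under the assumption that the regex is exposed as a keyword argument of `split_text()` as described above; the exact keyword name may differ in the merged API:

```python
from langchain_experimental.text_splitter import SemanticChunker
from langchain_openai import OpenAIEmbeddings

splitter = SemanticChunker(OpenAIEmbeddings())
text = "First sentence. Second sentence; a clause. Third sentence."

# Default behavior: sentences end at ".", "?" or "!" followed by whitespace.
default_chunks = splitter.split_text(text)

# With this change the sentence-splitting regex becomes configurable,
# e.g. also splitting on semicolons (keyword name is an assumption).
custom_chunks = splitter.split_text(text, regex=r"(?<=[.;?!])\s+")
```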



---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-25 00:32:40 +00:00
William FH
a936f696a6 [Core] Feat: update config CVar in tool.invoke (#20808) 2024-04-24 17:17:21 -07:00
Lei Zhang
2cd907ad7e text-splitters[patch]: fix MarkdownHeaderTextSplitter fails to parse headers with non-printable characters (#20645)
Description: MarkdownHeaderTextSplitter fails to parse headers with
non-printable characters. See #20643 for more.

The following is the official test case. Just replacing `# Foo\n\n` with
`\ufeff# Foo\n\n` causes it to fail: the chunk metadata comes back empty.

```python
def test_md_header_text_splitter_1() -> None:
    """Test markdown splitter by header: Case 1."""

    markdown_document = (
        "\ufeff# Foo\n\n"
        "    ## Bar\n\n"
        "Hi this is Jim\n\n"
        "Hi this is Joe\n\n"
        " ## Baz\n\n"
        " Hi this is Molly"
    )
    headers_to_split_on = [
        ("#", "Header 1"),
        ("##", "Header 2"),
    ]
    markdown_splitter = MarkdownHeaderTextSplitter(
        headers_to_split_on=headers_to_split_on,
    )
    output = markdown_splitter.split_text(markdown_document)
    expected_output = [
        Document(
            page_content="Hi this is Jim  \nHi this is Joe",
            metadata={"Header 1": "Foo", "Header 2": "Bar"},
        ),
        Document(
            page_content="Hi this is Molly",
            metadata={"Header 1": "Foo", "Header 2": "Baz"},
        ),
    ]
    assert output == expected_output
```

twitter: @coolbeevip

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-25 00:07:42 +00:00
jtanios
2968f20970 docs: git dependency name correction (#20662)
This PR corrects the name of the `git` python package to `GitPython`.

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-24 23:43:44 +00:00
ccurme
481d3855dc patch: remove usage of llm, chat model __call__ (#20788)
- `llm(prompt)` -> `llm.invoke(prompt)`
- `llm(prompt=prompt` -> `llm.invoke(prompt)` (same with `messages=`)
- `llm(prompt, callbacks=callbacks)` -> `llm.invoke(prompt,
config={"callbacks": callbacks})`
- `llm(prompt, **kwargs)` -> `llm.invoke(prompt, **kwargs)`
2024-04-24 19:39:23 -04:00
Raghav Dixit
9b7fb381a4 community[patch]: LanceDB integration patch update (#20686)
Description:

- added functionality: delete, index creation, using an existing
connection object, etc.
- updated usage
- added LanceDB Cloud / OSS support

make lint_diff and make test checks done
2024-04-24 16:27:43 -07:00
Nikita Pokidyshev
9e983c9500 langchain[patch]: fix agent_token_buffer_memory not working with openai tools (#20708)
- **Description:** fix a bug in the agent_token_buffer_memory
- **Issue:** agent_token_buffer_memory was not working with openai tools
- **Dependencies:** None
- **Twitter handle:** @pokidyshef
2024-04-24 15:51:58 -07:00
Salika Dave
6353991498 docs: [Retrieval > .. > PDF] update package installation instructions for Unstructured and PDFMiner (#20723)
**Description:** Adds the command to install packages required before
using _Unstructured_ and _PDFMiner_ from `langchain.community`
**Documentation Page Being Updated:** [LangChain > Retrieval > Document
loaders > PDF > Using
Unstructured](https://python.langchain.com/docs/modules/data_connection/document_loaders/pdf/#using-unstructured)
**Issue:** #20719 
**Dependencies:** no dependencies
**Twitter handle:** SalikaDave


---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-24 22:24:11 +00:00
dpdjvhxm
a9e2e98708 docs: Update apache_age.ipynb (#20722)
typo
2024-04-24 22:18:59 +00:00
Erick Friis
1aef8116de upstage: release 0.1.1 (#20864) 2024-04-24 15:18:30 -07:00
junkeon
c8fd51e8c8 upstage: Add Upstage partner package LA and GC (#20651)
---------

Co-authored-by: Sean <chosh0615@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Sean Cho <sean@upstage.ai>
2024-04-24 15:17:20 -07:00
hsmtkk
5ecebf168c docs: imported List is not used (#20720)
# Description

Minor sample code fix

# Issue

Imported `List` is not used.

# Dependencies

N/A

# Twitter handle

N/A
2024-04-24 15:17:07 -07:00
Alex Lee
243ba71b28 langchain[patch]: add aprep_output method to langchain/chains/base.py (#20748)
## Description

Add `aprep_output` method to `langchain/chains/base.py`. Some downstream
`ChatMessageHistory` objects that use async connections require an async
way to append to the context.

It turned out that `ainvoke()` was calling `prep_output` which is
synchronous.

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-24 22:16:25 +00:00
Harrison Chase
43c041cda5 support messages in messages out (#20862) 2024-04-24 14:58:58 -07:00
back2nix
a1614b88ac groq[patch]: groq proxy support (#20758)
# Proxy Fix for Groq Class 🐛 🚀

## Description
This PR fixes a bug related to proxy settings in the `Groq` class,
allowing users to connect to the Groq API via a proxy.

## Changes Made
-  FIX support for specifying proxy settings in the `Groq` class.
-  Resolved the bug causing issues with proxy settings.
-  Did not include unit tests and documentation updates.
-  Did not run make format, make lint, and make test to verify code
quality and functionality, because I couldn't get them to run; I don't
program in Python and couldn't run `ruff`.
-  Ensured that the changes are backwards compatible.
-  No additional dependencies were added to `pyproject.toml`.

### Error Before Fix
```python
Traceback (most recent call last):
  File "/home/bg/Documents/code/github.com/back2nix/test/groq/main.py", line 9, in <module>
    chat = ChatGroq(
           ^^^^^^^^^
  File "/home/bg/Documents/code/github.com/back2nix/test/groq/venv310/lib/python3.11/site-packages/langchain_core/load/serializable.py", line 120, in __init__
    super().__init__(**kwargs)
  File "/home/bg/Documents/code/github.com/back2nix/test/groq/venv310/lib/python3.11/site-packages/pydantic/v1/main.py", line 341, in __init__
    raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for ChatGroq
__root__
  Invalid `http_client` argument; Expected an instance of `httpx.AsyncClient` but got <class 'httpx.Client'> (type=type_error)
  ```
  
### Example usage after fix
  ```python3
import os

import httpx
from langchain_core.prompts import ChatPromptTemplate
from langchain_groq import ChatGroq

chat = ChatGroq(
    temperature=0,
    groq_api_key=os.environ.get("GROQ_API_KEY"),
    model_name="mixtral-8x7b-32768",
    http_client=httpx.Client(
        proxies="socks5://127.0.0.1:1080",
        transport=httpx.HTTPTransport(local_address="0.0.0.0"),
    ),
    http_async_client=httpx.AsyncClient(
        proxies="socks5://127.0.0.1:1080",
        transport=httpx.HTTPTransport(local_address="0.0.0.0"),
    ),
)

system = "You are a helpful assistant."
human = "{text}"
prompt = ChatPromptTemplate.from_messages([("system", system), ("human", human)])

chain = prompt | chat
out = chain.invoke({"text": "Explain the importance of low latency LLMs"})

print(out)
```

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-24 21:58:03 +00:00
volodymyr-memsql
493afe4d8d community[patch]: add hybrid search to singlestoredb vectorstore (#20793)
Implemented the ability to enable full-text search within the
SingleStore vector store, offering users a versatile range of search
strategies. This enhancement allows users to seamlessly combine
full-text search with vector search, enabling the following search
strategies:

* Search solely by vector similarity.
* Conduct searches exclusively based on text similarity, utilizing
Lucene internally.
* Filter search results by text similarity score, with the option to
specify a threshold, followed by a search based on vector similarity.
* Filter results by vector similarity score before conducting a search
based on text similarity.
* Perform searches using a weighted sum of vector and text similarity
scores.

Additionally, integration tests have been added to comprehensively cover
all scenarios.
Updated notebook with examples.
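A hedged sketch of selecting one of these strategies; the constructor flag and `SearchStrategy` names below are assumptions based on this description, not verified API:

```python
from langchain_community.vectorstores import SingleStoreDB
from langchain_openai import OpenAIEmbeddings

# Assumes SINGLESTOREDB_URL is set in the environment for the connection.
store = SingleStoreDB(
    OpenAIEmbeddings(),
    table_name="notebook",
    use_full_text_search=True,  # assumption: enables the Lucene-backed index
)

docs = store.similarity_search(
    "deep learning frameworks",
    search_strategy=SingleStoreDB.SearchStrategy.WEIGHTED_SUM,  # assumption
)
```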

CC: @baskaryan, @hwchase17

---------

Co-authored-by: Volodymyr Tkachuk <vtkachuk-ua@singlestore.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-24 21:34:50 +00:00
Tomaz Bratanic
9efab3ed66 community[patch]: Add driver config param for neo4j graph (#20772)
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-24 21:14:41 +00:00
Leonid Ganeline
13751c3297 community: tigergraph fixes (#20034)
- added a guard on the `pyTigerGraph` import
- added a missing example page in `docs/integrations/graphs/`
- formatted the `docs/integrations/providers/` page to a consistent
format. Added links.
2024-04-24 16:49:21 -04:00
Martin Kolb
0186e4e633 community[patch]: Advanced filtering for HANA Cloud Vector Engine (#20821)
- **Description:**
This PR adds support for advanced filtering to the integration of HANA
Vector Engine.
The newly supported filtering operators are: $eq, $ne, $gt, $gte, $lt,
$lte, $between, $in, $nin, $like, $and, $or

  - **Issue:** N/A
  - **Dependencies:** no new dependencies added

Added integration tests to:
`libs/community/tests/integration_tests/vectorstores/test_hanavector.py`

Description of the new capabilities in notebook:
`docs/docs/integrations/vectorstores/hanavector.ipynb`
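A minimal sketch of the new filter syntax against HanaDB; connection details are placeholders:

```python
from hdbcli import dbapi
from langchain_community.vectorstores.hanavector import HanaDB
from langchain_openai import OpenAIEmbeddings

connection = dbapi.connect(
    address="<hana-host>", port=443, user="<user>", password="<password>"
)
db = HanaDB(
    embedding=OpenAIEmbeddings(),
    connection=connection,
    table_name="STATE_OF_THE_UNION",
)

# Combine the newly supported operators in a single filter.
docs = db.similarity_search(
    "What about energy policy?",
    k=4,
    filter={
        "$and": [
            {"year": {"$gte": 2020}},
            {"topic": {"$in": ["energy", "climate"]}},
        ]
    },
)
```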
2024-04-24 13:47:27 -07:00
Alex Sherstinsky
12e5ec6de3 community: Support both Predibase SDK-v1 and SDK-v2 in Predibase-LangChain integration (#20859) 2024-04-24 13:31:01 -07:00
Erick Friis
8c95ac3145 docs, multiple: de-beta with_structured_output (#20850) 2024-04-24 19:34:57 +00:00
Nuno Campos
477eb1745c Better support for subgraphs in graph viz (#20840) 2024-04-24 12:32:52 -07:00
aditya thomas
a9c7d47c03 docs: update openai llm documentation (#20827)
**Description:** Bring OpenAI LLM page to the LCEL era
**Issue:** See discussion #20810
**Dependencies:** None
2024-04-24 12:26:57 -07:00
JeffKatzy
5ab3f9a995 community[patch]: standardize chat init args (#20844)
Thank you for contributing to LangChain!

community:perplexity[patch]: standardize init args

updated pplx_api_key and request_timeout so they are aliased to api_key
and timeout, respectively. Added tests that both continue to set the same
underlying attributes.

Related to
[20085](https://github.com/langchain-ai/langchain/issues/20085)

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-24 12:26:05 -07:00
Pavlo Paliychuk
70ae59bcfe docs: Update Zep Messaging, add links to Zep Cloud Docs (#20848)
Thank you for contributing to LangChain!

- [x] **PR title**: docs: Update Zep Messaging, add links to Zep Cloud
Docs

- [x] **PR message**: 
- **Description:** This PR updates Zep messaging in the docs + links to
Langchain Zep Cloud examples in our documentation
    - **Twitter handle:** @paulpaliychuk51


- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

2024-04-24 19:14:54 +00:00
Massimiliano Pronesti
8d1167b32f community[patch]: add support for similarity_score_threshold search in… (#20852)
See
https://github.com/langchain-ai/langchain/issues/20600#issuecomment-2075569338
for details.

@chrislrobert
2024-04-24 19:14:33 +00:00
Bagatur
87d31a3ec0 docs: contributing note (#20843) 2024-04-24 10:41:19 -07:00
Eugene Yurtsev
d8aa72f51d core[minor],langchain[patch]: Move base indexing interface and logic to core (#20667)
This PR moves the interface and the logic to core.

The following changes to namespaces:


`indexes` -> `indexing`
`indexes._api` -> `indexing.api`


Testing code is intentionally duplicated for now since it's testing
different
implementations of the record manager (in-memory vs. SQL).

Common logic will need to be pulled out into the test client.


A follow up PR will move the SQL based implementation outside of
LangChain.
2024-04-24 13:18:42 -04:00
ccurme
3bcfbcc871 groq: handle null queue_time (#20839) 2024-04-24 09:50:09 -07:00
Eugene Yurtsev
30e48c9878 core[patch],community[patch]: Move file chat history back to community (#20834)
Marking as patch since we haven't had releases in between. This just reverting part of a PR from yesterday.
2024-04-24 12:47:25 -04:00
ccurme
6debadaa70 groq: bump core (#20838) 2024-04-24 11:51:46 -04:00
Erick Friis
7984206c95 groq: release 0.1.3 (#20836)
Fixes #20811
2024-04-24 08:06:06 -07:00
Nestor Qin
9111d3a636 community[patch]: Fix message formatting for Anthropic models on Amazon Bedrock (#20801)
**Description:**
This PR fixes an issue in message formatting function for Anthropic
models on Amazon Bedrock.

Currently, the LangChain BedrockChat model will crash if it uses Anthropic
models and the model returns a message of the following type:
- `AIMessageChunk`

Moreover, when BedrockChat is used for building an Agent, the following
message types will trigger the same issue too:
- `HumanMessageChunk`
- `FunctionMessage`

**Issue:**
https://github.com/langchain-ai/langchain/issues/18831

**Dependencies:**
No.

**Testing:**
Manually tested. The following code was failing before the patch and
works after.

```
@tool
def square_root(x: str):
    "Useful when you need to calculate the square root of a number"
    return math.sqrt(int(x))

llm = ChatBedrock(
    model_id="anthropic.claude-3-sonnet-20240229-v1:0",
    model_kwargs={ "temperature": 0.0 },
)

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", FUNCTION_CALL_PROMPT),
        ("human", "Question: {user_input}"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ]
)

tools = [square_root]
tools_string = format_tool_to_anthropic_function(square_root)

agent = (
        RunnablePassthrough.assign(
            user_input=lambda x: x['user_input'],
            agent_scratchpad=lambda x: format_to_openai_function_messages(
                x["intermediate_steps"]
            )
        )
        | prompt
        | llm
        | AnthropicFunctionsAgentOutputParser()
)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True, return_intermediate_steps=True)
output = agent_executor.invoke({
    "user_input": "What is the square root of 2?",
    "tools_string": tools_string,
})
```
List of messages returned from Bedrock:
```
<SystemMessage> content='You are a helpful assistant.'
<HumanMessage> content='Question: What is the square root of 2?'
<AIMessageChunk> content="Okay, let's calculate the square root of 2.<scratchpad>\nTo calculate the square root of a number, I can use the square_root tool:\n\n<function_calls>\n  <invoke>\n    <tool_name>square_root</tool_name>\n    <parameters>\n      <__arg1>2</__arg1>\n    </parameters>\n  </invoke>\n</function_calls>\n</scratchpad>\n\n<function_results>\n<search_result>\nThe square root of 2 is approximately 1.414213562373095\n</search_result>\n</function_results>\n\n<answer>\nThe square root of 2 is approximately 1.414213562373095\n</answer>" id='run-92363df7-eff6-4849-bbba-fa16a1b2988c'"
<FunctionMessage> content='1.4142135623730951' name='square_root'
```
2024-04-23 22:40:39 +00:00
ccurme
06b04b80b8 groq: fix warning filter for integration test (#20806) 2024-04-23 18:11:41 -04:00
ccurme
5a3c65a756 standard tests: add xfails (#20659) 2024-04-23 17:14:16 -04:00
Erick Friis
ddc2274aea standard-tests: split tool calling test (#20803)
just making it a bit easier to grok
2024-04-23 20:59:45 +00:00
ccurme
6622829c67 mistral: catch GatedRepoError, release 0.1.3 (#20802)
https://github.com/langchain-ai/langchain/issues/20618

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-04-23 20:56:42 +00:00
Eugene Yurtsev
a7c347ab35 langchain[patch]: Update evaluation logic that instantiates a default LLM (#20760)
Favor langchain_openai over langchain_community for evaluation logic.

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-04-23 16:09:32 -04:00
Eugene Yurtsev
72f720fa38 langchain[major]: Remove default instantations of LLMs from VectorstoreToolkit (#20794)
Remove default instantiation from vectorstore toolkit.
2024-04-23 16:09:14 -04:00
ccurme
42de5168b1 langchain: deprecate LLMChain, RetrievalQA, and ConversationalRetrievalChain (#20751) 2024-04-23 15:55:34 -04:00
Erick Friis
30c7951505 core: use qualname in beta message (#20361) 2024-04-23 11:20:13 -07:00
Aliaksandr Kuzmik
5560cc448c community[patch]: fix CometTracer bug (#20796)
Hi! My name is Alex, I'm an SDK engineer from
[Comet](https://www.comet.com/site/)

This PR updates the `CometTracer` class.

Fixed an issue where `CometTracer` failed while logging data to Comet
because the data was not JSON-encodable.

The problem was that some of the `Run` attributes could contain
non-default types; now these attributes are taken not from the run
instance but from the `run.dict()` return value.
2024-04-23 13:24:41 -04:00
Eugene Yurtsev
1c89e45c14 langchain[major]: breaks some chains to remove hidden defaults (#20759)
Breaks some chains in langchain to remove hidden chat model / llm instantiation.
2024-04-23 11:11:40 -04:00
Eugene Yurtsev
ad6b5f84e5 community[patch],core[minor]: Move in memory cache implementation to core (#20753)
This PR moves the InMemoryCache implementation from community to core.
2024-04-23 11:10:11 -04:00
Stefano Ottolenghi
4f67ce485a docs: Fix typo to render list (#20774)
This _should_ fix the currently broken list in the [Neo4jVector
page](https://python.langchain.com/docs/integrations/vectorstores/neo4jvector/).

![Screenshot from 2024-04-23
08-40-37](https://github.com/langchain-ai/langchain/assets/114478074/ab5ad622-879e-4764-93db-5f502eae479b)
2024-04-23 14:46:58 +00:00
Eugene Yurtsev
a2cc9b55ba core[patch]: Remove autoupgrade to addable dict in Runnable/RunnableLambda/RunnablePassthrough transform (#20677)
Causes an issue for this code

```python
from langchain.chat_models.openai import ChatOpenAI
from langchain.output_parsers.openai_tools import JsonOutputToolsParser
from langchain.schema import SystemMessage

prompt = SystemMessage(content="You are a nice assistant.") + "{question}"

llm = ChatOpenAI(
    model_kwargs={
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "web_search",
                    "description": "Searches the web for the answer to the question.",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "query": {
                                "type": "string",
                                "description": "The question to search for.",
                            },
                        },
                    },
                },
            }
        ],
    },
    streaming=True,
)

parser = JsonOutputToolsParser(first_tool_only=True)

llm_chain = prompt | llm | parser | (lambda x: x)


for chunk in llm_chain.stream({"question": "tell me more about turtles"}):
    print(chunk)

# message = llm_chain.invoke({"question": "tell me more about turtles"})

# print(message)
```

Instead, by definition, we'll assume that RunnableLambdas consume the
entire stream and that, if the stream isn't addable, the last message of
the stream is the one in a usable format.
---

If users want to use addable dicts, they can wrap the dict in an
AddableDict class.
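A minimal sketch of that wrapping, assuming `AddableDict` from `langchain_core.runnables.utils`:

```python
from langchain_core.runnables import RunnableLambda
from langchain_core.runnables.utils import AddableDict

# Returning an AddableDict keeps streamed chunks addable, so downstream
# consumers can accumulate them with `+`.
step = RunnableLambda(lambda x: AddableDict(answer=x["text"].upper()))

chunks = list(step.stream({"text": "turtles"}))
merged = chunks[0]
for chunk in chunks[1:]:
    merged = merged + chunk
print(merged)  # {'answer': 'TURTLES'}
```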

---

Likely, need to follow up with the same change for other places in the
code that do the upgrade
2024-04-23 10:35:06 -04:00
Oleksandr Yaremchuk
9428923bab experimental[minor]: upgrade the prompt injection model (#20783)
- **Description:** In January, Laiyer.ai became part of ProtectAI, which
means the model is now owned by ProtectAI. In addition, yesterday we
released a new version of the model addressing false-positive issues that
the LangChain community and others reported to us. The new model has
better accuracy than the previous version, and we thought the LangChain
community would benefit from using the [latest version of the
model](https://huggingface.co/protectai/deberta-v3-base-prompt-injection-v2).
- **Issue:** N/A
- **Dependencies:** N/A
- **Twitter handle:** @alex_yaremchuk
2024-04-23 10:23:39 -04:00
Eugene Yurtsev
645b1e142e core[minor],langchain[patch],community[patch]: Move InMemory and File implementations of Chat History to core (#20752)
This PR moves the implementations for chat history to core, so it's
easier to determine which dependencies need to be broken and where to add
deprecation warnings.
2024-04-23 10:22:11 -04:00
ccurme
7a922f3e48 core, openai: support custom token encoders (#20762) 2024-04-23 13:57:05 +00:00
Chen94yue
b481b73805 Update custom_retriever.ipynb (#20776)
Fixed an error in the sample code to ensure that the code can run
directly.

2024-04-23 13:47:08 +00:00
Bagatur
ed980601e1 docs: update examples in api ref (#20768) 2024-04-23 00:47:52 +00:00
Bagatur
be51cd3bc9 docs: fix api ref link autogeneration (#20766) 2024-04-22 17:36:41 -07:00
monke111
c807f0a6dd Update google_drive.ipynb (#20731)
langchain_community.document_loaders is deprecated;
use the new langchain_google_community instead

2024-04-22 23:30:46 +00:00
Katarina Supe
dc61e23886 docs: update Memgraph docs (#20736)
- **Description:** Memgraph Platform is run differently now, so I
updated this (I am a DX engineer from Memgraph).
2024-04-22 19:27:12 -04:00
Tabish Mir
6a0d44d632 docs: Fix link for partition_pdf in Semi_Structured_RAG.ipynb cookbook (#20763)
docs: Fix link for `partition_pdf` in Semi_Structured_RAG.ipynb cookbook

- **Description:** Fix incorrect link to unstructured-io `partition_pdf`
section
2024-04-22 23:22:55 +00:00
Bagatur
fa4d6f9f8b docs: install partner pkgs vercel (#20761) 2024-04-22 23:08:02 +00:00
Christophe Bornet
0ae5027d98 community[patch]: Remove usage of deprecated StoredBlobHistory in CassandraChatMessageHistory (#20666) 2024-04-22 17:11:05 -04:00
Bagatur
eb18f4e155 infra: rm sep repo partner dirs (#20756)
so you can `poetry run pip install -e libs/partners/*/` to your hearts
content
2024-04-22 14:05:39 -07:00
Bagatur
2a11a30572 docs: automatically add api ref links (#20755)
![Screenshot 2024-04-22 at 1 51 13
PM](https://github.com/langchain-ai/langchain/assets/22008038/b8b09fec-3800-4b97-bd26-5571b8308f4a)
2024-04-22 14:05:29 -07:00
Eugene Yurtsev
936c6cc74a langchain[patch]: Add missing deprecation for openai adapters (#20668)
Add missing deprecation for openai adapters
2024-04-22 14:05:55 -04:00
Eugene Yurtsev
38adbfdf34 community[patch],core[minor]: Move BaseToolKit to core.tools (#20669) 2024-04-22 14:04:30 -04:00
Mark Needham
ce23f8293a Community patch clickhouse make it possible to not specify index (#20460)
Vector indexes in ClickHouse are experimental at the moment and can
sometimes break or change behaviour, so this PR makes it possible to
specify that you don't want an index type at all.

Any queries against the embedding column will then be a brute-force /
linear scan, but that gives reasonable performance for small-to-medium
dataset sizes.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-22 10:46:37 -07:00
ccurme
c010ec8b71 patch: deprecate (a)get_relevant_documents (#20477)
- `.get_relevant_documents(query)` -> `.invoke(query)`
- `.get_relevant_documents(query=query)` -> `.invoke(query)`
- `.get_relevant_documents(query, callbacks=callbacks)` ->
`.invoke(query, config={"callbacks": callbacks})`
- `.get_relevant_documents(query, **kwargs)` -> `.invoke(query,
**kwargs)`

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-04-22 11:14:53 -04:00
A Noor
939d113d10 docs: Fixed grammar mistake (#20697)
Description: Changed "You are" to "You are a". Grammar issue.
Dependencies: None

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-22 02:55:05 +00:00
Matheus Henrique Raymundo
bb69819267 community: Fix the stop sequence key name for Mistral in Bedrock (#20709)
Fixing the wrong stop sequence key name that causes an error on AWS
Bedrock.
You can check the MistralAI bedrock parameters
[here](https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-mistral.html)
This change fixes this
[issue](https://github.com/langchain-ai/langchain/issues/20095)
2024-04-21 20:06:06 -04:00
Bagatur
1c7b3c75a7 community[patch], experimental[patch]: support tool-calling sql and p… (#20639)
d agents
2024-04-21 15:43:09 -07:00
Bagatur
d0cee65cdc langchain[patch]: langchain-pinecone self query support (#20702) 2024-04-21 15:42:39 -07:00
Leonid Kuligin
5ae738c4fe docs: on google-genai vs google-vertexai (#20713)
- **Description:** added a description of the differences between
langchain_google_genai and langchain_google_vertexai
2024-04-21 12:53:19 -07:00
shumway743
cb6e5e56c2 community[minor]: add graph store implementation for apache age (#20582)
**Description:** implemented GraphStore class for Apache Age graph db

**Dependencies:** depends on psycopg2

Unit and integration tests included. Formatting and linting have been
run.
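A hedged usage sketch; the `AGEGraph` class name and its psycopg2-style `conf` dict follow this PR's description and community graph-store conventions, but exact parameters may differ:

```python
from langchain_community.graphs.age_graph import AGEGraph

# Connection details are placeholders; conf is passed through to psycopg2.
graph = AGEGraph(
    graph_name="movies",
    conf={
        "database": "postgres",
        "user": "postgres",
        "password": "postgres",
        "host": "localhost",
        "port": 5432,
    },
)

graph.query('MERGE (m:Movie {title: "Inception"}) RETURN m')
print(graph.get_schema)
```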

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-20 14:31:04 -07:00
Christophe Bornet
c909ae0152 community[minor]: Add async methods to CassandraVectorStore (#20602)
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-04-20 02:09:58 +00:00
Leonid Ganeline
06d18c106d langchain[patch]: example_selector import fix (#20676)
Cleaned up and updated imports
2024-04-19 21:42:18 -04:00
Leonid Ganeline
d6470aab60 langchain: docstore import fix (#20678)
Cleaned up imports
2024-04-19 21:41:36 -04:00
Leonid Ganeline
3a750e130c templates: utilities import fix (#20679)
Updated imports from `from langchain.utilities` to `from
langchain_community.utilities`
2024-04-19 21:41:15 -04:00
Dmitry Tyumentsev
f111efeb6e community[patch]: YandexGPT API add ability to disable request logging (#20670)
Closes (#20622)

Added the ability to [disable logging of requests to
YandexGPT](https://yandex.cloud/en/docs/foundation-models/operations/yandexgpt/disable-logging).
2024-04-19 21:40:37 -04:00
Erick Friis
e5f5d9ff56 docs: aws listing (#20674) 2024-04-19 21:27:35 +00:00
Mateusz Szewczyk
75ffe51bbe ibm: Add support for Embedding Models (#20647)
---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-04-19 20:56:24 +00:00
Erick Friis
73809817ff community: release 0.0.34 (#20672) 2024-04-19 12:44:41 -07:00
Tomaz Bratanic
e4b38e2822 Update neo4j cypher templates to the function callback (#20515)
Update Neo4j Cypher templates to use function callback to pass context
instead of passing it in user prompt.

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-04-19 18:33:32 +00:00
Tomaz Bratanic
3d9b26fc28 Update neo4j vector documentation (#20455)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-04-19 18:32:13 +00:00
Tomaz Bratanic
8c08cf4619 community: Add support for relationship indexes in neo4j vector (#20657)
Neo4j has added relationship vector indexes.
We can't populate them, but we can use existing indexes for retrieval
2024-04-19 11:22:42 -07:00
Erick Friis
940242c1ec core: release 0.1.45 (#20664) 2024-04-19 09:55:02 -07:00
Saurabh Chalke
3dd6266bcc docs: Remove Duplicate --quiet Flag in Installation Command in LangSmith Docs (#20121)
**Description:** This pull request removes a duplicated `--quiet` flag
in the pip install command found in the LangSmith Walkthrough section of
the documentation.

**Issue:** N/A

**Dependencies:** None
2024-04-19 11:16:44 -04:00
Aditya
6a97448928 Updated Tutorials for Vertex Vector Search (#20376)
Thank you for contributing to LangChain!

- [ ] **PR title**: "package: docs"


- [ ] **PR message**: 
    - **Description:** Updated Tutorials for Vertex Vector Search
    - **Issue:** NA
    - **Dependencies:** NA
- **Twitter handle:** if your PR gets announced, and you'd like a
mention, we'll gladly shout you out!

@lkuligin for review

---------

Co-authored-by: adityarane@google.com <adityarane@google.com>
Co-authored-by: Leonid Kuligin <lkuligin@yandex.ru>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-04-19 10:38:00 -04:00
Boris Djurdjevic
c5aab9afe3 docs: Fix minor typo in data_connection/document_loaders/custom (#20648)
**Description:**
Minor documentation typo fix in
`data_connection/document_loaders/custom`: `thta's` -> `that's`
2024-04-19 14:17:00 +00:00
Souls-R
36084e7500 docs: fix variable name typo in example code (#20658)
This pull request corrects a mistake in the variable name within the
example code. The variable doc_schema has been changed to dog_schema to
fix the error.
2024-04-19 14:08:25 +00:00
Leonid Ganeline
beebd73f95 docs: integrations/retrievers cleanup (#20357)
Fixed format inconsistencies; added descriptions, links.
2024-04-19 10:02:41 -04:00
Leonid Ganeline
0b99e9201d docs: providers alibaba update (#20560)
Added missed integrations to the Alibaba Cloud provider page
2024-04-18 23:11:17 -07:00
Leonid Ganeline
27a4682415 docs: imports update (#20625)
Updated imports in docs

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-04-18 23:04:07 -07:00
Ethan Yang
53ae77b13e docs: Update openvino example documents links (#20638) 2024-04-18 22:57:28 -07:00
Sivaudha
baedc3ec0a langchain[minor]: Databricks vector search self query integration (#20627)
- Enable self querying feature for databricks vector search
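A hedged sketch of wiring this up with SelfQueryRetriever; the endpoint, index name, and metadata fields are placeholders, and the Databricks client call is an assumption rather than verified API:

```python
from databricks.vector_search.client import VectorSearchClient
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_community.vectorstores import DatabricksVectorSearch
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

# Placeholder index; assumes a Databricks vector search endpoint exists.
index = VectorSearchClient().get_index(
    endpoint_name="vs_endpoint", index_name="catalog.schema.movies_index"
)
vectorstore = DatabricksVectorSearch(
    index, text_column="summary", embedding=OpenAIEmbeddings()
)

metadata_field_info = [
    AttributeInfo(name="year", description="Release year", type="integer"),
    AttributeInfo(name="genre", description="Movie genre", type="string"),
]

retriever = SelfQueryRetriever.from_llm(
    ChatOpenAI(temperature=0),
    vectorstore,
    "Brief summary of a movie",
    metadata_field_info,
)
docs = retriever.invoke("sci-fi movies released after 2010")
```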

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-19 03:44:38 +00:00
ccurme
6d530481c1 openai: fix allowed block types (#20636) 2024-04-18 22:12:57 -04:00
Erick Friis
764871f97d infra: add test-doc-imports to ci failure (#20637) 2024-04-19 02:06:57 +00:00
Erick Friis
5c216ad08f upstage[patch]: un-xfail tool calling test, release 0.1.0 (#20635) 2024-04-19 02:02:21 +00:00
Nuno Campos
48307e46a3 core[patch]: Fix runnable map ser/de (#20631) 2024-04-18 18:52:33 -07:00
Charlie Holtz
1cbab0ebda community: update Replicate to work with official models (#20633)
Description: you don't need to pass a version for Replicate official
models. That was broken on LangChain until now!

You can now run: 

```
llm = Replicate(
    model="meta/meta-llama-3-8b-instruct",
    model_kwargs={"temperature": 0.75, "max_length": 500, "top_p": 1},
)
prompt = """
User: Answer the following yes/no question by reasoning step by step. Can a dog drive a car?
Assistant:
"""
llm(prompt)
```

I've updated the replicate.ipynb to reflect that.

twitter: @charliebholtz

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-04-19 01:43:40 +00:00
Congyu
dd5139e304 community[patch]: truncate zhipuai temperature and top_p parameters to [0.01, 0.99] (#20261)
ZhipuAI API only accepts `temperature` parameter between `(0, 1)` open
interval, and if `0` is passed, it responds with status code `400`.

However, 0 and 1 is often accepted by other APIs, for example, OpenAI
allows `[0, 2]` for temperature closed range.

This PR truncates the temperature parameter passed to `[0.01, 0.99]` to
improve compatibility between LangChain's ecosystem and ZhipuAI
(e.g., ragas `evaluate` often generates temperature 0, which results in
a lot of 400 invalid responses). The PR also truncates the `top_p`
parameter since it has the same restriction.

Reference: [glm-4 doc](https://open.bigmodel.cn/dev/api#glm-4) (which
unfortunately is in Chinese though).
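The truncation itself is simple clamping; a sketch (the helper name is illustrative, not the patch's actual function):

```python
def _truncate(value: float, low: float = 0.01, high: float = 0.99) -> float:
    """Clamp a sampling parameter into ZhipuAI's accepted open interval."""
    return min(max(value, low), high)

assert _truncate(0.0) == 0.01   # e.g. ragas `evaluate` passing temperature 0
assert _truncate(1.0) == 0.99
assert _truncate(0.5) == 0.5
```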

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-19 01:31:30 +00:00
Lance Martin
d5c22b80a5 community[patch]: Fix Ollama for LLaMA3 (#20624)
We see verbose generations w/ LLaMA3 and Ollama - 

https://smith.langchain.com/public/88c4cd21-3d57-4229-96fe-53443398ca99/r

--- 

The fix addresses the fact that when stop was set to an empty list, the
stream had no conditions under which to stop, which could lead to
excessive or unintended output.

Test LLaMA2 - 

https://smith.langchain.com/public/57dfc64a-591b-46fa-a1cd-8783acaefea2/r

Test LLaMA3 - 

https://smith.langchain.com/public/76ff5f47-ac89-4772-a7d2-5caa907d3fd6/r

https://smith.langchain.com/public/a31d2fad-9094-4c93-949a-964b27630ccb/r

Test Mistral -

https://smith.langchain.com/public/a4fe7114-c308-4317-b9fd-6c86d31f1c5b/r

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-04-19 00:20:32 +00:00
Erick Friis
726234eee5 infra: fix doc imports ci (#20629) 2024-04-18 23:42:03 +00:00
Erick Friis
3425988de7 core: deprecation default to qualname (#20578) 2024-04-18 15:35:17 -07:00
hulitaitai
7d0a008744 community[minor]: Add audio-parser "faster-whisper" in audio.py (#20012)
faster-whisper is a reimplementation of OpenAI's Whisper model using
CTranslate2, which is up to 4 times faster than openai/whisper for the
same accuracy while using less memory. The efficiency can be further
improved with 8-bit quantization on both CPU and GPU.

It can automatically detect the following 14 languages and transcribe
the text into their respective languages: en, zh, fr, de, ja, ko, ru,
es, th, it, pt, vi, ar, tr.

The GitHub repository for faster-whisper is:
    https://github.com/SYSTRAN/faster-whisper

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-04-18 20:50:59 +00:00
Guangdong Liu
e3c2431c5b community[patch]: Fix error in Apache Doris insert (#19989)
- **Issue:** #19886
2024-04-18 16:34:32 -04:00
naaive
6f0d4f3f09 docs: Update body_func to hybrid_query in ElasticsearchRetriever (#20498) 2024-04-18 20:19:02 +00:00
Tomaz Bratanic
27370b679e community[patch]: Ignore null and invalid embedding values for neo4j metadata filtering (#20558) 2024-04-18 16:15:45 -04:00
Eugene Yurtsev
718c9cbe3a mistral[patch]: Support both model and model_name (#20557) 2024-04-18 16:12:33 -04:00
Eugene Yurtsev
e3bd521654 docs: Remove example vsdx data (#20620)
VSDX data contains EMF files. Some of these apparently can contain
exploits with some Adobe tools.

This is likely a false positive from antivirus software, but we
can remove it nonetheless.
2024-04-18 16:10:40 -04:00
Dhruv Chawla
c0548eb632 docs: Update uptrain.ipynb to show outputs (#20551)
Hey @eyurtsev, I noticed that the notebook isn't displaying the outputs
properly. I've gone ahead and rerun the cells to ensure that readers can
easily understand the functionality without having to run the code
themselves.
2024-04-18 16:10:23 -04:00
Leonid Ganeline
95dc90609e experimental[patch]: prompts import fix (#20534)
Replaced `from langchain.prompts` with `from langchain_core.prompts`
where it is appropriate.
Most of the changes go to `langchain_experimental`
Similar to #20348
2024-04-18 16:09:11 -04:00
Massimiliano Pronesti
2542a09abc community[patch]: AzureSearch incorrectly converted to retriever (#20601)
Closes #20600.

Please see the issue for more details.
2024-04-18 16:06:47 -04:00
Leonid Ganeline
520ef24fb9 docs: import update (#20610)
Updated imports
2024-04-18 16:05:17 -04:00
Christophe Bornet
8f0b5687a3 community[minor]: Add hybrid search to Cassandra VectorStore (#20286)
Only supported by Astra DB at the moment.
**Twitter handle:** cbornet_
2024-04-18 15:58:43 -04:00
Christophe Bornet
d2d01370bc community[minor]: Add async methods to CassandraLoader (#20609)
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-04-18 19:45:20 +00:00
Eugene Yurtsev
8c29b7bf35 mistralai[patch]: Use public attribute for eventsource.response (#20580)
Minor change, use the public attribute instead of the protected one.
2024-04-18 14:12:12 -04:00
Erick Friis
66fb0b1f35 core: fix fireworks mapping (#20613) 2024-04-18 18:08:40 +00:00
2246 changed files with 100354 additions and 18919 deletions

View File

@@ -12,7 +12,7 @@
// The optional 'workspaceFolder' property is the path VS Code should open by default when
// connected. This is typically a file mount in .devcontainer/docker-compose.yml
"workspaceFolder": "/workspaces/${localWorkspaceFolderBasename}",
"workspaceFolder": "/workspaces/langchain",
// Prevent the container from shutting down
"overrideCommand": true

View File

@@ -6,7 +6,7 @@ services:
context: ..
volumes:
# Update this to wherever you want VS Code to mount the folder of your project
- ..:/workspaces:cached
- ..:/workspaces/langchain:cached
networks:
- langchain-network
# environment:

View File

@@ -19,6 +19,7 @@ if __name__ == "__main__":
"test": set(),
"extended-test": set(),
}
docs_edited = False
if len(files) == 300:
# max diff length is 300 files - there are likely files missing
@@ -76,6 +77,8 @@ if __name__ == "__main__":
"an update for this new library!"
)
elif any(file.startswith(p) for p in ["docs/", "templates/", "cookbook/"]):
if file.startswith("docs/"):
docs_edited = True
dirs_to_run["lint"].add(".")
outputs = {
@@ -84,6 +87,7 @@ if __name__ == "__main__":
),
"dirs-to-test": list(dirs_to_run["test"] | dirs_to_run["extended-test"]),
"dirs-to-extended-test": list(dirs_to_run["extended-test"]),
"docs-edited": "true" if docs_edited else "",
}
for key, value in outputs.items():
json_output = json.dumps(value)

View File

@@ -13,6 +13,11 @@ on:
required: true
type: string
default: 'libs/langchain'
dangerous-nonmaster-release:
required: false
type: boolean
default: false
description: "Release from a non-master branch (danger!)"
env:
PYTHON_VERSION: "3.11"
@@ -20,7 +25,7 @@ env:
jobs:
build:
if: github.ref == 'refs/heads/master'
if: github.ref == 'refs/heads/master' || inputs.dangerous-nonmaster-release
environment: Scheduled testing
runs-on: ubuntu-latest
@@ -73,8 +78,10 @@ jobs:
- build
uses:
./.github/workflows/_test_release.yml
permissions: write-all
with:
working-directory: ${{ inputs.working-directory }}
dangerous-nonmaster-release: ${{ inputs.dangerous-nonmaster-release }}
secrets: inherit
pre-release-checks:
@@ -112,7 +119,7 @@ jobs:
PKG_NAME: ${{ needs.build.outputs.pkg-name }}
VERSION: ${{ needs.build.outputs.version }}
# Here we use:
# - The default regular PyPI index as the *primary* index, meaning
# - The default regular PyPI index as the *primary* index, meaning
# that it takes priority (https://pypi.org/simple)
# - The test PyPI index as an extra index, so that any dependencies that
# are not found on test PyPI can be resolved and installed anyway.
@@ -171,7 +178,7 @@ jobs:
env:
MIN_VERSIONS: ${{ steps.min-version.outputs.min-versions }}
run: |
poetry run pip install $MIN_VERSIONS
poetry run pip install --force-reinstall $MIN_VERSIONS
make tests
working-directory: ${{ inputs.working-directory }}
@@ -291,14 +298,13 @@ jobs:
with:
name: dist
path: ${{ inputs.working-directory }}/dist/
- name: Create Release
- name: Create Tag
uses: ncipollo/release-action@v1
if: ${{ inputs.working-directory == 'libs/langchain' }}
with:
artifacts: "dist/*"
token: ${{ secrets.GITHUB_TOKEN }}
draft: false
generateReleaseNotes: true
tag: v${{ needs.build.outputs.version }}
commit: master
generateReleaseNotes: false
tag: ${{needs.build.outputs.pkg-name}}==${{ needs.build.outputs.version }}
body: "# Release ${{needs.build.outputs.pkg-name}}==${{ needs.build.outputs.version }}\n\nPackage-specific release note generation coming soon."
commit: ${{ github.sha }}

View File

@@ -7,6 +7,11 @@ on:
required: true
type: string
description: "From which folder this pipeline executes"
dangerous-nonmaster-release:
required: false
type: boolean
default: false
description: "Release from a non-master branch (danger!)"
env:
POETRY_VERSION: "1.7.1"
@@ -14,7 +19,7 @@ env:
jobs:
build:
if: github.ref == 'refs/heads/master'
if: github.ref == 'refs/heads/master' || inputs.dangerous-nonmaster-release
runs-on: ubuntu-latest
outputs:
@@ -93,3 +98,4 @@ jobs:
# This is *only for CI use* and is *extremely dangerous* otherwise!
# https://github.com/pypa/gh-action-pypi-publish#tolerating-release-package-file-duplicates
skip-existing: true
attestations: false

View File

@@ -36,6 +36,7 @@ jobs:
dirs-to-lint: ${{ steps.set-matrix.outputs.dirs-to-lint }}
dirs-to-test: ${{ steps.set-matrix.outputs.dirs-to-test }}
dirs-to-extended-test: ${{ steps.set-matrix.outputs.dirs-to-extended-test }}
docs-edited: ${{ steps.set-matrix.outputs.docs-edited }}
lint:
name: cd ${{ matrix.working-directory }}
needs: [ build ]
@@ -60,9 +61,9 @@ jobs:
working-directory: ${{ matrix.working-directory }}
secrets: inherit
test_doc_imports:
test-doc-imports:
needs: [ build ]
if: ${{ needs.build.outputs.dirs-to-test != '[]' }}
if: ${{ needs.build.outputs.dirs-to-test != '[]' || needs.build.outputs.docs-edited }}
uses: ./.github/workflows/_test_doc_imports.yml
secrets: inherit
@@ -140,7 +141,7 @@ jobs:
echo "$STATUS" | grep 'nothing to commit, working tree clean'
ci_success:
name: "CI Success"
needs: [build, lint, test, compile-integration-tests, dependencies, extended-tests]
needs: [build, lint, test, compile-integration-tests, dependencies, extended-tests, test-doc-imports]
if: |
always()
runs-on: ubuntu-latest

View File

@@ -3,9 +3,9 @@ name: CI / cd . / make spell_check
on:
push:
branches: [master]
branches: [master, v0.1]
pull_request:
branches: [master]
branches: [master, v0.1]
permissions:
contents: read

View File

@@ -19,11 +19,11 @@ jobs:
working-directory:
- "libs/partners/openai"
- "libs/partners/anthropic"
# - "libs/partners/ai21" # standard-tests broken
- "libs/partners/ai21"
- "libs/partners/fireworks"
# - "libs/partners/groq" # rate-limited
- "libs/partners/groq"
- "libs/partners/mistralai"
# - "libs/partners/together" # rate-limited
- "libs/partners/together"
name: Python ${{ matrix.python-version }} - ${{ matrix.working-directory }}
steps:
- uses: actions/checkout@v4

View File

@@ -17,16 +17,11 @@ clean: docs_clean api_docs_clean
## docs_build: Build the documentation.
docs_build:
docs/.local_build.sh
cd docs && make build
## docs_clean: Clean the documentation build artifacts.
docs_clean:
@if [ -d _dist ]; then \
rm -r _dist; \
echo "Directory _dist has been cleaned."; \
else \
echo "Nothing to clean."; \
fi
cd docs && make clean
## docs_linkcheck: Run linkchecker on the documentation.
docs_linkcheck:
@@ -60,12 +55,12 @@ spell_fix:
## lint: Run linting on the project.
lint lint_package lint_tests:
poetry run ruff docs templates cookbook
poetry run ruff check docs templates cookbook
poetry run ruff format docs templates cookbook --diff
poetry run ruff --select I docs templates cookbook
poetry run ruff check --select I docs templates cookbook
git grep 'from langchain import' docs/docs templates cookbook | grep -vE 'from langchain import (hub)' && exit 1 || exit 0
## format: Format the project files.
format format_diff:
poetry run ruff format docs templates cookbook
poetry run ruff --select I --fix docs templates cookbook
poetry run ruff check --select I --fix docs templates cookbook

View File

@@ -47,7 +47,7 @@ For these applications, LangChain simplifies the entire application lifecycle:
- **`langchain-community`**: Third party integrations.
- Some integrations have been further split into **partner packages** that only rely on **`langchain-core`**. Examples include **`langchain_openai`** and **`langchain_anthropic`**.
- **`langchain`**: Chains, agents, and retrieval strategies that make up an application's cognitive architecture.
- **[LangGraph](https://python.langchain.com/docs/langgraph)**: A library for building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.
- **[`LangGraph`](https://python.langchain.com/docs/langgraph)**: A library for building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.
### Productionization:
- **[LangSmith](https://python.langchain.com/docs/langsmith)**: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain.

View File

@@ -464,8 +464,8 @@
" Check if the base64 data is an image by looking at the start of the data\n",
" \"\"\"\n",
" image_signatures = {\n",
" b\"\\xFF\\xD8\\xFF\": \"jpg\",\n",
" b\"\\x89\\x50\\x4E\\x47\\x0D\\x0A\\x1A\\x0A\": \"png\",\n",
" b\"\\xff\\xd8\\xff\": \"jpg\",\n",
" b\"\\x89\\x50\\x4e\\x47\\x0d\\x0a\\x1a\\x0a\": \"png\",\n",
" b\"\\x47\\x49\\x46\\x38\": \"gif\",\n",
" b\"\\x52\\x49\\x46\\x46\": \"webp\",\n",
" }\n",
@@ -604,7 +604,7 @@
"source": [
"# Check retrieval\n",
"query = \"Give me company names that are interesting investments based on EV / NTM and NTM rev growth. Consider EV / NTM multiples vs historical?\"\n",
"docs = retriever_multi_vector_img.get_relevant_documents(query, limit=6)\n",
"docs = retriever_multi_vector_img.invoke(query, limit=6)\n",
"\n",
"# We get 4 docs\n",
"len(docs)"
@@ -630,7 +630,7 @@
"source": [
"# Check retrieval\n",
"query = \"What are the EV / NTM and NTM rev growth for MongoDB, Cloudflare, and Datadog?\"\n",
"docs = retriever_multi_vector_img.get_relevant_documents(query, limit=6)\n",
"docs = retriever_multi_vector_img.invoke(query, limit=6)\n",
"\n",
"# We get 4 docs\n",
"len(docs)"

View File

@@ -185,7 +185,7 @@
" )\n",
" # Text summary chain\n",
" model = VertexAI(\n",
" temperature=0, model_name=\"gemini-pro\", max_output_tokens=1024\n",
" temperature=0, model_name=\"gemini-pro\", max_tokens=1024\n",
" ).with_fallbacks([empty_response])\n",
" summarize_chain = {\"element\": lambda x: x} | prompt | model | StrOutputParser()\n",
"\n",
@@ -254,9 +254,9 @@
"\n",
"def image_summarize(img_base64, prompt):\n",
" \"\"\"Make image summary\"\"\"\n",
" model = ChatVertexAI(model_name=\"gemini-pro-vision\", max_output_tokens=1024)\n",
" model = ChatVertexAI(model=\"gemini-pro-vision\", max_tokens=1024)\n",
"\n",
" msg = model(\n",
" msg = model.invoke(\n",
" [\n",
" HumanMessage(\n",
" content=[\n",
@@ -462,8 +462,8 @@
" Check if the base64 data is an image by looking at the start of the data\n",
" \"\"\"\n",
" image_signatures = {\n",
" b\"\\xFF\\xD8\\xFF\": \"jpg\",\n",
" b\"\\x89\\x50\\x4E\\x47\\x0D\\x0A\\x1A\\x0A\": \"png\",\n",
" b\"\\xff\\xd8\\xff\": \"jpg\",\n",
" b\"\\x89\\x50\\x4e\\x47\\x0d\\x0a\\x1a\\x0a\": \"png\",\n",
" b\"\\x47\\x49\\x46\\x38\": \"gif\",\n",
" b\"\\x52\\x49\\x46\\x46\": \"webp\",\n",
" }\n",
@@ -553,9 +553,7 @@
" \"\"\"\n",
"\n",
" # Multi-modal LLM\n",
" model = ChatVertexAI(\n",
" temperature=0, model_name=\"gemini-pro-vision\", max_output_tokens=1024\n",
" )\n",
" model = ChatVertexAI(temperature=0, model_name=\"gemini-pro-vision\", max_tokens=1024)\n",
"\n",
" # RAG pipeline\n",
" chain = (\n",
@@ -604,7 +602,7 @@
],
"source": [
"query = \"What are the EV / NTM and NTM rev growth for MongoDB, Cloudflare, and Datadog?\"\n",
"docs = retriever_multi_vector_img.get_relevant_documents(query, limit=1)\n",
"docs = retriever_multi_vector_img.invoke(query, limit=1)\n",
"\n",
"# We get 2 docs\n",
"len(docs)"

View File

@@ -47,6 +47,7 @@ Notebook | Description
[press_releases.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/press_releases.ipynb) | Retrieve and query company press release data powered by [Kay.ai](https://kay.ai).
[program_aided_language_model.i...](https://github.com/langchain-ai/langchain/tree/master/cookbook/program_aided_language_model.ipynb) | Implement program-aided language models as described in the provided research paper.
[qa_citations.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/qa_citations.ipynb) | Different ways to get a model to cite its sources.
[rag_upstage_layout_analysis_groundedness_check.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/rag_upstage_layout_analysis_groundedness_check.ipynb) | End-to-end RAG example using Upstage Layout Analysis and Groundedness Check.
[retrieval_in_sql.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/retrieval_in_sql.ipynb) | Perform retrieval-augmented-generation (rag) on a PostgreSQL database using pgvector.
[sales_agent_with_context.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/sales_agent_with_context.ipynb) | Implement a context-aware ai sales agent, salesgpt, that can have natural sales conversations, interact with other systems, and use a product knowledge base to discuss a company's offerings.
[self_query_hotel_search.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/self_query_hotel_search.ipynb) | Build a hotel room search feature with self-querying retrieval, using a specific hotel recommendation dataset.
@@ -56,3 +57,4 @@ Notebook | Description
[two_agent_debate_tools.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/two_agent_debate_tools.ipynb) | Simulate multi-agent dialogues where the agents can utilize various tools.
[two_player_dnd.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/two_player_dnd.ipynb) | Simulate a two-player dungeons & dragons game, where a dialogue simulator class is used to coordinate the dialogue between the protagonist and the dungeon master.
[wikibase_agent.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/wikibase_agent.ipynb) | Create a simple wikibase agent that utilizes sparql generation, with testing done on http://wikidata.org.
[oracleai_demo.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/oracleai_demo.ipynb) | This guide outlines how to utilize Oracle AI Vector Search alongside Langchain for an end-to-end RAG pipeline, providing step-by-step examples. The process includes loading documents from various sources using OracleDocLoader, summarizing them either within or outside the database with OracleSummary, and generating embeddings similarly through OracleEmbeddings. It also covers chunking documents according to specific requirements using Advanced Oracle Capabilities from OracleTextSplitter, and finally, storing and indexing these documents in a Vector Store for querying with OracleVS.

View File

@@ -75,7 +75,7 @@
"\n",
"Apply to the [`LLaMA2`](https://arxiv.org/pdf/2307.09288.pdf) paper. \n",
"\n",
"We use the Unstructured [`partition_pdf`](https://unstructured-io.github.io/unstructured/bricks/partition.html#partition-pdf), which segments a PDF document by using a layout model. \n",
"We use the Unstructured [`partition_pdf`](https://unstructured-io.github.io/unstructured/core/partition.html#partition-pdf), which segments a PDF document by using a layout model. \n",
"\n",
"This layout model makes it possible to extract elements, such as tables, from pdfs. \n",
"\n",

View File

@@ -562,9 +562,7 @@
],
"source": [
"# We can retrieve this table\n",
"retriever.get_relevant_documents(\n",
" \"What are results for LLaMA across across domains / subjects?\"\n",
")[1]"
"retriever.invoke(\"What are results for LLaMA across across domains / subjects?\")[1]"
]
},
{
@@ -614,9 +612,7 @@
}
],
"source": [
"retriever.get_relevant_documents(\"Images / figures with playful and creative examples\")[\n",
" 1\n",
"]"
"retriever.invoke(\"Images / figures with playful and creative examples\")[1]"
]
},
{

View File

@@ -501,9 +501,7 @@
}
],
"source": [
"retriever.get_relevant_documents(\"Images / figures with playful and creative examples\")[\n",
" 0\n",
"]"
"retriever.invoke(\"Images / figures with playful and creative examples\")[0]"
]
},
{

View File

@@ -342,7 +342,7 @@
"# Testing on retrieval\n",
"query = \"What percentage of CPI is dedicated to Housing, and how does it compare to the combined percentage of Medical Care, Apparel, and Other Goods and Services?\"\n",
"suffix_for_images = \" Include any pie charts, graphs, or tables.\"\n",
"docs = retriever_multi_vector_img.get_relevant_documents(query + suffix_for_images)"
"docs = retriever_multi_vector_img.invoke(query + suffix_for_images)"
]
},
{
@@ -532,8 +532,8 @@
"def is_image_data(b64data):\n",
" \"\"\"Check if the base64 data is an image by looking at the start of the data.\"\"\"\n",
" image_signatures = {\n",
" b\"\\xFF\\xD8\\xFF\": \"jpg\",\n",
" b\"\\x89\\x50\\x4E\\x47\\x0D\\x0A\\x1A\\x0A\": \"png\",\n",
" b\"\\xff\\xd8\\xff\": \"jpg\",\n",
" b\"\\x89\\x50\\x4e\\x47\\x0d\\x0a\\x1a\\x0a\": \"png\",\n",
" b\"\\x47\\x49\\x46\\x38\": \"gif\",\n",
" b\"\\x52\\x49\\x46\\x46\": \"webp\",\n",
" }\n",

View File

@@ -90,7 +90,7 @@
" ) -> AIMessage:\n",
" messages = self.update_messages(input_message)\n",
"\n",
" output_message = self.model(messages)\n",
" output_message = self.model.invoke(messages)\n",
" self.update_messages(output_message)\n",
"\n",
" return output_message"

557
cookbook/cql_agent.ipynb Normal file
View File

@@ -0,0 +1,557 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup Environment"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Python Modules"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Install the following Python modules:\n",
"\n",
"```bash\n",
"pip install ipykernel python-dotenv cassio pandas langchain_openai langchain langchain-community langchainhub langchain_experimental openai-multi-tool-use-parallel-patch\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load the `.env` File"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Connection is via `cassio` using `auto=True` parameter, and the notebook uses OpenAI. You should create a `.env` file accordingly.\n",
"\n",
"For Casssandra, set:\n",
"```bash\n",
"CASSANDRA_CONTACT_POINTS\n",
"CASSANDRA_USERNAME\n",
"CASSANDRA_PASSWORD\n",
"CASSANDRA_KEYSPACE\n",
"```\n",
"\n",
"For Astra, set:\n",
"```bash\n",
"ASTRA_DB_APPLICATION_TOKEN\n",
"ASTRA_DB_DATABASE_ID\n",
"ASTRA_DB_KEYSPACE\n",
"```\n",
"\n",
"For example:\n",
"\n",
"```bash\n",
"# Connection to Astra:\n",
"ASTRA_DB_DATABASE_ID=a1b2c3d4-...\n",
"ASTRA_DB_APPLICATION_TOKEN=AstraCS:...\n",
"ASTRA_DB_KEYSPACE=notebooks\n",
"\n",
"# Also set \n",
"OPENAI_API_KEY=sk-....\n",
"```\n",
"\n",
"(You may also modify the below code to directly connect with `cassio`.)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from dotenv import load_dotenv\n",
"\n",
"load_dotenv(override=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Connect to Cassandra"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"import cassio\n",
"\n",
"cassio.init(auto=True)\n",
"session = cassio.config.resolve_session()\n",
"if not session:\n",
" raise Exception(\n",
" \"Check environment configuration or manually configure cassio connection parameters\"\n",
" )\n",
"\n",
"keyspace = os.environ.get(\n",
" \"ASTRA_DB_KEYSPACE\", os.environ.get(\"CASSANDRA_KEYSPACE\", None)\n",
")\n",
"if not keyspace:\n",
" raise ValueError(\"a KEYSPACE environment variable must be set\")\n",
"\n",
"session.set_keyspace(keyspace)"
]
},
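{
"cell_type": "markdown",
"metadata": {},
"source": [
"As mentioned in the `.env` section above, you can also connect by calling `cassio.init` with explicit parameters instead of `auto=True`. The sketch below is commented out and only illustrative; the keyword names (`token`, `database_id`, `keyspace`) are assumptions, so check the `cassio` documentation for the exact signature."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Hypothetical direct Astra connection (parameter names are assumptions):\n",
"# cassio.init(\n",
"#     token=os.environ[\"ASTRA_DB_APPLICATION_TOKEN\"],\n",
"#     database_id=os.environ[\"ASTRA_DB_DATABASE_ID\"],\n",
"#     keyspace=os.environ.get(\"ASTRA_DB_KEYSPACE\"),\n",
"# )"
]
},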
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup Database"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This needs to be done one time only!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Download Data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The dataset used is from Kaggle, the [Environmental Sensor Telemetry Data](https://www.kaggle.com/datasets/garystafford/environmental-sensor-data-132k?select=iot_telemetry_data.csv). The next cell will download and unzip the data into a Pandas dataframe. The following cell is instructions to download manually. \n",
"\n",
"The net result of this section is you should have a Pandas dataframe variable `df`."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Download Automatically"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from io import BytesIO\n",
"from zipfile import ZipFile\n",
"\n",
"import pandas as pd\n",
"import requests\n",
"\n",
"datasetURL = \"https://storage.googleapis.com/kaggle-data-sets/788816/1355729/bundle/archive.zip?X-Goog-Algorithm=GOOG4-RSA-SHA256&X-Goog-Credential=gcp-kaggle-com%40kaggle-161607.iam.gserviceaccount.com%2F20240404%2Fauto%2Fstorage%2Fgoog4_request&X-Goog-Date=20240404T115828Z&X-Goog-Expires=259200&X-Goog-SignedHeaders=host&X-Goog-Signature=2849f003b100eb9dcda8dd8535990f51244292f67e4f5fad36f14aa67f2d4297672d8fe6ff5a39f03a29cda051e33e95d36daab5892b8874dcd5a60228df0361fa26bae491dd4371f02dd20306b583a44ba85a4474376188b1f84765147d3b4f05c57345e5de883c2c29653cce1f3755cd8e645c5e952f4fb1c8a735b22f0c811f97f7bce8d0235d0d3731ca8ab4629ff381f3bae9e35fc1b181c1e69a9c7913a5e42d9d52d53e5f716467205af9c8a3cc6746fc5352e8fbc47cd7d18543626bd67996d18c2045c1e475fc136df83df352fa747f1a3bb73e6ba3985840792ec1de407c15836640ec96db111b173bf16115037d53fdfbfd8ac44145d7f9a546aa\"\n",
"\n",
"response = requests.get(datasetURL)\n",
"if response.status_code == 200:\n",
" zip_file = ZipFile(BytesIO(response.content))\n",
" csv_file_name = zip_file.namelist()[0]\n",
"else:\n",
" print(\"Failed to download the file\")\n",
"\n",
"with zip_file.open(csv_file_name) as csv_file:\n",
" df = pd.read_csv(csv_file)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Download Manually"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can download the `.zip` file and unpack the `.csv` contained within. Comment in the next line, and adjust the path to this `.csv` file appropriately."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# df = pd.read_csv(\"/path/to/iot_telemetry_data.csv\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load Data into Cassandra"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This section assumes the existence of a dataframe `df`, the following cell validates its structure. The Download section above creates this object."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"assert df is not None, \"Dataframe 'df' must be set\"\n",
"expected_columns = [\n",
" \"ts\",\n",
" \"device\",\n",
" \"co\",\n",
" \"humidity\",\n",
" \"light\",\n",
" \"lpg\",\n",
" \"motion\",\n",
" \"smoke\",\n",
" \"temp\",\n",
"]\n",
"assert all(\n",
" [column in df.columns for column in expected_columns]\n",
"), \"DataFrame does not have the expected columns\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Create and load tables:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from datetime import UTC, datetime\n",
"\n",
"from cassandra.query import BatchStatement\n",
"\n",
"# Create sensors table\n",
"table_query = \"\"\"\n",
"CREATE TABLE IF NOT EXISTS iot_sensors (\n",
" device text,\n",
" conditions text,\n",
" room text,\n",
" PRIMARY KEY (device)\n",
")\n",
"WITH COMMENT = 'Environmental IoT room sensor metadata.';\n",
"\"\"\"\n",
"session.execute(table_query)\n",
"\n",
"pstmt = session.prepare(\n",
" \"\"\"\n",
"INSERT INTO iot_sensors (device, conditions, room)\n",
"VALUES (?, ?, ?)\n",
"\"\"\"\n",
")\n",
"\n",
"devices = [\n",
" (\"00:0f:00:70:91:0a\", \"stable conditions, cooler and more humid\", \"room 1\"),\n",
" (\"1c:bf:ce:15:ec:4d\", \"highly variable temperature and humidity\", \"room 2\"),\n",
" (\"b8:27:eb:bf:9d:51\", \"stable conditions, warmer and dryer\", \"room 3\"),\n",
"]\n",
"\n",
"for device, conditions, room in devices:\n",
" session.execute(pstmt, (device, conditions, room))\n",
"\n",
"print(\"Sensors inserted successfully.\")\n",
"\n",
"# Create data table\n",
"table_query = \"\"\"\n",
"CREATE TABLE IF NOT EXISTS iot_data (\n",
" day text,\n",
" device text,\n",
" ts timestamp,\n",
" co double,\n",
" humidity double,\n",
" light boolean,\n",
" lpg double,\n",
" motion boolean,\n",
" smoke double,\n",
" temp double,\n",
" PRIMARY KEY ((day, device), ts)\n",
")\n",
"WITH COMMENT = 'Data from environmental IoT room sensors. Columns include device identifier, timestamp (ts) of the data collection, carbon monoxide level (co), relative humidity, light presence, LPG concentration, motion detection, smoke concentration, and temperature (temp). Data is partitioned by day and device.';\n",
"\"\"\"\n",
"session.execute(table_query)\n",
"\n",
"pstmt = session.prepare(\n",
" \"\"\"\n",
"INSERT INTO iot_data (day, device, ts, co, humidity, light, lpg, motion, smoke, temp)\n",
"VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)\n",
"\"\"\"\n",
")\n",
"\n",
"\n",
"def insert_data_batch(name, group):\n",
" batch = BatchStatement()\n",
" day, device = name\n",
" print(f\"Inserting batch for day: {day}, device: {device}\")\n",
"\n",
" for _, row in group.iterrows():\n",
" timestamp = datetime.fromtimestamp(row[\"ts\"], UTC)\n",
" batch.add(\n",
" pstmt,\n",
" (\n",
" day,\n",
" row[\"device\"],\n",
" timestamp,\n",
" row[\"co\"],\n",
" row[\"humidity\"],\n",
" row[\"light\"],\n",
" row[\"lpg\"],\n",
" row[\"motion\"],\n",
" row[\"smoke\"],\n",
" row[\"temp\"],\n",
" ),\n",
" )\n",
"\n",
" session.execute(batch)\n",
"\n",
"\n",
"# Convert columns to appropriate types\n",
"df[\"light\"] = df[\"light\"] == \"true\"\n",
"df[\"motion\"] = df[\"motion\"] == \"true\"\n",
"df[\"ts\"] = df[\"ts\"].astype(float)\n",
"df[\"day\"] = df[\"ts\"].apply(\n",
" lambda x: datetime.fromtimestamp(x, UTC).strftime(\"%Y-%m-%d\")\n",
")\n",
"\n",
"grouped_df = df.groupby([\"day\", \"device\"])\n",
"\n",
"for name, group in grouped_df:\n",
" insert_data_batch(name, group)\n",
"\n",
"print(\"Data load complete\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(session.keyspace)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load the Tools"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Python `import` statements for the demo:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import AgentExecutor, create_openai_tools_agent\n",
"from langchain_community.agent_toolkits.cassandra_database.toolkit import (\n",
" CassandraDatabaseToolkit,\n",
")\n",
"from langchain_community.tools.cassandra_database.prompt import QUERY_PATH_PROMPT\n",
"from langchain_community.tools.cassandra_database.tool import (\n",
" GetSchemaCassandraDatabaseTool,\n",
" GetTableDataCassandraDatabaseTool,\n",
" QueryCassandraDatabaseTool,\n",
")\n",
"from langchain_community.utilities.cassandra_database import CassandraDatabase\n",
"from langchain_openai import ChatOpenAI"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `CassandraDatabase` object is loaded from `cassio`, though it does accept a `Session`-type parameter as an alternative."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Create a CassandraDatabase instance\n",
"db = CassandraDatabase(include_tables=[\"iot_sensors\", \"iot_data\"])\n",
"\n",
"# Create the Cassandra Database tools\n",
"query_tool = QueryCassandraDatabaseTool(db=db)\n",
"schema_tool = GetSchemaCassandraDatabaseTool(db=db)\n",
"select_data_tool = GetTableDataCassandraDatabaseTool(db=db)"
]
},
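{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal sketch of the `Session` alternative mentioned above (assuming the installed `langchain-community` version exposes a `session` keyword on `CassandraDatabase`), you could pass the resolved `cassio` session explicitly:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Hypothetical alternative: pass the cassio session explicitly instead of\n",
"# relying on the cassio-configured default connection.\n",
"# db = CassandraDatabase(session=session, include_tables=[\"iot_sensors\", \"iot_data\"])"
]
},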
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The tools can be invoked directly:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Test the tools\n",
"print(\"Executing a CQL query:\")\n",
"query = \"SELECT * FROM iot_sensors LIMIT 5;\"\n",
"result = query_tool.run({\"query\": query})\n",
"print(result)\n",
"\n",
"print(\"\\nGetting the schema for a keyspace:\")\n",
"schema = schema_tool.run({\"keyspace\": keyspace})\n",
"print(schema)\n",
"\n",
"print(\"\\nGetting data from a table:\")\n",
"table = \"iot_data\"\n",
"predicate = \"day = '2020-07-14' and device = 'b8:27:eb:bf:9d:51'\"\n",
"data = select_data_tool.run(\n",
" {\"keyspace\": keyspace, \"table\": table, \"predicate\": predicate, \"limit\": 5}\n",
")\n",
"print(data)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Agent Configuration"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import Tool\n",
"from langchain_experimental.utilities import PythonREPL\n",
"\n",
"python_repl = PythonREPL()\n",
"\n",
"repl_tool = Tool(\n",
" name=\"python_repl\",\n",
" description=\"A Python shell. Use this to execute python commands. Input should be a valid python command. If you want to see the output of a value, you should print it out with `print(...)`.\",\n",
" func=python_repl.run,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain import hub\n",
"\n",
"llm = ChatOpenAI(temperature=0, model=\"gpt-4-1106-preview\")\n",
"toolkit = CassandraDatabaseToolkit(db=db)\n",
"\n",
"# context = toolkit.get_context()\n",
"# tools = toolkit.get_tools()\n",
"tools = [schema_tool, select_data_tool, repl_tool]\n",
"\n",
"input = (\n",
" QUERY_PATH_PROMPT\n",
" + f\"\"\"\n",
"\n",
"Here is your task: In the {keyspace} keyspace, find the total number of times the temperature of each device has exceeded 23 degrees on July 14, 2020.\n",
" Create a summary report including the name of the room. Use Pandas if helpful.\n",
"\"\"\"\n",
")\n",
"\n",
"prompt = hub.pull(\"hwchase17/openai-tools-agent\")\n",
"\n",
"# messages = [\n",
"# HumanMessagePromptTemplate.from_template(input),\n",
"# AIMessage(content=QUERY_PATH_PROMPT),\n",
"# MessagesPlaceholder(variable_name=\"agent_scratchpad\"),\n",
"# ]\n",
"\n",
"# prompt = ChatPromptTemplate.from_messages(messages)\n",
"# print(prompt)\n",
"\n",
"# Choose the LLM that will drive the agent\n",
"# Only certain models support this\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-1106\", temperature=0)\n",
"\n",
"# Construct the OpenAI Tools agent\n",
"agent = create_openai_tools_agent(llm, tools, prompt)\n",
"\n",
"print(\"Available tools:\")\n",
"for tool in tools:\n",
" print(\"\\t\" + tool.name + \" - \" + tool.description + \" - \" + str(tool))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)\n",
"\n",
"response = agent_executor.invoke({\"input\": input})\n",
"\n",
"print(response[\"output\"])"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

View File

@@ -169,7 +169,7 @@
"\n",
"def get_tools(query):\n",
" # Get documents, which contain the Plugins to use\n",
" docs = retriever.get_relevant_documents(query)\n",
" docs = retriever.invoke(query)\n",
" # Get the toolkits, one for each plugin\n",
" tool_kits = [toolkits_dict[d.metadata[\"plugin_name\"]] for d in docs]\n",
" # Get the tools: a separate NLAChain for each endpoint\n",

View File

@@ -193,7 +193,7 @@
"\n",
"def get_tools(query):\n",
" # Get documents, which contain the Plugins to use\n",
" docs = retriever.get_relevant_documents(query)\n",
" docs = retriever.invoke(query)\n",
" # Get the toolkits, one for each plugin\n",
" tool_kits = [toolkits_dict[d.metadata[\"plugin_name\"]] for d in docs]\n",
" # Get the tools: a separate NLAChain for each endpoint\n",

View File

@@ -142,7 +142,7 @@
"\n",
"\n",
"def get_tools(query):\n",
" docs = retriever.get_relevant_documents(query)\n",
" docs = retriever.invoke(query)\n",
" return [ALL_TOOLS[d.metadata[\"index\"]] for d in docs]"
]
},

View File

@@ -362,7 +362,7 @@
],
"source": [
"llm = OpenAI()\n",
"llm(query)"
"llm.invoke(query)"
]
},
{

View File

@@ -108,7 +108,7 @@
" return obs_message\n",
"\n",
" def _act(self):\n",
" act_message = self.model(self.message_history)\n",
" act_message = self.model.invoke(self.message_history)\n",
" self.message_history.append(act_message)\n",
" action = int(self.action_parser.parse(act_message.content)[\"action\"])\n",
" return action\n",

View File

@@ -206,7 +206,7 @@
" print(\"---RETRIEVE---\")\n",
" state_dict = state[\"keys\"]\n",
" question = state_dict[\"question\"]\n",
" documents = retriever.get_relevant_documents(question)\n",
" documents = retriever.invoke(question)\n",
" return {\"keys\": {\"documents\": documents, \"question\": question}}\n",
"\n",
"\n",

View File

@@ -213,7 +213,7 @@
" print(\"---RETRIEVE---\")\n",
" state_dict = state[\"keys\"]\n",
" question = state_dict[\"question\"]\n",
" documents = retriever.get_relevant_documents(question)\n",
" documents = retriever.invoke(question)\n",
" return {\"keys\": {\"documents\": documents, \"question\": question}}\n",
"\n",
"\n",

View File

@@ -435,7 +435,7 @@
" display(HTML(image_html))\n",
"\n",
"\n",
"docs = retriever.get_relevant_documents(\"Woman with children\", k=10)\n",
"docs = retriever.invoke(\"Woman with children\", k=10)\n",
"for doc in docs:\n",
" if is_base64(doc.page_content):\n",
" plt_img_base64(doc.page_content)\n",

View File

@@ -443,7 +443,7 @@
"\n",
"\n",
"query = \"Woman with children\"\n",
"docs = retriever.get_relevant_documents(query, k=10)\n",
"docs = retriever.invoke(query, k=10)\n",
"\n",
"for doc in docs:\n",
" if is_base64(doc.page_content):\n",

View File

@@ -74,7 +74,7 @@
" Applies the chatmodel to the message history\n",
" and returns the message string\n",
" \"\"\"\n",
" message = self.model(\n",
" message = self.model.invoke(\n",
" [\n",
" self.system_message,\n",
" HumanMessage(content=\"\\n\".join(self.message_history + [self.prefix])),\n",

View File

@@ -79,7 +79,7 @@
" Applies the chatmodel to the message history\n",
" and returns the message string\n",
" \"\"\"\n",
" message = self.model(\n",
" message = self.model.invoke(\n",
" [\n",
" self.system_message,\n",
" HumanMessage(content=\"\\n\".join(self.message_history + [self.prefix])),\n",
@@ -234,7 +234,7 @@
" termination_clause=self.termination_clause if self.stop else \"\",\n",
" )\n",
"\n",
" self.response = self.model(\n",
" self.response = self.model.invoke(\n",
" [\n",
" self.system_message,\n",
" HumanMessage(content=response_prompt),\n",
@@ -263,7 +263,7 @@
" speaker_names=speaker_names,\n",
" )\n",
"\n",
" choice_string = self.model(\n",
" choice_string = self.model.invoke(\n",
" [\n",
" self.system_message,\n",
" HumanMessage(content=choice_prompt),\n",
@@ -299,7 +299,7 @@
" ),\n",
" next_speaker=self.next_speaker,\n",
" )\n",
" message = self.model(\n",
" message = self.model.invoke(\n",
" [\n",
" self.system_message,\n",
" HumanMessage(content=next_prompt),\n",

View File

@@ -71,7 +71,7 @@
" Applies the chatmodel to the message history\n",
" and returns the message string\n",
" \"\"\"\n",
" message = self.model(\n",
" message = self.model.invoke(\n",
" [\n",
" self.system_message,\n",
" HumanMessage(content=\"\\n\".join(self.message_history + [self.prefix])),\n",
@@ -164,7 +164,7 @@
" message_history=\"\\n\".join(self.message_history),\n",
" recent_message=self.message_history[-1],\n",
" )\n",
" bid_string = self.model([SystemMessage(content=prompt)]).content\n",
" bid_string = self.model.invoke([SystemMessage(content=prompt)]).content\n",
" return bid_string"
]
},

View File

@@ -0,0 +1,872 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Oracle AI Vector Search with Document Processing\n",
"Oracle AI Vector Search is designed for Artificial Intelligence (AI) workloads that allows you to query data based on semantics, rather than keywords.\n",
"One of the biggest benefit of Oracle AI Vector Search is that semantic search on unstructured data can be combined with relational search on business data in one single system. This is not only powerful but also significantly more effective because you don't need to add a specialized vector database, eliminating the pain of data fragmentation between multiple systems.\n",
"\n",
"In addition, because Oracle has been building database technologies for so long, your vectors can benefit from all of Oracle Database's most powerful features, like the following:\n",
"\n",
" * Partitioning Support\n",
" * Real Application Clusters scalability\n",
" * Exadata smart scans\n",
" * Shard processing across geographically distributed databases\n",
" * Transactions\n",
" * Parallel SQL\n",
" * Disaster recovery\n",
" * Security\n",
" * Oracle Machine Learning\n",
" * Oracle Graph Database\n",
" * Oracle Spatial and Graph\n",
" * Oracle Blockchain\n",
" * JSON\n",
"\n",
"This guide demonstrates how Oracle AI Vector Search can be used with Langchain to serve an end-to-end RAG pipeline. This guide goes through examples of:\n",
"\n",
" * Loading the documents from various sources using OracleDocLoader\n",
" * Summarizing them within/outside the database using OracleSummary\n",
" * Generating embeddings for them within/outside the database using OracleEmbeddings\n",
" * Chunking them according to different requirements using Advanced Oracle Capabilities from OracleTextSplitter\n",
" * Storing and Indexing them in a Vector Store and querying them for queries in OracleVS"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Prerequisites\n",
"\n",
"Please install Oracle Python Client driver to use Langchain with Oracle AI Vector Search. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# pip install oracledb"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create Demo User\n",
"First, create a demo user with all the required privileges. "
]
},
{
"cell_type": "code",
"execution_count": 37,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Connection successful!\n",
"User setup done!\n"
]
}
],
"source": [
"import sys\n",
"\n",
"import oracledb\n",
"\n",
"# please update with your username, password, hostname and service_name\n",
"# please make sure this user has sufficient privileges to perform all below\n",
"username = \"\"\n",
"password = \"\"\n",
"dsn = \"\"\n",
"\n",
"try:\n",
" conn = oracledb.connect(user=username, password=password, dsn=dsn)\n",
" print(\"Connection successful!\")\n",
"\n",
" cursor = conn.cursor()\n",
" cursor.execute(\n",
" \"\"\"\n",
" begin\n",
" -- drop user\n",
" begin\n",
" execute immediate 'drop user testuser cascade';\n",
" exception\n",
" when others then\n",
" dbms_output.put_line('Error setting up user.');\n",
" end;\n",
" execute immediate 'create user testuser identified by testuser';\n",
" execute immediate 'grant connect, unlimited tablespace, create credential, create procedure, create any index to testuser';\n",
" execute immediate 'create or replace directory DEMO_PY_DIR as ''/scratch/hroy/view_storage/hroy_devstorage/demo/orachain''';\n",
" execute immediate 'grant read, write on directory DEMO_PY_DIR to public';\n",
" execute immediate 'grant create mining model to testuser';\n",
"\n",
" -- network access\n",
" begin\n",
" DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(\n",
" host => '*',\n",
" ace => xs$ace_type(privilege_list => xs$name_list('connect'),\n",
" principal_name => 'testuser',\n",
" principal_type => xs_acl.ptype_db));\n",
" end;\n",
" end;\n",
" \"\"\"\n",
" )\n",
" print(\"User setup done!\")\n",
" cursor.close()\n",
" conn.close()\n",
"except Exception as e:\n",
" print(\"User setup failed!\")\n",
" cursor.close()\n",
" conn.close()\n",
" sys.exit(1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Process Documents using Oracle AI\n",
"Let's think about a scenario that the users have some documents in Oracle Database or in a file system. They want to use the data for Oracle AI Vector Search using Langchain.\n",
"\n",
"For that, the users need to do some document preprocessing. The first step would be to read the documents, generate their summary(if needed) and then chunk/split them if needed. After that, they need to generate the embeddings for those chunks and store into Oracle AI Vector Store. Finally, the users will perform some semantic queries on those data. \n",
"\n",
"Oracle AI Vector Search Langchain library provides a range of document processing functionalities including document loading, splitting, generating summary and embeddings.\n",
"\n",
"In the following sections, we will go through how to use Oracle AI Langchain APIs to achieve each of these functionalities individually. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Connect to Demo User\n",
"The following sample code will show how to connect to Oracle Database. "
]
},
{
"cell_type": "code",
"execution_count": 45,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Connection successful!\n"
]
}
],
"source": [
"import sys\n",
"\n",
"import oracledb\n",
"\n",
"# please update with your username, password, hostname and service_name\n",
"username = \"\"\n",
"password = \"\"\n",
"dsn = \"\"\n",
"\n",
"try:\n",
" conn = oracledb.connect(user=username, password=password, dsn=dsn)\n",
" print(\"Connection successful!\")\n",
"except Exception as e:\n",
" print(\"Connection failed!\")\n",
" sys.exit(1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Populate a Demo Table\n",
"Create a demo table and insert some sample documents."
]
},
{
"cell_type": "code",
"execution_count": 46,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Table created and populated.\n"
]
}
],
"source": [
"try:\n",
" cursor = conn.cursor()\n",
"\n",
" drop_table_sql = \"\"\"drop table demo_tab\"\"\"\n",
" cursor.execute(drop_table_sql)\n",
"\n",
" create_table_sql = \"\"\"create table demo_tab (id number, data clob)\"\"\"\n",
" cursor.execute(create_table_sql)\n",
"\n",
" insert_row_sql = \"\"\"insert into demo_tab values (:1, :2)\"\"\"\n",
" rows_to_insert = [\n",
" (\n",
" 1,\n",
" \"If the answer to any preceding questions is yes, then the database stops the search and allocates space from the specified tablespace; otherwise, space is allocated from the database default shared temporary tablespace.\",\n",
" ),\n",
" (\n",
" 2,\n",
" \"A tablespace can be online (accessible) or offline (not accessible) whenever the database is open.\\nA tablespace is usually online so that its data is available to users. The SYSTEM tablespace and temporary tablespaces cannot be taken offline.\",\n",
" ),\n",
" (\n",
" 3,\n",
" \"The database stores LOBs differently from other data types. Creating a LOB column implicitly creates a LOB segment and a LOB index. The tablespace containing the LOB segment and LOB index, which are always stored together, may be different from the tablespace containing the table.\\nSometimes the database can store small amounts of LOB data in the table itself rather than in a separate LOB segment.\",\n",
" ),\n",
" ]\n",
" cursor.executemany(insert_row_sql, rows_to_insert)\n",
"\n",
" conn.commit()\n",
"\n",
" print(\"Table created and populated.\")\n",
" cursor.close()\n",
"except Exception as e:\n",
" print(\"Table creation failed.\")\n",
" cursor.close()\n",
" conn.close()\n",
" sys.exit(1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"\n",
"Now that we have a demo user and a demo table with some data, we just need to do one more setup. For embedding and summary, we have a few provider options that the users can choose from such as database, 3rd party providers like ocigenai, huggingface, openai, etc. If the users choose to use 3rd party provider, they need to create a credential with corresponding authentication information. On the other hand, if the users choose to use 'database' as provider, they need to load an onnx model to Oracle Database for embeddings; however, for summary, they don't need to do anything."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load ONNX Model\n",
"\n",
"To generate embeddings, Oracle provides a few provider options for users to choose from. The users can choose 'database' provider or some 3rd party providers like OCIGENAI, HuggingFace, etc.\n",
"\n",
"***Note*** If the users choose database option, they need to load an ONNX model to Oracle Database. The users do not need to load an ONNX model to Oracle Database if they choose to use 3rd party provider to generate embeddings.\n",
"\n",
"One of the core benefits of using an ONNX model is that the users do not need to transfer their data to 3rd party to generate embeddings. And also, since it does not involve any network or REST API calls, it may provide better performance.\n",
"\n",
"Here is the sample code to load an ONNX model to Oracle Database:"
]
},
{
"cell_type": "code",
"execution_count": 47,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"ONNX model loaded.\n"
]
}
],
"source": [
"from langchain_community.embeddings.oracleai import OracleEmbeddings\n",
"\n",
"# please update with your related information\n",
"# make sure that you have onnx file in the system\n",
"onnx_dir = \"DEMO_PY_DIR\"\n",
"onnx_file = \"tinybert.onnx\"\n",
"model_name = \"demo_model\"\n",
"\n",
"try:\n",
" OracleEmbeddings.load_onnx_model(conn, onnx_dir, onnx_file, model_name)\n",
" print(\"ONNX model loaded.\")\n",
"except Exception as e:\n",
" print(\"ONNX model loading failed!\")\n",
" sys.exit(1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create Credential\n",
"\n",
"On the other hand, if the users choose to use 3rd party provider to generate embeddings and summary, they need to create credential to access 3rd party provider's end points.\n",
"\n",
"***Note:*** The users do not need to create any credential if they choose to use 'database' provider to generate embeddings and summary. Should the users choose to 3rd party provider, they need to create credential for the 3rd party provider they want to use. \n",
"\n",
"Here is a sample example:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"try:\n",
" cursor = conn.cursor()\n",
" cursor.execute(\n",
" \"\"\"\n",
" declare\n",
" jo json_object_t;\n",
" begin\n",
" -- HuggingFace\n",
" dbms_vector_chain.drop_credential(credential_name => 'HF_CRED');\n",
" jo := json_object_t();\n",
" jo.put('access_token', '<access_token>');\n",
" dbms_vector_chain.create_credential(\n",
" credential_name => 'HF_CRED',\n",
" params => json(jo.to_string));\n",
"\n",
" -- OCIGENAI\n",
" dbms_vector_chain.drop_credential(credential_name => 'OCI_CRED');\n",
" jo := json_object_t();\n",
" jo.put('user_ocid','<user_ocid>');\n",
" jo.put('tenancy_ocid','<tenancy_ocid>');\n",
" jo.put('compartment_ocid','<compartment_ocid>');\n",
" jo.put('private_key','<private_key>');\n",
" jo.put('fingerprint','<fingerprint>');\n",
" dbms_vector_chain.create_credential(\n",
" credential_name => 'OCI_CRED',\n",
" params => json(jo.to_string));\n",
" end;\n",
" \"\"\"\n",
" )\n",
" cursor.close()\n",
" print(\"Credentials created.\")\n",
"except Exception as ex:\n",
" cursor.close()\n",
" raise"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load Documents\n",
"The users can load the documents from Oracle Database or a file system or both. They just need to set the loader parameters accordingly. Please refer to the Oracle AI Vector Search Guide book for complete information about these parameters.\n",
"\n",
"The main benefit of using OracleDocLoader is that it can handle 150+ different file formats. You don't need to use different types of loader for different file formats. Here is the list formats that we support: [Oracle Text Supported Document Formats](https://docs.oracle.com/en/database/oracle/oracle-database/23/ccref/oracle-text-supported-document-formats.html)\n",
"\n",
"The following sample code will show how to do that:"
]
},
{
"cell_type": "code",
"execution_count": 48,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Number of docs loaded: 3\n"
]
}
],
"source": [
"from langchain_community.document_loaders.oracleai import OracleDocLoader\n",
"from langchain_core.documents import Document\n",
"\n",
"# loading from Oracle Database table\n",
"# make sure you have the table with this specification\n",
"loader_params = {}\n",
"loader_params = {\n",
" \"owner\": \"testuser\",\n",
" \"tablename\": \"demo_tab\",\n",
" \"colname\": \"data\",\n",
"}\n",
"\n",
"\"\"\" load the docs \"\"\"\n",
"loader = OracleDocLoader(conn=conn, params=loader_params)\n",
"docs = loader.load()\n",
"\n",
"\"\"\" verify \"\"\"\n",
"print(f\"Number of docs loaded: {len(docs)}\")\n",
"# print(f\"Document-0: {docs[0].page_content}\") # content"
]
},
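{
"cell_type": "markdown",
"metadata": {},
"source": [
"Loading from a file system instead of a table only requires different loader parameters. The commented sketch below is illustrative; the parameter keys (`file`, `dir`) are assumptions, so check the Oracle AI Vector Search Guide book for the exact names."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Hypothetical file-system loading (parameter names are assumptions):\n",
"# loader_params = {\"file\": \"/path/to/a/single/document.pdf\"}\n",
"# loader_params = {\"dir\": \"/path/to/a/directory/of/documents\"}\n",
"# loader = OracleDocLoader(conn=conn, params=loader_params)\n",
"# docs = loader.load()"
]
},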
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Generate Summary\n",
"Now that the user loaded the documents, they may want to generate a summary for each document. The Oracle AI Vector Search Langchain library provides an API to do that. There are a few summary generation provider options including Database, OCIGENAI, HuggingFace and so on. The users can choose their preferred provider to generate a summary. Like before, they just need to set the summary parameters accordingly. Please refer to the Oracle AI Vector Search Guide book for complete information about these parameters."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"***Note:*** The users may need to set proxy if they want to use some 3rd party summary generation providers other than Oracle's in-house and default provider: 'database'. If you don't have proxy, please remove the proxy parameter when you instantiate the OracleSummary."
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [],
"source": [
"# proxy to be used when we instantiate summary and embedder object\n",
"proxy = \"\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The following sample code will show how to generate summary:"
]
},
{
"cell_type": "code",
"execution_count": 49,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Number of Summaries: 3\n"
]
}
],
"source": [
"from langchain_community.utilities.oracleai import OracleSummary\n",
"from langchain_core.documents import Document\n",
"\n",
"# using 'database' provider\n",
"summary_params = {\n",
" \"provider\": \"database\",\n",
" \"glevel\": \"S\",\n",
" \"numParagraphs\": 1,\n",
" \"language\": \"english\",\n",
"}\n",
"\n",
"# get the summary instance\n",
"# Remove proxy if not required\n",
"summ = OracleSummary(conn=conn, params=summary_params, proxy=proxy)\n",
"\n",
"list_summary = []\n",
"for doc in docs:\n",
" summary = summ.get_summary(doc.page_content)\n",
" list_summary.append(summary)\n",
"\n",
"\"\"\" verify \"\"\"\n",
"print(f\"Number of Summaries: {len(list_summary)}\")\n",
"# print(f\"Summary-0: {list_summary[0]}\") #content"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Split Documents\n",
"The documents can be in different sizes: small, medium, large, or very large. The users like to split/chunk their documents into smaller pieces to generate embeddings. There are lots of different splitting customizations the users can do. Please refer to the Oracle AI Vector Search Guide book for complete information about these parameters.\n",
"\n",
"The following sample code will show how to do that:"
]
},
{
"cell_type": "code",
"execution_count": 50,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Number of Chunks: 3\n"
]
}
],
"source": [
"from langchain_community.document_loaders.oracleai import OracleTextSplitter\n",
"from langchain_core.documents import Document\n",
"\n",
"# split by default parameters\n",
"splitter_params = {\"normalize\": \"all\"}\n",
"\n",
"\"\"\" get the splitter instance \"\"\"\n",
"splitter = OracleTextSplitter(conn=conn, params=splitter_params)\n",
"\n",
"list_chunks = []\n",
"for doc in docs:\n",
" chunks = splitter.split_text(doc.page_content)\n",
" list_chunks.extend(chunks)\n",
"\n",
"\"\"\" verify \"\"\"\n",
"print(f\"Number of Chunks: {len(list_chunks)}\")\n",
"# print(f\"Chunk-0: {list_chunks[0]}\") # content"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Generate Embeddings\n",
"Now that the documents are chunked as per requirements, the users may want to generate embeddings for these chunks. Oracle AI Vector Search provides a number of ways to generate embeddings. The users can load an ONNX embedding model to Oracle Database and use it to generate embeddings or use some 3rd party API's end points to generate embeddings. Please refer to the Oracle AI Vector Search Guide book for complete information about these parameters."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"***Note:*** The users may need to set proxy if they want to use some 3rd party embedding generation providers other than 'database' provider (aka using ONNX model)."
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
"# proxy to be used when we instantiate summary and embedder object\n",
"proxy = \"\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The following sample code will show how to generate embeddings:"
]
},
{
"cell_type": "code",
"execution_count": 51,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Number of embeddings: 3\n"
]
}
],
"source": [
"from langchain_community.embeddings.oracleai import OracleEmbeddings\n",
"from langchain_core.documents import Document\n",
"\n",
"# using ONNX model loaded to Oracle Database\n",
"embedder_params = {\"provider\": \"database\", \"model\": \"demo_model\"}\n",
"\n",
"# get the embedding instance\n",
"# Remove proxy if not required\n",
"embedder = OracleEmbeddings(conn=conn, params=embedder_params, proxy=proxy)\n",
"\n",
"embeddings = []\n",
"for doc in docs:\n",
" chunks = splitter.split_text(doc.page_content)\n",
" for chunk in chunks:\n",
" embed = embedder.embed_query(chunk)\n",
" embeddings.append(embed)\n",
"\n",
"\"\"\" verify \"\"\"\n",
"print(f\"Number of embeddings: {len(embeddings)}\")\n",
"# print(f\"Embedding-0: {embeddings[0]}\") # content"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Oracle AI Vector Store\n",
"Now that you know how to use Oracle AI Langchain library APIs individually to process the documents, let us show how to integrate with Oracle AI Vector Store to facilitate the semantic searches."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"First, let's import all the dependencies."
]
},
{
"cell_type": "code",
"execution_count": 52,
"metadata": {},
"outputs": [],
"source": [
"import sys\n",
"\n",
"import oracledb\n",
"from langchain_community.document_loaders.oracleai import (\n",
" OracleDocLoader,\n",
" OracleTextSplitter,\n",
")\n",
"from langchain_community.embeddings.oracleai import OracleEmbeddings\n",
"from langchain_community.utilities.oracleai import OracleSummary\n",
"from langchain_community.vectorstores import oraclevs\n",
"from langchain_community.vectorstores.oraclevs import OracleVS\n",
"from langchain_community.vectorstores.utils import DistanceStrategy\n",
"from langchain_core.documents import Document"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, let's combine all document processing stages together. Here is the sample code below:"
]
},
{
"cell_type": "code",
"execution_count": 53,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Connection successful!\n",
"ONNX model loaded.\n",
"Number of total chunks with metadata: 3\n"
]
}
],
"source": [
"\"\"\"\n",
"In this sample example, we will use 'database' provider for both summary and embeddings.\n",
"So, we don't need to do the followings:\n",
" - set proxy for 3rd party providers\n",
" - create credential for 3rd party providers\n",
"\n",
"If you choose to use 3rd party provider, \n",
"please follow the necessary steps for proxy and credential.\n",
"\"\"\"\n",
"\n",
"# oracle connection\n",
"# please update with your username, password, hostname, and service_name\n",
"username = \"\"\n",
"password = \"\"\n",
"dsn = \"\"\n",
"\n",
"try:\n",
" conn = oracledb.connect(user=username, password=password, dsn=dsn)\n",
" print(\"Connection successful!\")\n",
"except Exception as e:\n",
" print(\"Connection failed!\")\n",
" sys.exit(1)\n",
"\n",
"\n",
"# load onnx model\n",
"# please update with your related information\n",
"onnx_dir = \"DEMO_PY_DIR\"\n",
"onnx_file = \"tinybert.onnx\"\n",
"model_name = \"demo_model\"\n",
"try:\n",
" OracleEmbeddings.load_onnx_model(conn, onnx_dir, onnx_file, model_name)\n",
" print(\"ONNX model loaded.\")\n",
"except Exception as e:\n",
" print(\"ONNX model loading failed!\")\n",
" sys.exit(1)\n",
"\n",
"\n",
"# params\n",
"# please update necessary fields with related information\n",
"loader_params = {\n",
" \"owner\": \"testuser\",\n",
" \"tablename\": \"demo_tab\",\n",
" \"colname\": \"data\",\n",
"}\n",
"summary_params = {\n",
" \"provider\": \"database\",\n",
" \"glevel\": \"S\",\n",
" \"numParagraphs\": 1,\n",
" \"language\": \"english\",\n",
"}\n",
"splitter_params = {\"normalize\": \"all\"}\n",
"embedder_params = {\"provider\": \"database\", \"model\": \"demo_model\"}\n",
"\n",
"# instantiate loader, summary, splitter, and embedder\n",
"loader = OracleDocLoader(conn=conn, params=loader_params)\n",
"summary = OracleSummary(conn=conn, params=summary_params)\n",
"splitter = OracleTextSplitter(conn=conn, params=splitter_params)\n",
"embedder = OracleEmbeddings(conn=conn, params=embedder_params)\n",
"\n",
"# process the documents\n",
"chunks_with_mdata = []\n",
"for id, doc in enumerate(docs, start=1):\n",
" summ = summary.get_summary(doc.page_content)\n",
" chunks = splitter.split_text(doc.page_content)\n",
" for ic, chunk in enumerate(chunks, start=1):\n",
" chunk_metadata = doc.metadata.copy()\n",
" chunk_metadata[\"id\"] = chunk_metadata[\"_oid\"] + \"$\" + str(id) + \"$\" + str(ic)\n",
" chunk_metadata[\"document_id\"] = str(id)\n",
" chunk_metadata[\"document_summary\"] = str(summ[0])\n",
" chunks_with_mdata.append(\n",
" Document(page_content=str(chunk), metadata=chunk_metadata)\n",
" )\n",
"\n",
"\"\"\" verify \"\"\"\n",
"print(f\"Number of total chunks with metadata: {len(chunks_with_mdata)}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"At this point, we have processed the documents and generated chunks with metadata. Next, we will create Oracle AI Vector Store with those chunks.\n",
"\n",
"Here is the sample code how to do that:"
]
},
{
"cell_type": "code",
"execution_count": 55,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Vector Store Table: oravs\n"
]
}
],
"source": [
"# create Oracle AI Vector Store\n",
"vectorstore = OracleVS.from_documents(\n",
" chunks_with_mdata,\n",
" embedder,\n",
" client=conn,\n",
" table_name=\"oravs\",\n",
" distance_strategy=DistanceStrategy.DOT_PRODUCT,\n",
")\n",
"\n",
"\"\"\" verify \"\"\"\n",
"print(f\"Vector Store Table: {vectorstore.table_name}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The above example creates a vector store with DOT_PRODUCT distance strategy. \n",
"\n",
"However, the users can create Oracle AI Vector Store provides different distance strategies. Please see the [comprehensive guide](/docs/integrations/vectorstores/oracle) for more information."
]
},
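{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, here is a minimal sketch of the same call with a different strategy (the table name `oravs_cosine` is just an illustrative choice so it does not clash with the `oravs` table above):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative variant: build a second vector store using cosine distance\n",
"vectorstore_cosine = OracleVS.from_documents(\n",
"    chunks_with_mdata,\n",
"    embedder,\n",
"    client=conn,\n",
"    table_name=\"oravs_cosine\",\n",
"    distance_strategy=DistanceStrategy.COSINE,\n",
")"
]
},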
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now that we have embeddings stored in vector stores, let's create an index on them to get better semantic search performance during query time.\n",
"\n",
"***Note*** If you are getting some insufficient memory error, please increase ***vector_memory_size*** in your database.\n",
"\n",
"Here is the sample code to create an index:"
]
},
{
"cell_type": "code",
"execution_count": 56,
"metadata": {},
"outputs": [],
"source": [
"oraclevs.create_index(\n",
" conn, vectorstore, params={\"idx_name\": \"hnsw_oravs\", \"idx_type\": \"HNSW\"}\n",
")\n",
"\n",
"print(\"Index created.\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The above example creates a default HNSW index on the embeddings stored in 'oravs' table. The users can set different parameters as per their requirements. Please refer to the Oracle AI Vector Search Guide book for complete information about these parameters.\n",
"\n",
"Also, there are different types of vector indices that the users can create. Please see the [comprehensive guide](/docs/integrations/vectorstores/oracle) for more information.\n"
]
},
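{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a hedged sketch of parameter tuning (the keys `accuracy` and `parallel` below are assumptions; verify the exact names in the Oracle AI Vector Search Guide book before using them):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Hypothetical tuned HNSW index; parameter names other than idx_name/idx_type are assumptions.\n",
"# oraclevs.create_index(\n",
"#     conn,\n",
"#     vectorstore,\n",
"#     params={\"idx_name\": \"hnsw_oravs_tuned\", \"idx_type\": \"HNSW\", \"accuracy\": 97, \"parallel\": 8},\n",
"# )"
]
},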
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Perform Semantic Search\n",
"All set!\n",
"\n",
"We have processed the documents, stored them to vector store, and then created index to get better query performance. Now let's do some semantic searches.\n",
"\n",
"Here is the sample code for this:"
]
},
{
"cell_type": "code",
"execution_count": 58,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[Document(page_content='The database stores LOBs differently from other data types. Creating a LOB column implicitly creates a LOB segment and a LOB index. The tablespace containing the LOB segment and LOB index, which are always stored together, may be different from the tablespace containing the table. Sometimes the database can store small amounts of LOB data in the table itself rather than in a separate LOB segment.', metadata={'_oid': '662f2f257677f3c2311a8ff999fd34e5', '_rowid': 'AAAR/xAAEAAAAAnAAC', 'id': '662f2f257677f3c2311a8ff999fd34e5$3$1', 'document_id': '3', 'document_summary': 'Sometimes the database can store small amounts of LOB data in the table itself rather than in a separate LOB segment.\\n\\n'})]\n",
"[]\n",
"[(Document(page_content='The database stores LOBs differently from other data types. Creating a LOB column implicitly creates a LOB segment and a LOB index. The tablespace containing the LOB segment and LOB index, which are always stored together, may be different from the tablespace containing the table. Sometimes the database can store small amounts of LOB data in the table itself rather than in a separate LOB segment.', metadata={'_oid': '662f2f257677f3c2311a8ff999fd34e5', '_rowid': 'AAAR/xAAEAAAAAnAAC', 'id': '662f2f257677f3c2311a8ff999fd34e5$3$1', 'document_id': '3', 'document_summary': 'Sometimes the database can store small amounts of LOB data in the table itself rather than in a separate LOB segment.\\n\\n'}), 0.055675752460956573)]\n",
"[]\n",
"[Document(page_content='If the answer to any preceding questions is yes, then the database stops the search and allocates space from the specified tablespace; otherwise, space is allocated from the database default shared temporary tablespace.', metadata={'_oid': '662f2f253acf96b33b430b88699490a2', '_rowid': 'AAAR/xAAEAAAAAnAAA', 'id': '662f2f253acf96b33b430b88699490a2$1$1', 'document_id': '1', 'document_summary': 'If the answer to any preceding questions is yes, then the database stops the search and allocates space from the specified tablespace; otherwise, space is allocated from the database default shared temporary tablespace.\\n\\n'})]\n",
"[Document(page_content='If the answer to any preceding questions is yes, then the database stops the search and allocates space from the specified tablespace; otherwise, space is allocated from the database default shared temporary tablespace.', metadata={'_oid': '662f2f253acf96b33b430b88699490a2', '_rowid': 'AAAR/xAAEAAAAAnAAA', 'id': '662f2f253acf96b33b430b88699490a2$1$1', 'document_id': '1', 'document_summary': 'If the answer to any preceding questions is yes, then the database stops the search and allocates space from the specified tablespace; otherwise, space is allocated from the database default shared temporary tablespace.\\n\\n'})]\n"
]
}
],
"source": [
"query = \"What is Oracle AI Vector Store?\"\n",
"filter = {\"document_id\": [\"1\"]}\n",
"\n",
"# Similarity search without a filter\n",
"print(vectorstore.similarity_search(query, 1))\n",
"\n",
"# Similarity search with a filter\n",
"print(vectorstore.similarity_search(query, 1, filter=filter))\n",
"\n",
"# Similarity search with relevance score\n",
"print(vectorstore.similarity_search_with_score(query, 1))\n",
"\n",
"# Similarity search with relevance score with filter\n",
"print(vectorstore.similarity_search_with_score(query, 1, filter=filter))\n",
"\n",
"# Max marginal relevance search\n",
"print(vectorstore.max_marginal_relevance_search(query, 1, fetch_k=20, lambda_mult=0.5))\n",
"\n",
"# Max marginal relevance search with filter\n",
"print(\n",
" vectorstore.max_marginal_relevance_search(\n",
" query, 1, fetch_k=20, lambda_mult=0.5, filter=filter\n",
" )\n",
")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
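
The notebook above notes that index parameters and index types can be customized. Below is a minimal, hypothetical sketch (not from the notebook itself): it assumes the `oraclevs.create_index` helper, an open `oracledb` connection named `connection`, and the `vectorstore` instance created earlier; the parameter keys should be verified against the Oracle AI Vector Search guide.

```python
# Hypothetical sketch: build an IVF index with non-default parameters instead of
# the default HNSW index created above. Parameter keys are assumptions to be
# checked against the Oracle AI Vector Search documentation.
from langchain_community.vectorstores import oraclevs

oraclevs.create_index(
    connection,        # open oracledb connection used by the vector store
    vectorstore,       # the OracleVS instance created earlier in the notebook
    params={
        "idx_name": "ivf_idx1",  # name of the index to create
        "idx_type": "IVF",       # index type: "HNSW" (default) or "IVF"
        "accuracy": 90,          # target accuracy percentage
        "parallel": 8,           # degree of parallelism for the index build
    },
)
```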

View File

@@ -129,7 +129,7 @@
" return obs_message\n",
"\n",
" def _act(self):\n",
" act_message = self.model(self.message_history)\n",
" act_message = self.model.invoke(self.message_history)\n",
" self.message_history.append(act_message)\n",
" action = int(self.action_parser.parse(act_message.content)[\"action\"])\n",
" return action\n",

View File

@@ -168,7 +168,7 @@
"\n",
"retriever = vector_store.as_retriever(search_type=\"similarity\", search_kwargs={\"k\": 3})\n",
"\n",
"retrieved_docs = retriever.get_relevant_documents(\"<your question>\")\n",
"retrieved_docs = retriever.invoke(\"<your question>\")\n",
"\n",
"print(retrieved_docs[0].page_content)\n",
"\n",

View File

@@ -0,0 +1,80 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# RAG using Upstage Layout Analysis and Groundedness Check\n",
"This example illustrates RAG using [Upstage](https://python.langchain.com/docs/integrations/providers/upstage/) Layout Analysis and Groundedness Check."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from typing import List\n",
"\n",
"from langchain_community.vectorstores import DocArrayInMemorySearch\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain_core.runnables.base import RunnableSerializable\n",
"from langchain_upstage import (\n",
" ChatUpstage,\n",
" UpstageEmbeddings,\n",
" UpstageGroundednessCheck,\n",
" UpstageLayoutAnalysisLoader,\n",
")\n",
"\n",
"model = ChatUpstage()\n",
"\n",
"files = [\"/PATH/TO/YOUR/FILE.pdf\", \"/PATH/TO/YOUR/FILE2.pdf\"]\n",
"\n",
"loader = UpstageLayoutAnalysisLoader(file_path=files, split=\"element\")\n",
"\n",
"docs = loader.load()\n",
"\n",
"vectorstore = DocArrayInMemorySearch.from_documents(docs, embedding=UpstageEmbeddings())\n",
"retriever = vectorstore.as_retriever()\n",
"\n",
"template = \"\"\"Answer the question based only on the following context:\n",
"{context}\n",
"\n",
"Question: {question}\n",
"\"\"\"\n",
"prompt = ChatPromptTemplate.from_template(template)\n",
"output_parser = StrOutputParser()\n",
"\n",
"retrieved_docs = retriever.get_relevant_documents(\"How many parameters in SOLAR model?\")\n",
"\n",
"groundedness_check = UpstageGroundednessCheck()\n",
"groundedness = \"\"\n",
"while groundedness != \"grounded\":\n",
" chain: RunnableSerializable = RunnablePassthrough() | prompt | model | output_parser\n",
"\n",
" result = chain.invoke(\n",
" {\n",
" \"context\": retrieved_docs,\n",
" \"question\": \"How many parameters in SOLAR model?\",\n",
" }\n",
" )\n",
"\n",
" groundedness = groundedness_check.invoke(\n",
" {\n",
" \"context\": retrieved_docs,\n",
" \"answer\": result,\n",
" }\n",
" )"
]
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
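
The loop in the notebook above re-invokes the chain until the groundedness check returns "grounded", which can spin indefinitely if the model never produces a grounded answer. A bounded variant is sketched below; it is hypothetical and assumes the same `chain`, `groundedness_check`, and `retrieved_docs` objects defined in the notebook.

```python
# Hypothetical bounded retry loop for the groundedness check above.
MAX_ATTEMPTS = 3
result = None
for attempt in range(MAX_ATTEMPTS):
    result = chain.invoke(
        {"context": retrieved_docs, "question": "How many parameters in SOLAR model?"}
    )
    groundedness = groundedness_check.invoke(
        {"context": retrieved_docs, "answer": result}
    )
    if groundedness == "grounded":
        break  # accept the first answer judged grounded
```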

View File

@@ -355,15 +355,15 @@
"metadata": {},
"outputs": [],
"source": [
"attribute_info[-2][\n",
" \"description\"\n",
"] += f\". Valid values are {sorted(latest_price['starrating'].value_counts().index.tolist())}\"\n",
"attribute_info[3][\n",
" \"description\"\n",
"] += f\". Valid values are {sorted(latest_price['maxoccupancy'].value_counts().index.tolist())}\"\n",
"attribute_info[-3][\n",
" \"description\"\n",
"] += f\". Valid values are {sorted(latest_price['country'].value_counts().index.tolist())}\""
"attribute_info[-2][\"description\"] += (\n",
" f\". Valid values are {sorted(latest_price['starrating'].value_counts().index.tolist())}\"\n",
")\n",
"attribute_info[3][\"description\"] += (\n",
" f\". Valid values are {sorted(latest_price['maxoccupancy'].value_counts().index.tolist())}\"\n",
")\n",
"attribute_info[-3][\"description\"] += (\n",
" f\". Valid values are {sorted(latest_price['country'].value_counts().index.tolist())}\"\n",
")"
]
},
{
@@ -688,9 +688,9 @@
"metadata": {},
"outputs": [],
"source": [
"attribute_info[-3][\n",
" \"description\"\n",
"] += \". NOTE: Only use the 'eq' operator if a specific country is mentioned. If a region is mentioned, include all relevant countries in filter.\"\n",
"attribute_info[-3][\"description\"] += (\n",
" \". NOTE: Only use the 'eq' operator if a specific country is mentioned. If a region is mentioned, include all relevant countries in filter.\"\n",
")\n",
"chain = load_query_constructor_runnable(\n",
" ChatOpenAI(model=\"gpt-3.5-turbo\", temperature=0),\n",
" doc_contents,\n",
@@ -1227,7 +1227,7 @@
}
],
"source": [
"results = retriever.get_relevant_documents(\n",
"results = retriever.invoke(\n",
" \"I want to stay somewhere highly rated along the coast. I want a room with a patio and a fireplace.\"\n",
")\n",
"for res in results:\n",

View File

@@ -84,7 +84,7 @@
" Applies the chatmodel to the message history\n",
" and returns the message string\n",
" \"\"\"\n",
" message = self.model(\n",
" message = self.model.invoke(\n",
" [\n",
" self.system_message,\n",
" HumanMessage(content=\"\\n\".join(self.message_history + [self.prefix])),\n",

View File

@@ -70,7 +70,7 @@
" Applies the chatmodel to the message history\n",
" and returns the message string\n",
" \"\"\"\n",
" message = self.model(\n",
" message = self.model.invoke(\n",
" [\n",
" self.system_message,\n",
" HumanMessage(content=\"\\n\".join(self.message_history + [self.prefix])),\n",

1
docs/.gitignore vendored
View File

@@ -1,2 +1,3 @@
/.quarto/
src/supabase.d.ts
build

View File

@@ -1,24 +0,0 @@
#!/usr/bin/env bash
set -o errexit
set -o nounset
set -o pipefail
set -o xtrace
SCRIPT_DIR="$(cd "$(dirname "$0")"; pwd)"
cd "${SCRIPT_DIR}"
mkdir -p ../_dist
rsync -ruv --exclude node_modules --exclude api_reference --exclude .venv --exclude .docusaurus . ../_dist
cd ../_dist
poetry run python scripts/model_feat_table.py
cp ../cookbook/README.md src/pages/cookbook.mdx
mkdir -p docs/templates
cp ../templates/docs/INDEX.md docs/templates/index.md
poetry run python scripts/copy_templates.py
wget -q https://raw.githubusercontent.com/langchain-ai/langserve/main/README.md -O docs/langserve.md
wget -q https://raw.githubusercontent.com/langchain-ai/langgraph/main/README.md -O docs/langgraph.md
yarn
poetry run quarto preview docs

82
docs/Makefile Normal file
View File

@@ -0,0 +1,82 @@
# we build the docs in these stages:
# 1. install vercel and python dependencies
# 2. copy files from "source dir" to "intermediate dir"
# 3. generate files like model feat table, etc. in "intermediate dir"
# 4. copy files to their right spots (e.g. langserve readme) in "intermediate dir"
# 5. build the docs from "intermediate dir" to "output dir"
SOURCE_DIR = docs/
INTERMEDIATE_DIR = build/intermediate/docs
OUTPUT_NEW_DIR = build/output-new
OUTPUT_NEW_DOCS_DIR = $(OUTPUT_NEW_DIR)/docs
PYTHON = .venv/bin/python
PARTNER_DEPS_LIST := $(shell find ../libs/partners -mindepth 1 -maxdepth 1 -type d -exec test -e "{}/pyproject.toml" \; -print | grep -vE "airbyte|ibm|ai21" | tr '\n' ' ')
PORT ?= 3001
clean:
rm -rf build
install-vercel-deps:
yum -y update
yum install gcc bzip2-devel libffi-devel zlib-devel wget tar gzip rsync -y
install-py-deps:
python3 -m venv .venv
$(PYTHON) -m pip install --upgrade pip
$(PYTHON) -m pip install --upgrade uv
$(PYTHON) -m uv pip install -r vercel_requirements.txt
$(PYTHON) -m uv pip install --editable $(PARTNER_DEPS_LIST)
generate-files:
mkdir -p $(INTERMEDIATE_DIR)
cp -r $(SOURCE_DIR)/* $(INTERMEDIATE_DIR)
mkdir -p $(INTERMEDIATE_DIR)/templates
cp ../templates/docs/INDEX.md $(INTERMEDIATE_DIR)/templates/index.md
cp ../cookbook/README.md $(INTERMEDIATE_DIR)/cookbook.mdx
$(PYTHON) scripts/model_feat_table.py $(INTERMEDIATE_DIR)
$(PYTHON) scripts/copy_templates.py $(INTERMEDIATE_DIR)
wget -q https://raw.githubusercontent.com/langchain-ai/langserve/main/README.md -O $(INTERMEDIATE_DIR)/langserve.md
$(PYTHON) scripts/resolve_local_links.py $(INTERMEDIATE_DIR)/langserve.md https://github.com/langchain-ai/langserve/tree/main/
copy-infra:
mkdir -p $(OUTPUT_NEW_DIR)
cp -r src $(OUTPUT_NEW_DIR)
cp vercel.json $(OUTPUT_NEW_DIR)
cp babel.config.js $(OUTPUT_NEW_DIR)
cp -r data $(OUTPUT_NEW_DIR)
cp docusaurus.config.js $(OUTPUT_NEW_DIR)
cp package.json $(OUTPUT_NEW_DIR)
cp sidebars.js $(OUTPUT_NEW_DIR)
cp -r static $(OUTPUT_NEW_DIR)
cp yarn.lock $(OUTPUT_NEW_DIR)
render:
$(PYTHON) scripts/notebook_convert.py $(INTERMEDIATE_DIR) $(OUTPUT_NEW_DOCS_DIR)
md-sync:
rsync -avm --include="*/" --include="*.mdx" --include="*.md" --include="*.png" --exclude="*" $(INTERMEDIATE_DIR)/ $(OUTPUT_NEW_DOCS_DIR)
generate-references:
$(PYTHON) scripts/generate_api_reference_links.py --docs_dir $(OUTPUT_NEW_DOCS_DIR)
build: install-py-deps generate-files copy-infra render md-sync generate-references
vercel-build: install-vercel-deps build
rm -rf docs
mv $(OUTPUT_NEW_DOCS_DIR) docs
rm -rf build
yarn run docusaurus build
mv build v0.1
mkdir build
mv v0.1 build
mv build/v0.1/404.html build
start:
cd $(OUTPUT_NEW_DIR) && yarn && yarn start --port=$(PORT)

File diff suppressed because one or more lines are too long

View File

@@ -1,76 +0,0 @@
/* eslint-disable prefer-template */
/* eslint-disable no-param-reassign */
// eslint-disable-next-line import/no-extraneous-dependencies
const babel = require("@babel/core");
const path = require("path");
const fs = require("fs");
/**
*
* @param {string|Buffer} content Content of the resource file
* @param {object} [map] SourceMap data consumable by https://github.com/mozilla/source-map
* @param {any} [meta] Meta data, could be anything
*/
async function webpackLoader(content, map, meta) {
const cb = this.async();
if (!this.resourcePath.endsWith(".ts")) {
cb(null, JSON.stringify({ content, imports: [] }), map, meta);
return;
}
try {
const result = await babel.parseAsync(content, {
sourceType: "module",
filename: this.resourcePath,
});
const imports = [];
result.program.body.forEach((node) => {
if (node.type === "ImportDeclaration") {
const source = node.source.value;
if (!source.startsWith("langchain")) {
return;
}
node.specifiers.forEach((specifier) => {
if (specifier.type === "ImportSpecifier") {
const local = specifier.local.name;
const imported = specifier.imported.name;
imports.push({ local, imported, source });
} else {
throw new Error("Unsupported import type");
}
});
}
});
imports.forEach((imp) => {
const { imported, source } = imp;
const moduleName = source.split("/").slice(1).join("_");
const docsPath = path.resolve(__dirname, "docs", "api", moduleName);
const available = fs.readdirSync(docsPath, { withFileTypes: true });
const found = available.find(
(dirent) =>
dirent.isDirectory() &&
fs.existsSync(path.resolve(docsPath, dirent.name, imported + ".md"))
);
if (found) {
imp.docs =
"/" + path.join("docs", "api", moduleName, found.name, imported);
} else {
throw new Error(
`Could not find docs for ${source}.${imported} in docs/api/`
);
}
});
cb(null, JSON.stringify({ content, imports }), map, meta);
} catch (err) {
cb(err);
}
}
module.exports = webpackLoader;

File diff suppressed because it is too large Load Diff

View File

@@ -7,9 +7,9 @@
"source": [
"# Create a runnable with the @chain decorator\n",
"\n",
"You can also turn an arbitrary function into a chain by adding a `@chain` decorator. This is functionaly equivalent to wrapping in a [`RunnableLambda`](/docs/expression_language/primitives/functions).\n",
"You can also turn an arbitrary function into a chain by adding a `@chain` decorator. This is functionally equivalent to wrapping in a [`RunnableLambda`](/docs/expression_language/primitives/functions).\n",
"\n",
"This will have the benefit of improved observability by tracing your chain correctly. Any calls to runnables inside this function will be traced as nested childen.\n",
"This will have the benefit of improved observability by tracing your chain correctly. Any calls to runnables inside this function will be traced as nested children.\n",
"\n",
"It will also allow you to use this as any other runnable, compose it in chain, etc.\n",
"\n",

View File

@@ -29,9 +29,9 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain_community.vectorstores import FAISS\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain_openai import ChatOpenAI, OpenAIEmbeddings"
]

View File

@@ -4,6 +4,7 @@ sidebar_class_name: hidden
# LangChain Expression Language (LCEL)
LangChain Expression Language, or LCEL, is a declarative way to easily compose chains together.
LCEL was designed from day 1 to **support putting prototypes in production, with no code changes**, from the simplest “prompt + LLM” chain to the most complex chains (we've seen folks successfully run LCEL chains with 100s of steps in production). To highlight a few of the reasons you might want to use LCEL:
@@ -11,7 +12,7 @@ LCEL was designed from day 1 to **support putting prototypes in production, with
When you build your chains with LCEL you get the best possible time-to-first-token (time elapsed until the first chunk of output comes out). For some chains this means that, e.g., we stream tokens straight from an LLM to a streaming output parser, and you get back parsed, incremental chunks of output at the same rate as the LLM provider outputs the raw tokens.
[**Async support**](/docs/expression_language/interface)
Any chain built with LCEL can be called both with the synchronous API (eg. in your Jupyter notebook while prototyping) as well as with the asynchronous API (eg. in a [LangServe](/docs/langsmith) server). This enables using the same code for prototypes and in production, with great performance, and the ability to handle many concurrent requests in the same server.
Any chain built with LCEL can be called both with the synchronous API (eg. in your Jupyter notebook while prototyping) as well as with the asynchronous API (eg. in a [LangServe](/docs/langserve) server). This enables using the same code for prototypes and in production, with great performance, and the ability to handle many concurrent requests in the same server.
[**Optimized parallel execution**](/docs/expression_language/primitives/parallel)
Whenever your LCEL chains have steps that can be executed in parallel (eg if you fetch documents from multiple retrievers) we automatically do it, both in the sync and the async interfaces, for the smallest possible latency.
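
To make the streaming, async, and composition points above concrete, here is a minimal hedged sketch of a single LCEL chain used three ways; it assumes an OpenAI API key is configured and uses `ChatOpenAI` purely as an example model.

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Tell me a short fact about {topic}")
chain = prompt | ChatOpenAI(model="gpt-3.5-turbo") | StrOutputParser()

chain.invoke({"topic": "otters"})             # synchronous call
for chunk in chain.stream({"topic": "otters"}):
    print(chunk, end="", flush=True)          # parsed chunks as tokens arrive
# await chain.ainvoke({"topic": "otters"})    # same chain via the async API
```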

View File

@@ -16,6 +16,7 @@
"id": "9a9acd2e",
"metadata": {},
"source": [
"To make it as easy as possible to create custom chains, we've implemented a [\"Runnable\"](https://api.python.langchain.com/en/stable/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable) protocol. Many LangChain components implement the `Runnable` protocol, including chat models, LLMs, output parsers, retrievers, prompt templates, and more. There are also several useful primitives for working with runnables, which you can read about [in this section](/docs/expression_language/primitives).\n",
"\n",
"This is a standard interface, which makes it easy to define custom chains as well as invoke them in a standard way. \n",

View File

@@ -63,7 +63,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts import PromptTemplate\n",
"from langchain_core.prompts import PromptTemplate\n",
"from langchain_core.runnables import ConfigurableField\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
@@ -285,8 +285,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts import PromptTemplate\n",
"from langchain_community.chat_models import ChatAnthropic\n",
"from langchain_core.prompts import PromptTemplate\n",
"from langchain_core.runnables import ConfigurableField\n",
"from langchain_openai import ChatOpenAI"
]

View File

@@ -3,16 +3,18 @@
{
"cell_type": "raw",
"id": "bc346658-6820-413a-bd8f-11bd3082fe43",
"metadata": {},
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"sidebar_position: 0.5\n",
"title: Advantages of LCEL\n",
"---\n",
"\n",
"```{=mdx}\n",
"import { ColumnContainer, Column } from \"@theme/Columns\";\n",
"```"
"import { ColumnContainer, Column } from \"@theme/Columns\";"
]
},
{
@@ -20,6 +22,7 @@
"id": "919a5ae2-ed21-4923-b98f-723c111bac67",
"metadata": {},
"source": [
"\n",
":::{.callout-tip} \n",
"We recommend reading the LCEL [Get started](/docs/expression_language/get_started) section first.\n",
":::"
@@ -56,13 +59,10 @@
"## Invoke\n",
"In the simplest case, we just want to pass in a topic string and get back a joke string:\n",
"\n",
"```{=mdx}\n",
"<ColumnContainer>\n",
"\n",
"<Column>\n",
"\n",
"```\n",
"\n",
"#### Without LCEL\n"
]
},
@@ -102,11 +102,9 @@
"metadata": {},
"source": [
"\n",
"```{=mdx}\n",
"</Column>\n",
"\n",
"<Column>\n",
"```\n",
"\n",
"#### LCEL\n",
"\n"
@@ -146,18 +144,15 @@
"metadata": {},
"source": [
"\n",
"```{=mdx}\n",
"</Column>\n",
"</ColumnContainer>\n",
"```\n",
"\n",
"## Stream\n",
"If we want to stream results instead, we'll need to change our function:\n",
"\n",
"```{=mdx}\n",
"\n",
"<ColumnContainer>\n",
"<Column>\n",
"```\n",
"\n",
"#### Without LCEL\n",
"\n"
@@ -198,11 +193,10 @@
"id": "f8e36b0e-c7dc-4130-a51b-189d4b756c7f",
"metadata": {},
"source": [
"```{=mdx}\n",
"</Column>\n",
"\n",
"<Column>\n",
"```\n",
"\n",
"#### LCEL\n",
"\n"
]
@@ -223,19 +217,18 @@
"id": "b9b41e78-ddeb-44d0-a58b-a0ea0c99a761",
"metadata": {},
"source": [
"```{=mdx}\n",
"</Column>\n",
"</ColumnContainer>\n",
"```\n",
"\n",
"\n",
"## Batch\n",
"\n",
"If we want to run on a batch of inputs in parallel, we'll again need a new function:\n",
"\n",
"```{=mdx}\n",
"\n",
"<ColumnContainer>\n",
"<Column>\n",
"```\n",
"\n",
"\n",
"#### Without LCEL\n",
"\n"
@@ -263,11 +256,11 @@
"id": "9b3e9d34-6775-43c1-93d8-684b58e341ab",
"metadata": {},
"source": [
"```{=mdx}\n",
"\n",
"</Column>\n",
"\n",
"<Column>\n",
"```\n",
"\n",
"#### LCEL\n",
"\n"
]
@@ -287,18 +280,14 @@
"id": "cc5ba36f-eec1-4fc1-8cfe-fa242a7f7809",
"metadata": {},
"source": [
"```{=mdx}\n",
"</Column>\n",
"</ColumnContainer>\n",
"```\n",
"## Async\n",
"\n",
"If we need an asynchronous version:\n",
"\n",
"```{=mdx}\n",
"<ColumnContainer>\n",
"<Column>\n",
"```\n",
"\n",
"#### Without LCEL\n",
"\n"
@@ -334,11 +323,9 @@
"id": "2f209290-498c-4c17-839e-ee9002919846",
"metadata": {},
"source": [
"```{=mdx}\n",
"</Column>\n",
"\n",
"<Column>\n",
"```\n",
"\n",
"#### LCEL\n",
"\n"
@@ -359,10 +346,9 @@
"id": "1f282129-99a3-40f4-b67f-2d0718b1bea9",
"metadata": {},
"source": [
"```{=mdx}\n",
"</Column>\n",
"</ColumnContainer>\n",
"```\n",
"\n",
"## Async Batch\n",
"\n",
"```{=mdx}\n",

View File

@@ -4,6 +4,7 @@ sidebar_position: 2
# Installation
## Official release
To install LangChain run:

View File

@@ -5,6 +5,7 @@ sidebar_class_name: hidden
# Introduction
**LangChain** is a framework for developing applications powered by large language models (LLMs).
LangChain simplifies every stage of the LLM application lifecycle:
@@ -13,12 +14,13 @@ LangChain simplifies every stage of the LLM application lifecycle:
- **Deployment**: Turn any chain into an API with [LangServe](/docs/langserve).
import ThemedImage from '@theme/ThemedImage';
import useBaseUrl from '@docusaurus/useBaseUrl';
<ThemedImage
alt="Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers."
sources={{
light: '/svg/langchain_stack.svg',
dark: '/svg/langchain_stack_dark.svg',
light: useBaseUrl('/svg/langchain_stack.svg'),
dark: useBaseUrl('/svg/langchain_stack_dark.svg'),
}}
title="LangChain Framework Overview"
/>
@@ -29,7 +31,7 @@ Concretely, the framework consists of the following open-source libraries:
- **`langchain-community`**: Third party integrations.
- Partner packages (e.g. **`langchain-openai`**, **`langchain-anthropic`**, etc.): Some integrations have been further split into their own lightweight packages that only depend on **`langchain-core`**.
- **`langchain`**: Chains, agents, and retrieval strategies that make up an application's cognitive architecture.
- **[langgraph](/docs/langgraph)**: Build robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.
- **[langgraph](https://langchain-ai.github.io/langgraph/)**: Build robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.
- **[langserve](/docs/langserve)**: Deploy LangChain chains as REST APIs.
The broader ecosystem includes:
@@ -73,7 +75,7 @@ LangChain Expression Language (LCEL) is the foundation of many of LangChain's co
### [🦜🛠️ LangSmith](/docs/langsmith)
Trace and evaluate your language model applications and intelligent agents to help you move from prototype to production.
### [🦜🕸️ LangGraph](/docs/langgraph)
### [🦜🕸️ LangGraph](https://langchain-ai.github.io/langgraph/)
Build stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) LangChain primitives.
### [🦜🏓 LangServe](/docs/langserve)

View File

@@ -4,6 +4,7 @@ sidebar_position: 1
# Quickstart
In this quickstart we'll show you how to:
- Get setup with LangChain, LangSmith and LangServe
- Use the most basic and common components of LangChain: prompt templates, models, and output parsers
@@ -194,7 +195,7 @@ Prompt templates convert raw user input to better input to the LLM.
```python
from langchain_core.prompts import ChatPromptTemplate
prompt = ChatPromptTemplate.from_messages([
("system", "You are world class technical documentation writer."),
("system", "You are a world class technical documentation writer."),
("user", "{input}")
])
```

View File

@@ -17,7 +17,7 @@ Here's a summary of the key methods and properties of a comparison evaluator:
- `requires_reference`: This property specifies whether this evaluator requires a reference label.
:::note LangSmith Support
The [run_on_dataset](https://api.python.langchain.com/en/latest/langchain_api_reference.html#module-langchain.smith) evaluation method is designed to evaluate only a single model at a time, and thus, doesn't support these evaluators.
Pairwise evaluations are supported in LangSmith via the [`evaluate_comparative`](https://docs.smith.langchain.com/how_to_guides/evaluation/evaluate_pairwise) function.
:::
Detailed information about creating custom evaluators and the available built-in comparison evaluators is provided in the following sections.

View File

@@ -9,7 +9,7 @@
"\n",
"This notebook shows how to prevent prompt injection attacks using the text classification model from `HuggingFace`.\n",
"\n",
"By default, it uses a *[laiyer/deberta-v3-base-prompt-injection](https://huggingface.co/laiyer/deberta-v3-base-prompt-injection)* model trained to identify prompt injections. \n",
"By default, it uses a *[protectai/deberta-v3-base-prompt-injection-v2](https://huggingface.co/protectai/deberta-v3-base-prompt-injection-v2)* model trained to identify prompt injections. \n",
"\n",
"In this notebook, we will use the ONNX version of the model to speed up the inference. "
]
@@ -49,11 +49,15 @@
"from optimum.onnxruntime import ORTModelForSequenceClassification\n",
"from transformers import AutoTokenizer, pipeline\n",
"\n",
"# Using https://huggingface.co/laiyer/deberta-v3-base-prompt-injection\n",
"model_path = \"laiyer/deberta-v3-base-prompt-injection\"\n",
"tokenizer = AutoTokenizer.from_pretrained(model_path)\n",
"tokenizer.model_input_names = [\"input_ids\", \"attention_mask\"] # Hack to run the model\n",
"model = ORTModelForSequenceClassification.from_pretrained(model_path, subfolder=\"onnx\")\n",
"# Using https://huggingface.co/protectai/deberta-v3-base-prompt-injection-v2\n",
"model_path = \"protectai/deberta-v3-base-prompt-injection-v2\"\n",
"revision = None # We recommend specifiying the revision to avoid breaking changes or supply chain attacks\n",
"tokenizer = AutoTokenizer.from_pretrained(\n",
" model_path, revision=revision, model_input_names=[\"input_ids\", \"attention_mask\"]\n",
")\n",
"model = ORTModelForSequenceClassification.from_pretrained(\n",
" model_path, revision=revision, subfolder=\"onnx\"\n",
")\n",
"\n",
"classifier = pipeline(\n",
" \"text-classification\",\n",

View File

@@ -137,7 +137,7 @@
}
],
"source": [
"from langchain.prompts.prompt import PromptTemplate\n",
"from langchain_core.prompts.prompt import PromptTemplate\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"anonymizer = PresidioAnonymizer()\n",

View File

@@ -878,8 +878,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts.prompt import PromptTemplate\n",
"from langchain_core.prompts import format_document\n",
"from langchain_core.prompts.prompt import PromptTemplate\n",
"\n",
"DEFAULT_DOCUMENT_PROMPT = PromptTemplate.from_template(template=\"{page_content}\")\n",
"\n",

View File

@@ -207,7 +207,7 @@
}
],
"source": [
"from langchain.prompts.prompt import PromptTemplate\n",
"from langchain_core.prompts.prompt import PromptTemplate\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"anonymizer = PresidioReversibleAnonymizer()\n",

View File

@@ -278,8 +278,8 @@
],
"source": [
"from langchain.chains import LLMChain\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain_core.callbacks.stdout import StdOutCallbackHandler\n",
"from langchain_core.prompts import PromptTemplate\n",
"from langchain_openai import OpenAI\n",
"\n",
"argilla_callback = ArgillaCallbackHandler(\n",

View File

@@ -170,8 +170,8 @@
"import os\n",
"\n",
"from langchain.chains import LLMChain\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain.prompts.chat import (\n",
"from langchain_core.prompts import PromptTemplate\n",
"from langchain_core.prompts.chat import (\n",
" ChatPromptTemplate,\n",
" HumanMessagePromptTemplate,\n",
")\n",

View File

@@ -151,7 +151,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts import (\n",
"from langchain_core.prompts import (\n",
" ChatPromptTemplate,\n",
" FewShotChatMessagePromptTemplate,\n",
")\n",

View File

@@ -194,7 +194,7 @@
"llm = OpenAI(\n",
" temperature=0, callbacks=[LabelStudioCallbackHandler(project_name=\"My Project\")]\n",
")\n",
"print(llm(\"Tell me a joke\"))"
"print(llm.invoke(\"Tell me a joke\"))"
]
},
{
@@ -270,7 +270,7 @@
" )\n",
" ]\n",
")\n",
"llm_results = chat_llm(\n",
"llm_results = chat_llm.invoke(\n",
" [\n",
" SystemMessage(content=\"Always use a lot of emojis\"),\n",
" HumanMessage(content=\"Tell me a joke\"),\n",

View File

@@ -107,7 +107,7 @@ User tracking allows you to identify your users, track their cost, conversations
from langchain_community.callbacks.llmonitor_callback import LLMonitorCallbackHandler, identify
with identify("user-123"):
llm("Tell me a joke")
llm.invoke("Tell me a joke")
with identify("user-456", user_props={"email": "user456@test.com"}):
agen.run("Who is Leo DiCaprio's girlfriend?")

View File

@@ -103,7 +103,7 @@
" temperature=0,\n",
" callbacks=[PromptLayerCallbackHandler(pl_tags=[\"chatopenai\"])],\n",
")\n",
"llm_results = chat_llm(\n",
"llm_results = chat_llm.invoke(\n",
" [\n",
" HumanMessage(content=\"What comes after 1,2,3 ?\"),\n",
" HumanMessage(content=\"Tell me another joke?\"),\n",
@@ -129,10 +129,11 @@
"from langchain_community.llms import GPT4All\n",
"\n",
"model = GPT4All(model=\"./models/gpt4all-model.bin\", n_ctx=512, n_threads=8)\n",
"callbacks = [PromptLayerCallbackHandler(pl_tags=[\"langchain\", \"gpt4all\"])]\n",
"\n",
"response = model(\n",
"response = model.invoke(\n",
" \"Once upon a time, \",\n",
" callbacks=[PromptLayerCallbackHandler(pl_tags=[\"langchain\", \"gpt4all\"])],\n",
" config={\"callbacks\": callbacks},\n",
")"
]
},
@@ -181,7 +182,7 @@
")\n",
"\n",
"example_prompt = promptlayer.prompts.get(\"example\", version=1, langchain=True)\n",
"openai_llm(example_prompt.format(product=\"toasters\"))"
"openai_llm.invoke(example_prompt.format(product=\"toasters\"))"
]
},
{

View File

@@ -91,7 +91,7 @@
"source": [
"from langchain.agents import initialize_agent, load_tools\n",
"from langchain.chains import LLMChain, SimpleSequentialChain\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain_core.prompts import PromptTemplate\n",
"from langchain_openai import OpenAI\n",
"from sagemaker.analytics import ExperimentAnalytics\n",
"from sagemaker.experiments.run import Run\n",

View File

@@ -315,7 +315,7 @@
}
],
"source": [
"chat_res = chat_llm(\n",
"chat_res = chat_llm.invoke(\n",
" [\n",
" SystemMessage(content=\"Every answer of yours must be about OpenAI.\"),\n",
" HumanMessage(content=\"Tell me a joke\"),\n",

View File

@@ -58,9 +58,28 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 22,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...\n",
"To disable this warning, you can either:\n",
"\t- Avoid using `tokenizers` before the fork if possible\n",
"\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33mWARNING: There was an error checking the latest version of pip.\u001b[0m\u001b[33m\n",
"\u001b[0mNote: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install -qU langchain langchain_openai uptrain faiss-cpu flashrank"
]
@@ -81,7 +100,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 23,
"metadata": {},
"outputs": [],
"source": [
@@ -112,7 +131,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 24,
"metadata": {},
"outputs": [],
"source": [
@@ -129,7 +148,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 25,
"metadata": {},
"outputs": [],
"source": [
@@ -146,7 +165,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 26,
"metadata": {},
"outputs": [],
"source": [
@@ -164,7 +183,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 27,
"metadata": {},
"outputs": [],
"source": [
@@ -181,7 +200,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 28,
"metadata": {},
"outputs": [],
"source": [
@@ -245,9 +264,31 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 29,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"\u001b[32m2024-04-17 17:03:44.969\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36muptrain.framework.evalllm\u001b[0m:\u001b[36mevaluate_on_server\u001b[0m:\u001b[36m378\u001b[0m - \u001b[1mSending evaluation request for rows 0 to <50 to the Uptrain\u001b[0m\n",
"\u001b[32m2024-04-17 17:04:05.809\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36muptrain.framework.evalllm\u001b[0m:\u001b[36mevaluate\u001b[0m:\u001b[36m367\u001b[0m - \u001b[1mLocal server not running, start the server to log data and visualize in the dashboard!\u001b[0m\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Question: What did the president say about Ketanji Brown Jackson\n",
"Response: The president mentioned that he had nominated Ketanji Brown Jackson to serve on the United States Supreme Court 4 days ago. He described her as one of the nation's top legal minds who will continue Justice Breyers legacy of excellence. He also mentioned that she is a former top litigator in private practice, a former federal public defender, and comes from a family of public school educators and police officers. He described her as a consensus builder and noted that since her nomination, she has received a broad range of support from various groups, including the Fraternal Order of Police and former judges appointed by both Democrats and Republicans.\n",
"\n",
"Context Relevance Score: 1.0\n",
"Factual Accuracy Score: 1.0\n",
"Response Completeness Score: 1.0\n"
]
}
],
"source": [
"# Create the RAG prompt\n",
"template = \"\"\"Answer the question based only on the following context, which can include text and tables:\n",
@@ -287,15 +328,15 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 30,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"\u001b[32m2024-04-10 14:09:15.887\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36muptrain.framework.evalllm\u001b[0m:\u001b[36mevaluate_on_server\u001b[0m:\u001b[36m376\u001b[0m - \u001b[1mSending evaluation request for rows 0 to <50 to the Uptrain\u001b[0m\n",
"\u001b[32m2024-04-10 14:09:21.367\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36muptrain.framework.evalllm\u001b[0m:\u001b[36mevaluate\u001b[0m:\u001b[36m365\u001b[0m - \u001b[1mLocal server not running, start the server to log data and visualize in the dashboard!\u001b[0m\n"
"\u001b[32m2024-04-17 17:04:10.675\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36muptrain.framework.evalllm\u001b[0m:\u001b[36mevaluate_on_server\u001b[0m:\u001b[36m378\u001b[0m - \u001b[1mSending evaluation request for rows 0 to <50 to the Uptrain\u001b[0m\n",
"\u001b[32m2024-04-17 17:04:16.804\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36muptrain.framework.evalllm\u001b[0m:\u001b[36mevaluate\u001b[0m:\u001b[36m367\u001b[0m - \u001b[1mLocal server not running, start the server to log data and visualize in the dashboard!\u001b[0m\n"
]
},
{
@@ -309,15 +350,15 @@
" - What were the president's remarks regarding Ketanji Brown Jackson?\n",
" - What statements has the president made about Ketanji Brown Jackson?\n",
"\n",
"Multi Query Accuracy Score: 1.0\n"
"Multi Query Accuracy Score: 0.5\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\u001b[32m2024-04-10 14:09:29.142\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36muptrain.framework.evalllm\u001b[0m:\u001b[36mevaluate_on_server\u001b[0m:\u001b[36m376\u001b[0m - \u001b[1mSending evaluation request for rows 0 to <50 to the Uptrain\u001b[0m\n",
"\u001b[32m2024-04-10 14:09:53.095\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36muptrain.framework.evalllm\u001b[0m:\u001b[36mevaluate\u001b[0m:\u001b[36m365\u001b[0m - \u001b[1mLocal server not running, start the server to log data and visualize in the dashboard!\u001b[0m\n"
"\u001b[32m2024-04-17 17:04:22.027\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36muptrain.framework.evalllm\u001b[0m:\u001b[36mevaluate_on_server\u001b[0m:\u001b[36m378\u001b[0m - \u001b[1mSending evaluation request for rows 0 to <50 to the Uptrain\u001b[0m\n",
"\u001b[32m2024-04-17 17:04:44.033\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36muptrain.framework.evalllm\u001b[0m:\u001b[36mevaluate\u001b[0m:\u001b[36m367\u001b[0m - \u001b[1mLocal server not running, start the server to log data and visualize in the dashboard!\u001b[0m\n"
]
},
{
@@ -326,7 +367,7 @@
"text": [
"\n",
"Question: What did the president say about Ketanji Brown Jackson\n",
"Response: The president mentioned that he had nominated Ketanji Brown Jackson to serve on the United States Supreme Court 4 days ago. He described her as one of the nation's top legal minds who will continue Justice Breyers legacy of excellence. He also mentioned that she is a former top litigator in private practice, a former federal public defender, and comes from a family of public school educators and police officers. Since her nomination, she has received a broad range of support, including from the Fraternal Order of Police and former judges appointed by both Democrats and Republicans.\n",
"Response: The president mentioned that he had nominated Circuit Court of Appeals Judge Ketanji Brown Jackson to serve on the United States Supreme Court 4 days ago. He described her as one of the nation's top legal minds who will continue Justice Breyers legacy of excellence. He also mentioned that since her nomination, she has received a broad range of supportfrom the Fraternal Order of Police to former judges appointed by Democrats and Republicans.\n",
"\n",
"Context Relevance Score: 1.0\n",
"Factual Accuracy Score: 1.0\n",
@@ -374,9 +415,50 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 31,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"\u001b[32m2024-04-17 17:04:46.462\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36muptrain.framework.evalllm\u001b[0m:\u001b[36mevaluate_on_server\u001b[0m:\u001b[36m378\u001b[0m - \u001b[1mSending evaluation request for rows 0 to <50 to the Uptrain\u001b[0m\n",
"\u001b[32m2024-04-17 17:04:53.561\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36muptrain.framework.evalllm\u001b[0m:\u001b[36mevaluate\u001b[0m:\u001b[36m367\u001b[0m - \u001b[1mLocal server not running, start the server to log data and visualize in the dashboard!\u001b[0m\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Question: What did the president say about Ketanji Brown Jackson\n",
"\n",
"Context Conciseness Score: 0.0\n",
"Context Reranking Score: 1.0\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\u001b[32m2024-04-17 17:04:56.947\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36muptrain.framework.evalllm\u001b[0m:\u001b[36mevaluate_on_server\u001b[0m:\u001b[36m378\u001b[0m - \u001b[1mSending evaluation request for rows 0 to <50 to the Uptrain\u001b[0m\n",
"\u001b[32m2024-04-17 17:05:16.551\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36muptrain.framework.evalllm\u001b[0m:\u001b[36mevaluate\u001b[0m:\u001b[36m367\u001b[0m - \u001b[1mLocal server not running, start the server to log data and visualize in the dashboard!\u001b[0m\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Question: What did the president say about Ketanji Brown Jackson\n",
"Response: The President mentioned that he nominated Circuit Court of Appeals Judge Ketanji Brown Jackson to serve on the United States Supreme Court 4 days ago. He described her as one of the nation's top legal minds who will continue Justice Breyers legacy of excellence.\n",
"\n",
"Context Relevance Score: 1.0\n",
"Factual Accuracy Score: 1.0\n",
"Response Completeness Score: 0.5\n"
]
}
],
"source": [
"# Create the retriever\n",
"compressor = FlashrankRerank()\n",

View File

@@ -72,7 +72,7 @@
"metadata": {},
"outputs": [],
"source": [
"output = chat([HumanMessage(content=\"write a funny joke\")])\n",
"output = chat.invoke([HumanMessage(content=\"write a funny joke\")])\n",
"print(\"output:\", output)"
]
},
@@ -90,7 +90,7 @@
"outputs": [],
"source": [
"kwargs = {\"temperature\": 0.8, \"top_p\": 0.8, \"top_k\": 5}\n",
"output = chat([HumanMessage(content=\"write a funny joke\")], **kwargs)\n",
"output = chat.invoke([HumanMessage(content=\"write a funny joke\")], **kwargs)\n",
"print(\"output:\", output)"
]
},

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1,181 @@
{
"cells": [
{
"cell_type": "raw",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Coze Chat\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Chat with Coze Bot\n",
"\n",
"ChatCoze chat models API by coze.com. For more information, see [https://www.coze.com/open/docs/chat](https://www.coze.com/open/docs/chat)"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"ExecuteTime": {
"end_time": "2024-04-25T15:14:24.186131Z",
"start_time": "2024-04-25T15:14:23.831767Z"
}
},
"outputs": [],
"source": [
"from langchain_community.chat_models import ChatCoze\n",
"from langchain_core.messages import HumanMessage"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"ExecuteTime": {
"end_time": "2024-04-25T15:14:24.191123Z",
"start_time": "2024-04-25T15:14:24.186330Z"
}
},
"outputs": [],
"source": [
"chat = ChatCoze(\n",
" coze_api_base=\"YOUR_API_BASE\",\n",
" coze_api_key=\"YOUR_API_KEY\",\n",
" bot_id=\"YOUR_BOT_ID\",\n",
" user=\"YOUR_USER_ID\",\n",
" conversation_id=\"YOUR_CONVERSATION_ID\",\n",
" streaming=False,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Alternatively, you can set your API key and API base with:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"COZE_API_KEY\"] = \"YOUR_API_KEY\"\n",
"os.environ[\"COZE_API_BASE\"] = \"YOUR_API_BASE\""
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"ExecuteTime": {
"end_time": "2024-04-25T15:14:25.853218Z",
"start_time": "2024-04-25T15:14:24.192408Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='为你找到关于coze的信息如下\n\nCoze是一个由字节跳动推出的AI聊天机器人和应用程序编辑开发平台。\n\n用户无论是否有编程经验都可以通过该平台快速创建各种类型的聊天机器人、智能体、AI应用和插件并将其部署在社交平台和即时聊天应用程序中。\n\n国际版使用的模型比国内版更强大。')"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chat([HumanMessage(content=\"什么是扣子(coze)\")])"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"## Chat with Coze Streaming"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"ExecuteTime": {
"end_time": "2024-04-25T15:14:25.870044Z",
"start_time": "2024-04-25T15:14:25.863381Z"
},
"collapsed": false
},
"outputs": [],
"source": [
"chat = ChatCoze(\n",
" coze_api_base=\"YOUR_API_BASE\",\n",
" coze_api_key=\"YOUR_API_KEY\",\n",
" bot_id=\"YOUR_BOT_ID\",\n",
" user=\"YOUR_USER_ID\",\n",
" conversation_id=\"YOUR_CONVERSATION_ID\",\n",
" streaming=True,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"ExecuteTime": {
"end_time": "2024-04-25T15:14:27.153546Z",
"start_time": "2024-04-25T15:14:25.868470Z"
},
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessageChunk(content='为你查询到Coze是一个由字节跳动推出的AI聊天机器人和应用程序编辑开发平台。')"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chat([HumanMessage(content=\"什么是扣子(coze)\")])"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}
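
Note that this new Coze notebook still calls the chat model directly with `chat([...])`; elsewhere in this changeset such calls are migrated to `.invoke()`. A hedged equivalent for the cells above, assuming the same `chat` instance:

```python
from langchain_core.messages import HumanMessage

# Same call as the `chat([HumanMessage(...)])` cells above, using the
# `.invoke()` style applied throughout the rest of this changeset.
chat.invoke([HumanMessage(content="什么是扣子(coze)")])
```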

View File

@@ -8,13 +8,8 @@
"source": [
"# DeepInfra\n",
"\n",
"[DeepInfra](https://deepinfra.com/?utm_source=langchain) is a serverless inference as a service that provides access to a [variety of LLMs](https://deepinfra.com/models?utm_source=langchain) and [embeddings models](https://deepinfra.com/models?type=embeddings&utm_source=langchain). This notebook goes over how to use LangChain with DeepInfra for chat models."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"[DeepInfra](https://deepinfra.com/?utm_source=langchain) is a serverless inference as a service that provides access to a [variety of LLMs](https://deepinfra.com/models?utm_source=langchain) and [embeddings models](https://deepinfra.com/models?type=embeddings&utm_source=langchain). This notebook goes over how to use LangChain with DeepInfra for chat models.\n",
"\n",
"## Set the Environment API Key\n",
"Make sure to get your API key from DeepInfra. You have to [Login](https://deepinfra.com/login?from=%2Fdash) and get a new token.\n",
"\n",
@@ -24,92 +19,34 @@
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" ········\n"
]
}
],
"source": [
"# get a new token: https://deepinfra.com/login?from=%2Fdash\n",
"\n",
"from getpass import getpass\n",
"\n",
"DEEPINFRA_API_TOKEN = getpass()"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"import os\n",
"\n",
"# or pass deepinfra_api_token parameter to the ChatDeepInfra constructor\n",
"os.environ[\"DEEPINFRA_API_TOKEN\"] = DEEPINFRA_API_TOKEN"
]
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"id": "d4a7c55d-b235-4ca4-a579-c90cc9570da9",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# get a new token: https://deepinfra.com/login?from=%2Fdash\n",
"\n",
"import os\n",
"from getpass import getpass\n",
"\n",
"from langchain_community.chat_models import ChatDeepInfra\n",
"from langchain_core.messages import HumanMessage"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "70cf04e8-423a-4ff6-8b09-f11fb711c817",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"chat = ChatDeepInfra(model=\"meta-llama/Llama-2-7b-chat-hf\")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "8199ef8f-eb8b-4253-9ea0-6c24a013ca4c",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\" J'aime la programmation.\", additional_kwargs={}, example=False)"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.messages import HumanMessage\n",
"\n",
"DEEPINFRA_API_TOKEN = getpass()\n",
"\n",
"# or pass deepinfra_api_token parameter to the ChatDeepInfra constructor\n",
"os.environ[\"DEEPINFRA_API_TOKEN\"] = DEEPINFRA_API_TOKEN\n",
"\n",
"chat = ChatDeepInfra(model=\"meta-llama/Llama-2-7b-chat-hf\")\n",
"\n",
"messages = [\n",
" HumanMessage(\n",
" content=\"Translate this sentence from English to French. I love programming.\"\n",
" )\n",
"]\n",
"chat(messages)"
"chat.invoke(messages)"
]
},
{
@@ -123,7 +60,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": null,
"id": "93a21c5c-6ef9-4688-be60-b2e1f94842fb",
"metadata": {
"tags": []
@@ -135,69 +72,32 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": null,
"id": "c5fac0e9-05a4-4fc1-a3b3-e5bbb24b971b",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"LLMResult(generations=[[ChatGeneration(text=\" J'aime programmer.\", generation_info=None, message=AIMessage(content=\" J'aime programmer.\", additional_kwargs={}, example=False))]], llm_output={}, run=[RunInfo(run_id=UUID('8cc8fb68-1c35-439c-96a0-695036a93652'))])"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"await chat.agenerate([messages])"
]
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": null,
"id": "025be980-e50d-4a68-93dc-c9c7b500ce34",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" J'aime la programmation."
]
},
{
"data": {
"text/plain": [
"AIMessage(content=\" J'aime la programmation.\", additional_kwargs={}, example=False)"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"chat = ChatDeepInfra(\n",
" streaming=True,\n",
" verbose=True,\n",
" callbacks=[StreamingStdOutCallbackHandler()],\n",
")\n",
"chat(messages)"
"chat.invoke(messages)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c253883f",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {

View File

@@ -114,7 +114,7 @@
"human = \"Translate this sentence from English to French. I love programming.\"\n",
"prompt = ChatPromptTemplate.from_messages([(\"system\", system), (\"human\", human)])\n",
"\n",
"chat = ChatVertexAI(model_name=\"gemini-pro\", convert_system_message_to_human=True)\n",
"chat = ChatVertexAI(model=\"gemini-pro\", convert_system_message_to_human=True)\n",
"\n",
"chain = prompt | chat\n",
"chain.invoke({})"
@@ -233,9 +233,7 @@
}
],
"source": [
"chat = ChatVertexAI(\n",
" model_name=\"codechat-bison\", max_output_tokens=1000, temperature=0.5\n",
")\n",
"chat = ChatVertexAI(model=\"codechat-bison\", max_tokens=1000, temperature=0.5)\n",
"\n",
"message = chat.invoke(\"Write a Python function generating all prime numbers\")\n",
"print(message.content)"
@@ -399,7 +397,7 @@
" location: str = Field(..., description=\"The city and state, e.g. San Francisco, CA\")\n",
"\n",
"\n",
"llm = ChatVertexAI(model_name=\"gemini-pro\", temperature=0)\n",
"llm = ChatVertexAI(model=\"gemini-pro\", temperature=0)\n",
"llm_with_tools = llm.bind_tools([GetWeather])\n",
"ai_msg = llm_with_tools.invoke(\n",
" \"what is the weather like in San Francisco\",\n",
@@ -551,7 +549,7 @@
"human = \"{text}\"\n",
"prompt = ChatPromptTemplate.from_messages([(\"system\", system), (\"human\", human)])\n",
"\n",
"chat = ChatVertexAI(model_name=\"chat-bison\", max_output_tokens=1000, temperature=0.5)\n",
"chat = ChatVertexAI(model=\"chat-bison\", max_tokens=1000, temperature=0.5)\n",
"chain = prompt | chat\n",
"\n",
"asyncio.run(\n",

View File

@@ -19,13 +19,13 @@
},
"outputs": [],
"source": [
"from langchain.prompts.chat import (\n",
"from langchain_community.chat_models import JinaChat\n",
"from langchain_core.messages import HumanMessage, SystemMessage\n",
"from langchain_core.prompts.chat import (\n",
" ChatPromptTemplate,\n",
" HumanMessagePromptTemplate,\n",
" SystemMessagePromptTemplate,\n",
")\n",
"from langchain_community.chat_models import JinaChat\n",
"from langchain_core.messages import HumanMessage, SystemMessage"
")"
]
},
{

View File

@@ -123,7 +123,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 1,
"metadata": {},
"outputs": [
{
@@ -172,7 +172,7 @@
" <td>F</td>\n",
" <td>59836 Carla Causeway Suite 939\\nPort Eugene, I...</td>\n",
" <td>meltondenise@yahoo.com</td>\n",
" <td>1997-09-09</td>\n",
" <td>1997-11-23</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1</th>\n",
@@ -181,7 +181,7 @@
" <td>M</td>\n",
" <td>3108 Christina Forges\\nPort Timothychester, KY...</td>\n",
" <td>erica80@hotmail.com</td>\n",
" <td>1924-05-05</td>\n",
" <td>1924-07-19</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2</th>\n",
@@ -190,7 +190,7 @@
" <td>F</td>\n",
" <td>Unit 7405 Box 3052\\nDPO AE 09858</td>\n",
" <td>timothypotts@gmail.com</td>\n",
" <td>1933-09-06</td>\n",
" <td>1933-11-20</td>\n",
" </tr>\n",
" <tr>\n",
" <th>3</th>\n",
@@ -199,7 +199,7 @@
" <td>F</td>\n",
" <td>6408 Christopher Hill Apt. 459\\nNew Benjamin, ...</td>\n",
" <td>dadams@gmail.com</td>\n",
" <td>1988-07-28</td>\n",
" <td>1988-10-11</td>\n",
" </tr>\n",
" <tr>\n",
" <th>4</th>\n",
@@ -208,7 +208,7 @@
" <td>M</td>\n",
" <td>2241 Bell Gardens Suite 723\\nScottside, CA 38463</td>\n",
" <td>williamayala@gmail.com</td>\n",
" <td>1930-12-19</td>\n",
" <td>1931-03-04</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
@@ -233,14 +233,14 @@
"\n",
" birthdate \n",
"id \n",
"0 1997-09-09 \n",
"1 1924-05-05 \n",
"2 1933-09-06 \n",
"3 1988-07-28 \n",
"4 1930-12-19 "
"0 1997-11-23 \n",
"1 1924-07-19 \n",
"2 1933-11-20 \n",
"3 1988-10-11 \n",
"4 1931-03-04 "
]
},
"execution_count": 2,
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
@@ -646,7 +646,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.18"
"version": "3.11.4"
}
},
"nbformat": 4,

View File

@@ -49,12 +49,12 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts.chat import (\n",
"from langchain_core.messages import SystemMessage\n",
"from langchain_core.prompts.chat import (\n",
" ChatPromptTemplate,\n",
" HumanMessagePromptTemplate,\n",
" MessagesPlaceholder,\n",
")\n",
"from langchain_core.messages import SystemMessage\n",
"\n",
"template_messages = [\n",
" SystemMessage(content=\"You are a helpful assistant.\"),\n",

View File

@@ -62,7 +62,7 @@
"messages = [system_message, user_message]\n",
"\n",
"# chat with wasm-chat service\n",
"response = chat(messages)\n",
"response = chat.invoke(messages)\n",
"\n",
"print(f\"[Bot] {response.content}\")"
]

View File

@@ -33,7 +33,7 @@
"metadata": {},
"outputs": [],
"source": [
"!pip install langchain langchain-core langchain-community"
"!pip install langchain langchain-core langchain-community httpx"
]
},
{
@@ -60,9 +60,9 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts.chat import ChatPromptTemplate\n",
"from langchain_community.chat_models import ChatMaritalk\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts.chat import ChatPromptTemplate\n",
"\n",
"llm = ChatMaritalk(\n",
" model=\"sabia-2-medium\", # Available models: sabia-2-small and sabia-2-medium\n",
@@ -89,6 +89,58 @@
"print(response) # should answer something like \"1. Max\\n2. Bella\\n3. Charlie\\n4. Rocky\""
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Stream Generation\n",
"\n",
"For tasks involving the generation of long text, such as creating an extensive article or translating a large document, it can be advantageous to receive the response in parts, as the text is generated, instead of waiting for the complete text. This makes the application more responsive and efficient, especially when the generated text is extensive. We offer two approaches to meet this need: one synchronous and another asynchronous.\n",
"\n",
"#### Synchronous:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.messages import HumanMessage\n",
"\n",
"messages = [HumanMessage(content=\"Suggest 3 names for my dog\")]\n",
"\n",
"for chunk in llm.stream(messages):\n",
" print(chunk.content, end=\"\", flush=True)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Asynchronous:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.messages import HumanMessage\n",
"\n",
"\n",
"async def async_invoke_chain(animal: str):\n",
" messages = [HumanMessage(content=f\"Suggest 3 names for my {animal}\")]\n",
" async for chunk in llm._astream(messages):\n",
" print(chunk.message.content, end=\"\", flush=True)\n",
"\n",
"\n",
"await async_invoke_chain(\"dog\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -184,7 +236,7 @@
"\n",
"query = \"Qual o tempo máximo para realização da prova?\"\n",
"\n",
"docs = retriever.get_relevant_documents(query)\n",
"docs = retriever.invoke(query)\n",
"\n",
"chain.invoke(\n",
" {\"input_documents\": docs, \"query\": query}\n",

View File

@@ -9,10 +9,16 @@
"source": [
"# NVIDIA AI Foundation Endpoints\n",
"\n",
":::{.callout-caution}\n",
"\n",
"These are the LangChain v0.1 docs. Please refer to the updated [LangChain v0.2 docs](https://python.langchain.com/v0.2/docs/integrations/chat/nvidia_ai_endpoints/) instead.\n",
"\n",
":::\n",
"\n",
"The `ChatNVIDIA` class is a LangChain chat model that connects to [NVIDIA AI Foundation Endpoints](https://www.nvidia.com/en-us/ai-data-science/foundation-models/).\n",
"\n",
"\n",
"> [NVIDIA AI Foundation Endpoints](https://www.nvidia.com/en-us/ai-data-science/foundation-models/) give users easy access to NVIDIA hosted API endpoints for NVIDIA AI Foundation Models like Mixtral 8x7B, Llama 2, Stable Diffusion, etc. These models, hosted on the [NVIDIA NGC catalog](https://catalog.ngc.nvidia.com/ai-foundation-models), are optimized, tested, and hosted on the NVIDIA AI platform, making them fast and easy to evaluate, further customize, and seamlessly run at peak performance on any accelerated stack.\n",
"> [NVIDIA AI Foundation Endpoints](https://www.nvidia.com/en-us/ai-data-science/foundation-models/) give users easy access to NVIDIA hosted API endpoints for NVIDIA AI Foundation Models like Mixtral 8x7B, Llama 2, Stable Diffusion, etc. These models, hosted on the [NVIDIA API catalog](https://build.nvidia.com/), are optimized, tested, and hosted on the NVIDIA AI platform, making them fast and easy to evaluate, further customize, and seamlessly run at peak performance on any accelerated stack.\n",
"> \n",
"> With [NVIDIA AI Foundation Endpoints](https://www.nvidia.com/en-us/ai-data-science/foundation-models/), you can get quick results from a fully accelerated stack running on [NVIDIA DGX Cloud](https://www.nvidia.com/en-us/data-center/dgx-cloud/). Once customized, these models can be deployed anywhere with enterprise-grade security, stability, and support using [NVIDIA AI Enterprise](https://www.nvidia.com/en-us/data-center/products/ai-enterprise/).\n",
"> \n",
@@ -58,13 +64,13 @@
"\n",
"**To get started:**\n",
"\n",
"1. Create a free account with the [NVIDIA NGC](https://catalog.ngc.nvidia.com/) service, which hosts AI solution catalogs, containers, models, etc.\n",
"1. Create a free account with [NVIDIA](https://build.nvidia.com/), which hosts NVIDIA AI Foundation models\n",
"\n",
"2. Navigate to `Catalog > AI Foundation Models > (Model with API endpoint)`.\n",
"2. Click on your model of choice\n",
"\n",
"3. Select the `API` option and click `Generate Key`.\n",
"3. Under `Input` select the `Python` tab, and click `Get API Key`. Then click `Generate Key`.\n",
"\n",
"4. Save the generated key as `NVIDIA_API_KEY`. From there, you should have access to the endpoints."
"4. Copy and save the generated key as `NVIDIA_API_KEY`. From there, you should have access to the endpoints."
]
},
{
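For readers following the steps above, here is a minimal usage sketch. It assumes the `langchain-nvidia-ai-endpoints` package is installed and that `mixtral_8x7b` is one of the models available to your key; the model name is illustrative and may differ in the current catalog.

```python
import getpass
import os

from langchain_nvidia_ai_endpoints import ChatNVIDIA

# Expose the key generated in the steps above to the client library.
if not os.environ.get("NVIDIA_API_KEY"):
    os.environ["NVIDIA_API_KEY"] = getpass.getpass("Enter your NVIDIA API key: ")

# Model name is an assumption; pick any model listed in the API catalog.
llm = ChatNVIDIA(model="mixtral_8x7b")
print(llm.invoke("Write a one-sentence greeting.").content)
```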
@@ -311,7 +317,7 @@
"\n",
"Some model types support unique prompting techniques and chat messages. We will review a few important ones below.\n",
"\n",
"**To find out more about a specific model, please navigate to the API section of an AI Foundation model [as linked here](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/ai-foundation/models/codellama-13b/api).**"
"**To find out more about a specific model, please navigate to the API section of an AI Foundation model [as linked here](https://build.nvidia.com/).**"
]
},
{
@@ -1149,7 +1155,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.18"
"version": "3.11.9"
}
},
"nbformat": 4,

View File

@@ -30,7 +30,7 @@
"* [Download](https://ollama.ai/download) and install Ollama onto the available supported platforms (including Windows Subsystem for Linux)\n",
"* Fetch available LLM model via `ollama pull <name-of-model>`\n",
" * View a list of available models via the [model library](https://ollama.ai/library)\n",
" * e.g., for `Llama-7b`: `ollama pull llama2`\n",
" * e.g., `ollama pull llama3`\n",
"* This will download the default tagged version of the model. Typically, the default points to the latest, smallest sized-parameter model.\n",
"\n",
"> On Mac, the models will be download to `~/.ollama/models`\n",
@@ -46,7 +46,7 @@
"\n",
"You can see a full list of supported parameters on the [API reference page](https://api.python.langchain.com/en/latest/llms/langchain.llms.ollama.Ollama.html).\n",
"\n",
"If you are using a LLaMA `chat` model (e.g., `ollama pull llama2:7b-chat`) then you can use the `ChatOllama` interface.\n",
"If you are using a LLaMA `chat` model (e.g., `ollama pull llama3`) then you can use the `ChatOllama` interface.\n",
"\n",
"This includes [special tokens](https://huggingface.co/blog/llama2#how-to-prompt-llama-2) for system message and user input.\n",
"\n",
@@ -65,7 +65,7 @@
"\n",
"```bash\n",
"curl http://localhost:11434/api/generate -d '{\n",
" \"model\": \"llama2\",\n",
" \"model\": \"llama3\",\n",
" \"prompt\":\"Why is the sky blue?\"\n",
"}'\n",
"```\n",
@@ -86,11 +86,9 @@
"name": "stdout",
"output_type": "stream",
"text": [
" Sure, here's a fun space-themed joke for you:\n",
"Why did the astronaut break up with his girlfriend?\n",
"\n",
"Why don't astronauts like broccoli? \n",
"Because it has too many \"crisps\" in it!\n",
"\n"
"Because he needed space!\n"
]
}
],
@@ -102,7 +100,7 @@
"\n",
"# supports many more optional parameters. Hover on your `ChatOllama(...)`\n",
"# class to view the latest available supported parameters\n",
"llm = ChatOllama(model=\"llama2\")\n",
"llm = ChatOllama(model=\"llama3\")\n",
"prompt = ChatPromptTemplate.from_template(\"Tell me a short joke about {topic}\")\n",
"\n",
"# using LangChain Expressive Language chain syntax\n",
@@ -125,21 +123,14 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" Sure\n",
",\n",
" here\n",
"'s\n",
" a\n",
" joke\n",
":\n",
" Why\n",
"Why\n",
" did\n",
" the\n",
" astronaut\n",
@@ -148,17 +139,18 @@
" with\n",
" his\n",
" girlfriend\n",
" before\n",
" going\n",
" to\n",
" Mars\n",
"?\n",
" Because\n",
"\n",
"\n",
"Because\n",
" he\n",
" needed\n",
" more\n",
" space\n",
" to\n",
" explore\n",
".\n",
"\n",
"\n",
"!\n",
"\n"
]
}
@@ -179,51 +171,9 @@
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" Sure\n",
",\n",
" here\n",
"'s\n",
" a\n",
" little\n",
" one\n",
":\n",
" Why\n",
" did\n",
" the\n",
" rocket\n",
" scientist\n",
" break\n",
" up\n",
" with\n",
" her\n",
" partner\n",
"?\n",
" Because\n",
" he\n",
" couldn\n",
"'t\n",
" handle\n",
" all\n",
" her\n",
" \"\n",
"space\n",
"y\n",
"\"\n",
" jokes\n",
".\n",
"\n",
"\n",
"\n"
]
}
],
"outputs": [],
"source": [
"topic = {\"topic\": \"Space travel\"}\n",
"\n",
@@ -255,13 +205,13 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_models import ChatOllama\n",
"\n",
"llm = ChatOllama(model=\"llama2\", format=\"json\", temperature=0)"
"llm = ChatOllama(model=\"llama3\", format=\"json\", temperature=0)"
]
},
{
@@ -273,7 +223,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"content='{\\n\"morning\": {\\n\"color\": \"light blue\"\\n},\\n\"noon\": {\\n\"color\": \"blue\"\\n},\\n\"afternoon\": {\\n\"color\": \"grayish-blue\"\\n},\\n\"evening\": {\\n\"color\": \"pinkish-orange\"\\n}\\n}'\n"
"content='{ \"morning\": \"blue\", \"noon\": \"clear blue\", \"afternoon\": \"hazy yellow\", \"evening\": \"orange-red\" }\\n\\n \\n\\n\\n\\n\\n\\n \\n\\n\\n\\n\\n\\n \\n\\n\\n\\n\\n\\n \\n\\n\\n\\n\\n\\n \\n\\n\\n\\n\\n\\n \\n\\n\\n\\n\\n\\n \\n\\n\\n\\n\\n\\n \\n\\n\\n\\n\\n\\n \\n\\n\\n\\n\\n\\n \\n\\n\\n\\n\\n\\n ' id='run-e893700f-e2d0-4df8-ad86-17525dcee318-0'\n"
]
}
],
@@ -292,7 +242,7 @@
},
{
"cell_type": "code",
"execution_count": 53,
"execution_count": 8,
"metadata": {},
"outputs": [
{
@@ -300,13 +250,9 @@
"output_type": "stream",
"text": [
"\n",
"{\n",
"\"name\": \"John\",\n",
"\"age\": 35,\n",
"\"interests\": [\n",
"\"pizza\"\n",
"]\n",
"}\n"
"Name: John\n",
"Age: 35\n",
"Likes: Pizza\n"
]
}
],
@@ -516,7 +462,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.11.8"
}
},
"nbformat": 4,

View File

@@ -15,9 +15,15 @@
"source": [
"# OllamaFunctions\n",
"\n",
":::{.callout-caution}\n",
"\n",
"This was an experimental wrapper that bolted-on tool calling support to models that do not natively support it. The primary Ollama integration now supports tool calling, and should be used instead. See example usage in LangChain v0.2 documentation [here](https://python.langchain.com/v0.2/docs/integrations/chat/ollama/).\n",
"\n",
":::\n",
"\n",
"This notebook shows how to use an experimental wrapper around Ollama that gives it the same API as OpenAI Functions.\n",
"\n",
"Note that more powerful and capable models will perform better with complex schema and/or multiple functions. The examples below use Mistral.\n",
"Note that more powerful and capable models will perform better with complex schema and/or multiple functions. The examples below use llama3 and phi3 models.\n",
"For a complete list of supported models and model variants, see the [Ollama model library](https://ollama.ai/library).\n",
"\n",
"## Setup\n",
@@ -32,12 +38,18 @@
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"metadata": {
"ExecuteTime": {
"end_time": "2024-04-28T00:53:25.276543Z",
"start_time": "2024-04-28T00:53:24.881202Z"
},
"scrolled": true
},
"outputs": [],
"source": [
"from langchain_experimental.llms.ollama_functions import OllamaFunctions\n",
"\n",
"model = OllamaFunctions(model=\"mistral\")"
"model = OllamaFunctions(model=\"llama3\", format=\"json\")"
]
},
{
@@ -50,11 +62,16 @@
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"metadata": {
"ExecuteTime": {
"end_time": "2024-04-26T04:59:17.270931Z",
"start_time": "2024-04-26T04:59:17.263347Z"
}
},
"outputs": [],
"source": [
"model = model.bind(\n",
" functions=[\n",
"model = model.bind_tools(\n",
" tools=[\n",
" {\n",
" \"name\": \"get_current_weather\",\n",
" \"description\": \"Get the current weather in a given location\",\n",
@@ -88,12 +105,17 @@
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"metadata": {
"ExecuteTime": {
"end_time": "2024-04-26T04:59:26.092428Z",
"start_time": "2024-04-26T04:59:17.272627Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='', additional_kwargs={'function_call': {'name': 'get_current_weather', 'arguments': '{\"location\": \"Boston, MA\", \"unit\": \"celsius\"}'}})"
"AIMessage(content='', additional_kwargs={'function_call': {'name': 'get_current_weather', 'arguments': '{\"location\": \"Boston, MA\"}'}}, id='run-1791f9fe-95ad-4ca4-bdf7-9f73eab31e6f-0')"
]
},
"execution_count": 3,
@@ -111,54 +133,119 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Using for extraction\n",
"## Structured Output\n",
"\n",
"One useful thing you can do with function calling here is extracting properties from a given input in a structured format:"
"One useful thing you can do with function calling using `with_structured_output()` function is extracting properties from a given input in a structured format:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"ExecuteTime": {
"end_time": "2024-04-26T04:59:26.098828Z",
"start_time": "2024-04-26T04:59:26.094021Z"
}
},
"outputs": [],
"source": [
"from langchain_core.prompts import PromptTemplate\n",
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"# Schema for structured response\n",
"class Person(BaseModel):\n",
" name: str = Field(description=\"The person's name\", required=True)\n",
" height: float = Field(description=\"The person's height\", required=True)\n",
" hair_color: str = Field(description=\"The person's hair color\")\n",
"\n",
"\n",
"# Prompt template\n",
"prompt = PromptTemplate.from_template(\n",
" \"\"\"Alex is 5 feet tall. \n",
"Claudia is 1 feet taller than Alex and jumps higher than him. \n",
"Claudia is a brunette and Alex is blonde.\n",
"\n",
"Human: {question}\n",
"AI: \"\"\"\n",
")\n",
"\n",
"# Chain\n",
"llm = OllamaFunctions(model=\"phi3\", format=\"json\", temperature=0)\n",
"structured_llm = llm.with_structured_output(Person)\n",
"chain = prompt | structured_llm"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Extracting data about Alex"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"ExecuteTime": {
"end_time": "2024-04-26T04:59:30.164955Z",
"start_time": "2024-04-26T04:59:26.099790Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"[{'name': 'Alex', 'height': 5, 'hair_color': 'blonde'},\n",
" {'name': 'Claudia', 'height': 6, 'hair_color': 'brunette'}]"
"Person(name='Alex', height=5.0, hair_color='blonde')"
]
},
"execution_count": 4,
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chains import create_extraction_chain\n",
"\n",
"# Schema\n",
"schema = {\n",
" \"properties\": {\n",
" \"name\": {\"type\": \"string\"},\n",
" \"height\": {\"type\": \"integer\"},\n",
" \"hair_color\": {\"type\": \"string\"},\n",
" },\n",
" \"required\": [\"name\", \"height\"],\n",
"}\n",
"\n",
"# Input\n",
"input = \"\"\"Alex is 5 feet tall. Claudia is 1 feet taller than Alex and jumps higher than him. Claudia is a brunette and Alex is blonde.\"\"\"\n",
"\n",
"# Run chain\n",
"llm = OllamaFunctions(model=\"mistral\", temperature=0)\n",
"chain = create_extraction_chain(schema, llm)\n",
"chain.run(input)"
"alex = chain.invoke(\"Describe Alex\")\n",
"alex"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Extracting data about Claudia"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"ExecuteTime": {
"end_time": "2024-04-26T04:59:31.509846Z",
"start_time": "2024-04-26T04:59:30.165662Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"Person(name='Claudia', height=6.0, hair_color='brunette')"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"claudia = chain.invoke(\"Describe Claudia\")\n",
"claudia"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@@ -172,9 +259,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.5"
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 4
}

View File

@@ -17,6 +17,7 @@
"source": [
"# ChatOpenAI\n",
"\n",
"This notebook covers how to get started with OpenAI chat models."
]
},
@@ -147,7 +148,7 @@
"\n",
"### ChatOpenAI.bind_tools()\n",
"\n",
"With `ChatAnthropic.bind_tools`, we can easily pass in Pydantic classes, dict schemas, LangChain tools, or even functions as tools to the model. Under the hood these are converted to an Anthropic tool schemas, which looks like:\n",
"With `ChatOpenAI.bind_tools`, we can easily pass in Pydantic classes, dict schemas, LangChain tools, or even functions as tools to the model. Under the hood these are converted to an OpenAI tool schemas, which looks like:\n",
"```\n",
"{\n",
" \"name\": \"...\",\n",

View File

@@ -0,0 +1,119 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "2970dd75-8ebf-4b51-8282-9b454b8f356d",
"metadata": {},
"source": [
"# Together AI\n",
"\n",
"[Together AI](https://www.together.ai/) offers an API to query [50+ leading open-source models](https://docs.together.ai/docs/inference-models) in a couple lines of code.\n",
"\n",
"This example goes over how to use LangChain to interact with Together AI models."
]
},
{
"cell_type": "markdown",
"id": "1c47fc36",
"metadata": {},
"source": [
"## Installation"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1ecdb29d",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade langchain-together"
]
},
{
"cell_type": "markdown",
"id": "89883202",
"metadata": {},
"source": [
"## Environment\n",
"\n",
"To use Together AI, you'll need an API key which you can find here:\n",
"https://api.together.ai/settings/api-keys. This can be passed in as an init param\n",
"``together_api_key`` or set as environment variable ``TOGETHER_API_KEY``.\n"
]
},
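A small sketch of setting the key via the environment before constructing a client (alternatively, pass `together_api_key=...` directly, as shown in the example below):

```python
import getpass
import os

# Prompt for the key only if it is not already set in the environment.
if "TOGETHER_API_KEY" not in os.environ:
    os.environ["TOGETHER_API_KEY"] = getpass.getpass("Enter your Together AI API key: ")
```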
{
"cell_type": "markdown",
"id": "8304b4d9",
"metadata": {},
"source": [
"## Example"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "637bb53f",
"metadata": {},
"outputs": [],
"source": [
"# Querying chat models with Together AI\n",
"\n",
"from langchain_together import ChatTogether\n",
"\n",
"# choose from our 50+ models here: https://docs.together.ai/docs/inference-models\n",
"chat = ChatTogether(\n",
" # together_api_key=\"YOUR_API_KEY\",\n",
" model=\"meta-llama/Llama-3-70b-chat-hf\",\n",
")\n",
"\n",
"# stream the response back from the model\n",
"for m in chat.stream(\"Tell me fun things to do in NYC\"):\n",
" print(m.content, end=\"\", flush=True)\n",
"\n",
"# if you don't want to do streaming, you can use the invoke method\n",
"# chat.invoke(\"Tell me fun things to do in NYC\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e7b7170d-d7c5-4890-9714-a37238343805",
"metadata": {},
"outputs": [],
"source": [
"# Querying code and language models with Together AI\n",
"\n",
"from langchain_together import Together\n",
"\n",
"llm = Together(\n",
" model=\"codellama/CodeLlama-70b-Python-hf\",\n",
" # together_api_key=\"...\"\n",
")\n",
"\n",
"print(llm.invoke(\"def bubble_sort(): \"))"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -142,11 +142,70 @@
]
},
{
"cell_type": "code",
"execution_count": null,
"cell_type": "markdown",
"metadata": {},
"outputs": [],
"source": []
"source": [
"## Tool Calling\n",
"ChatTongyi supports tool calling API that lets you describe tools and their arguments, and have the model return a JSON object with a tool to invoke and the inputs to that tool."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='', additional_kwargs={'tool_calls': [{'function': {'name': 'get_current_weather', 'arguments': '{\"location\": \"San Francisco\"}'}, 'id': '', 'type': 'function'}]}, response_metadata={'model_name': 'qwen-turbo', 'finish_reason': 'tool_calls', 'request_id': 'dae79197-8780-9b7e-8c15-6a83e2a53534', 'token_usage': {'input_tokens': 229, 'output_tokens': 19, 'total_tokens': 248}}, id='run-9e06f837-582b-473b-bb1f-5e99a68ecc10-0', tool_calls=[{'name': 'get_current_weather', 'args': {'location': 'San Francisco'}, 'id': ''}])"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_community.chat_models.tongyi import ChatTongyi\n",
"from langchain_core.messages import HumanMessage, SystemMessage\n",
"\n",
"tools = [\n",
" {\n",
" \"type\": \"function\",\n",
" \"function\": {\n",
" \"name\": \"get_current_time\",\n",
" \"description\": \"当你想知道现在的时间时非常有用。\",\n",
" \"parameters\": {},\n",
" },\n",
" },\n",
" {\n",
" \"type\": \"function\",\n",
" \"function\": {\n",
" \"name\": \"get_current_weather\",\n",
" \"description\": \"当你想查询指定城市的天气时非常有用。\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"location\": {\n",
" \"type\": \"string\",\n",
" \"description\": \"城市或县区,比如北京市、杭州市、余杭区等。\",\n",
" }\n",
" },\n",
" },\n",
" \"required\": [\"location\"],\n",
" },\n",
" },\n",
"]\n",
"\n",
"messages = [\n",
" SystemMessage(content=\"You are a helpful assistant.\"),\n",
" HumanMessage(content=\"What is the weather like in San Francisco?\"),\n",
"]\n",
"chatLLM = ChatTongyi()\n",
"llm_kwargs = {\"tools\": tools, \"result_format\": \"message\"}\n",
"ai_message = chatLLM.bind(**llm_kwargs).invoke(messages)\n",
"ai_message"
]
}
],
"metadata": {

View File

@@ -31,12 +31,12 @@
},
"outputs": [],
"source": [
"from langchain.prompts.chat import (\n",
"from langchain_core.messages import HumanMessage, SystemMessage\n",
"from langchain_core.prompts.chat import (\n",
" ChatPromptTemplate,\n",
" HumanMessagePromptTemplate,\n",
" SystemMessagePromptTemplate,\n",
")\n",
"from langchain_core.messages import HumanMessage, SystemMessage\n",
"from langchain_openai import ChatOpenAI"
]
},

View File

@@ -348,7 +348,7 @@
"outputs": [],
"source": [
"async def ainvoke_with_prompt_template():\n",
" from langchain.prompts.chat import (\n",
" from langchain_core.prompts.chat import (\n",
" ChatPromptTemplate,\n",
" )\n",
"\n",

View File

@@ -119,7 +119,7 @@
"metadata": {},
"outputs": [],
"source": [
"response = chat(messages)\n",
"response = chat.invoke(messages)\n",
"print(response.content) # Displays the AI-generated poem"
]
},

View File

@@ -216,11 +216,11 @@
"source": [
"from typing import List\n",
"\n",
"from langchain_community.chat_loaders.base import ChatSession\n",
"from langchain_community.chat_loaders.utils import (\n",
" map_ai_messages,\n",
" merge_chat_runs,\n",
")\n",
"from langchain_core.chat_sessions import ChatSession\n",
"\n",
"raw_messages = loader.lazy_load()\n",
"# Merge consecutive messages from the same sender into a single message\n",

View File

@@ -258,7 +258,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.adapters.openai import convert_messages_for_finetuning"
"from langchain_community.adapters.openai import convert_messages_for_finetuning"
]
},
{

View File

@@ -116,11 +116,11 @@
"source": [
"from typing import List\n",
"\n",
"from langchain_community.chat_loaders.base import ChatSession\n",
"from langchain_community.chat_loaders.utils import (\n",
" map_ai_messages,\n",
" merge_chat_runs,\n",
")\n",
"from langchain_core.chat_sessions import ChatSession\n",
"\n",
"raw_messages = loader.lazy_load()\n",
"# Merge consecutive messages from the same sender into a single message\n",
@@ -173,7 +173,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.adapters.openai import convert_messages_for_finetuning"
"from langchain_community.adapters.openai import convert_messages_for_finetuning"
]
},
{

View File

@@ -150,7 +150,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.adapters.openai import convert_messages_for_finetuning\n",
"from langchain_community.adapters.openai import convert_messages_for_finetuning\n",
"\n",
"training_data = convert_messages_for_finetuning(chat_sessions)"
]

View File

@@ -285,7 +285,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.adapters.openai import convert_messages_for_finetuning\n",
"from langchain_community.adapters.openai import convert_messages_for_finetuning\n",
"\n",
"training_data = convert_messages_for_finetuning(chat_sessions)"
]

View File

@@ -87,11 +87,11 @@
"source": [
"from typing import List\n",
"\n",
"from langchain_community.chat_loaders.base import ChatSession\n",
"from langchain_community.chat_loaders.utils import (\n",
" map_ai_messages,\n",
" merge_chat_runs,\n",
")\n",
"from langchain_core.chat_sessions import ChatSession\n",
"\n",
"raw_messages = loader.lazy_load()\n",
"# Merge consecutive messages from the same sender into a single message\n",

View File

@@ -136,11 +136,11 @@
"source": [
"from typing import List\n",
"\n",
"from langchain_community.chat_loaders.base import ChatSession\n",
"from langchain_community.chat_loaders.utils import (\n",
" map_ai_messages,\n",
" merge_chat_runs,\n",
")\n",
"from langchain_core.chat_sessions import ChatSession\n",
"\n",
"raw_messages = loader.lazy_load()\n",
"# Merge consecutive messages from the same sender into a single message\n",

View File

@@ -21,7 +21,7 @@
"source": [
"import json\n",
"\n",
"from langchain.adapters.openai import convert_message_to_dict\n",
"from langchain_community.adapters.openai import convert_message_to_dict\n",
"from langchain_core.messages import AIMessage"
]
},

View File

@@ -209,11 +209,11 @@
"source": [
"from typing import List\n",
"\n",
"from langchain_community.chat_loaders.base import ChatSession\n",
"from langchain_community.chat_loaders.utils import (\n",
" map_ai_messages,\n",
" merge_chat_runs,\n",
")\n",
"from langchain_core.chat_sessions import ChatSession\n",
"\n",
"raw_messages = loader.lazy_load()\n",
"# Merge consecutive messages from the same sender into a single message\n",

View File

@@ -126,11 +126,11 @@
"source": [
"from typing import List\n",
"\n",
"from langchain_community.chat_loaders.base import ChatSession\n",
"from langchain_community.chat_loaders.utils import (\n",
" map_ai_messages,\n",
" merge_chat_runs,\n",
")\n",
"from langchain_core.chat_sessions import ChatSession\n",
"\n",
"raw_messages = loader.lazy_load()\n",
"# Merge consecutive messages from the same sender into a single message\n",

View File

@@ -166,7 +166,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.docstore.document import Document\n",
"from langchain_community.docstore.document import Document\n",
"\n",
"\n",
"def handle_record(record, id):\n",

View File

@@ -149,7 +149,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.docstore.document import Document\n",
"from langchain_community.docstore.document import Document\n",
"\n",
"\n",
"def handle_record(record, id):\n",

View File

@@ -151,7 +151,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.docstore.document import Document\n",
"from langchain_community.docstore.document import Document\n",
"\n",
"\n",
"def handle_record(record, id):\n",

View File

@@ -156,7 +156,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.docstore.document import Document\n",
"from langchain_community.docstore.document import Document\n",
"\n",
"\n",
"def handle_record(record, id):\n",

Some files were not shown because too many files have changed in this diff.