Commit Graph

731 Commits

Author SHA1 Message Date
Erick Friis
f5cf6975ba
docs: anthropic partner package docs (#18109) 2024-02-26 17:51:44 +00:00
rongchenlin
9147a437f1
docs: Fix the bug in MongoDBChatMessageHistory notebook (#18128)
I tried to configure MongoDBChatMessageHistory using the code from the
original documentation to store messages based on the passed session_id
in MongoDB. However, this configuration did not take effect, and the
session id in the database remained as 'test_session'. To resolve this
issue, I found that when configuring MongoDBChatMessageHistory, it is
necessary to set session_id=session_id instead of
session_id="test_session".

Issue: DOC: Ineffective Configuration of MongoDBChatMessageHistory for
Custom session_id Storage

previous code:
```python
chain_with_history = RunnableWithMessageHistory(
    chain,
    lambda session_id: MongoDBChatMessageHistory(
        session_id="test_session",
        connection_string="mongodb://root:Y181491117cLj@123.56.224.232:27017",
        database_name="my_db",
        collection_name="chat_histories",
    ),
    input_messages_key="question",
    history_messages_key="history",
)
config = {"configurable": {"session_id": "mmm"}}
chain_with_history.invoke({"question": "Hi! I'm bob"}, config)
```

![image](https://github.com/langchain-ai/langchain/assets/83388493/c372f785-1ec1-43f5-8d01-b7cc07b806b7)


Modified code:
```python
chain_with_history = RunnableWithMessageHistory(
    chain,
    lambda session_id: MongoDBChatMessageHistory(
        session_id=session_id,   # modified: pass through the session_id argument
        connection_string="mongodb://root:Y181491117cLj@123.56.224.232:27017",
        database_name="my_db",
        collection_name="chat_histories",
    ),
    input_messages_key="question",
    history_messages_key="history",
)
config = {"configurable": {"session_id": "mmm"}}
chain_with_history.invoke({"question": "Hi! I'm bob"}, config)
```

Effect after modification (it works):


![image](https://github.com/langchain-ai/langchain/assets/83388493/5776268c-9098-4da3-bf41-52825be5fafb)
2024-02-26 15:02:56 +00:00
Matt
3b08617a89
docs: update azure search langchain notebook (#18053)
**Description:** Update the azure search notebook to have more
descriptive comments, and an option to choose between OpenAI and
AzureOpenAI Embeddings

---------

Co-authored-by: Matt Gotteiner <[email protected]>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-25 18:48:13 -08:00
Barun Amalkumar Halder
cc69976860
community[minor] : adds callback handler for Fiddler AI (#17708)
**Description:**  Callback handler to integrate fiddler with langchain. 
This PR adds the following -

1. `FiddlerCallbackHandler` implementation into langchain/community
2. Example notebook `fiddler.ipynb` for usage documentation

[Internal Tracker : FDL-14305]

**Issue:** 
NA

**Dependencies:** 
- Installation of langchain-community is unaffected.
- Usage of FiddlerCallbackHandler requires installation of latest
fiddler-client (2.5+)

**Twitter handle:** @fiddlerlabs @behalder
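
A minimal usage sketch, assuming the handler lives at
`langchain_community.callbacks.fiddler_callback` and takes the constructor
parameters shown below (treat the names and values as illustrative):

```python
# Sketch; import path, parameter names and values are assumptions for illustration.
from langchain_community.callbacks.fiddler_callback import FiddlerCallbackHandler
from langchain_openai import ChatOpenAI

fiddler_handler = FiddlerCallbackHandler(
    url="https://demo.fiddler.ai",   # Fiddler instance URL
    org="my_org",                    # Fiddler organization
    project="langchain_monitoring",  # project to log events into
    model="llm_chain_v1",            # model name registered in Fiddler
    api_key="FIDDLER_API_KEY",
)

llm = ChatOpenAI(callbacks=[fiddler_handler])
llm.invoke("How far is the Moon from the Earth?")
```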

Co-authored-by: Barun Halder <barun@fiddler.ai>
2024-02-25 18:17:03 -08:00
Christophe Bornet
b8b5ce0c8c
astradb: Add AstraDBChatMessageHistory to langchain-astradb package (#17732)
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-25 18:14:49 -08:00
Jacob Lee
c9eac3287e
docs[patch]: Remove redundant Pinecone import (#18079)
CC @efriis
2024-02-24 19:27:54 -08:00
Erick Friis
e85948d46b
docs: fireworks tool calling docs (#18057) 2024-02-24 00:49:11 +00:00
Erick Friis
1a3383fba1
docs: fireworks fixes (#18056) 2024-02-23 15:58:53 -08:00
Yufei (Benny) Chen
ee6a773456
fireworks[patch]: Add Fireworks partner packages (#17694)
---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-02-23 20:45:47 +00:00
Erick Friis
11cf95e810
docs: recommend lambdas over runnablebranch (#18033) 2024-02-23 11:34:27 -08:00
Chad Juliano
50ba3c68bb
community[minor]: add Kinetica LLM wrapper (#17879)
**Description:** Initial pull request for Kinetica LLM wrapper
**Issue:** N/A
**Dependencies:** No new dependencies for unit tests. Integration tests
require gpudb, typeguard, and faker
**Twitter handle:** @chad_juliano

Note: There is another pull request for Kinetica vectorstore. Ultimately
we would like to make a partner package but we are starting with a
community contribution.
2024-02-22 16:02:00 -08:00
Matt
6ef12fdfd2
docs: Update Azure Search vector store notebook (#17901)
- **Description:** Update the Azure Search vector store notebook for the
latest version of the SDK

---------

Co-authored-by: Matt Gotteiner <[email protected]>
2024-02-22 15:59:43 -08:00
Averi Kitsch
c05cbf0533
docs: Update Google Provider documentation (#17970)
**Description:** Clean up Google product names and fix document loader
section
**Issue:** NA
**Dependencies:** None

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-22 15:58:52 -08:00
Erick Friis
ed789be8f4
docs, templates: update schema imports to core (#17885)
- chat models, messages
- documents
- agentaction/finish
- baseretriever,document
- stroutputparser
- more messages
- basemessage
- format_document
- baseoutputparser

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-22 15:58:44 -08:00
Leonid Ganeline
f685d2f50c
docs: partner package list (#17978)
Updated partner package list
2024-02-22 18:23:07 -05:00
bear
e8633e53c4
docs: Rerun the Tongyi Qwen model to fix incorrect responses. (#17693)
This PR updates the docs of the Tongyi Qwen model.
1. Fix the previously incorrect responses of Tongyi Qwen.
2. Rewrite the example with LCEL.
2024-02-22 13:20:04 -08:00
Mateusz Szewczyk
f6e3aa9770
docs: update IBM watsonx.ai docs (#17932)
- **Description:** Update IBM watsonx.ai docs and add IBM as a provider
docs
- **Dependencies:**
[ibm-watsonx-ai](https://pypi.org/project/ibm-watsonx-ai/),
  - **Tag maintainer:**

2024-02-22 10:22:18 -08:00
Erick Friis
a53370a060
pinecone[patch], docs: PineconeVectorStore, release 0.0.3 (#17896) 2024-02-22 08:24:08 -08:00
Graden Rea
e5e38e89ce
partner: Add groq partner integration and chat model (#17856)
Description: Add a Groq chat model
issue: TODO
Dependencies: groq
Twitter handle: N/A
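
A minimal usage sketch (assumes `GROQ_API_KEY` is set in the environment; the
model name is illustrative):

```python
# Sketch; assumes GROQ_API_KEY is set in the environment, model name is illustrative.
from langchain_groq import ChatGroq

chat = ChatGroq(temperature=0, model_name="mixtral-8x7b-32768")
print(chat.invoke("Explain what an LPU is in one sentence.").content)
```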
2024-02-22 07:36:16 -08:00
William FH
42f158c128
docs: typo (#17710) 2024-02-21 16:53:41 -08:00
Ian
3019a594b7
community[minor]: Add tidb loader support (#17788)
This pull request supports loading data from a TiDB database with
LangChain.

A simple usage:
```python
from  langchain_community.document_loaders import TiDBLoader

CONNECTION_STRING = "mysql+pymysql://root@127.0.0.1:4000/test"

QUERY = "select id, name, description from items;"
loader = TiDBLoader(
    connection_string=CONNECTION_STRING,
    query=QUERY,
    page_content_columns=["name", "description"],
    metadata_columns=["id"],
)
documents = loader.load()
print(documents)
```
2024-02-21 16:42:33 -08:00
Michael Feil
242981b8f0
community[minor]: infinity embedding local option (#17671)
**Drop-in replacement for sentence-transformers inference.**

https://github.com/langchain-ai/langchain/discussions/17670

TL;DR from the discussion above: around a 4x-22x speedup over using
SentenceTransformers / Hugging Face embeddings. For more info:
https://github.com/michaelfeil/infinity (pure-Python dependency)

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-02-21 16:33:13 -08:00
aditya thomas
d9aa11d589
docs: Change module import path for SQLDatabase in the documentation (#17874)
**Description:** This PR changes the module import path for SQLDatabase
in the documentation
**Issue:** Updates the documentation to reflect the move of integrations
to langchain-community
2024-02-21 15:57:30 -08:00
Christophe Bornet
f8a3b8e83f
docs: Update langchain-astradb README with AstraDBStore (#17864) 2024-02-21 15:51:40 -08:00
Matthew Kwiatkowski
144f59b5fe
docs: Fix URL typo in tigris.ipynb (#17894)
- **Description:** The URL in the tigris tutorial was htttps instead of
https, leading to a bad link.
  - **Issue:** N/A
  - **Dependencies:** N/A
  - **Twitter handle:** Speucey
2024-02-21 15:39:38 -08:00
volodymyr-memsql
0a9a519a39
community[patch]: Added add_images method to SingleStoreDB vector store (#17871)
In this pull request, we introduce the add_images method to the
SingleStoreDB vector store class, expanding its capabilities to handle
multi-modal embeddings seamlessly. This method facilitates the
incorporation of image data into the vector store by associating each
image's URI with corresponding document content, metadata, and either
pre-generated embeddings or embeddings computed using the embed_image
method of the provided embedding object.

The change includes integration tests validating the behavior of
add_images. Additionally, we provide a notebook showcasing the usage of
this new method.
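
A rough sketch of how the new method might be called; the parameter names are
assumptions based on the description above, not a confirmed signature, and the
connection URL and image URIs are illustrative:

```python
# Sketch; parameter names are assumed from the PR description, not a confirmed
# signature. SINGLESTOREDB_URL and the image URIs are illustrative.
import os

os.environ["SINGLESTOREDB_URL"] = "root:pass@localhost:3306/db"

from langchain_community.vectorstores import SingleStoreDB
from langchain_openai import OpenAIEmbeddings

vectorstore = SingleStoreDB(OpenAIEmbeddings(), table_name="multimodal_docs")

# Each image URI is stored alongside document content/metadata; embeddings are
# either precomputed (as here) or derived via the embedding object's
# embed_image method when it provides one.
vectorstore.add_images(
    uris=["images/cat.png", "images/dog.png"],
    embeddings=[[0.1] * 1536, [0.2] * 1536],  # illustrative precomputed vectors
)
```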

---------

Co-authored-by: Volodymyr Tkachuk <vtkachuk-ua@singlestore.com>
2024-02-21 15:16:32 -08:00
Guangdong Liu
7735721929
docs: update sparkllm intro doc (#17848)
**Description:** update sparkllm intro doc.
**Issue:** None
**Dependencies:** None
**Twitter handle:** None
2024-02-21 15:02:20 -08:00
Guangdong Liu
47b1b7092d
community[minor]: Add SparkLLM to community (#17702) 2024-02-20 11:23:47 -08:00
Guangdong Liu
3ba1cb8650
community[minor]: Add SparkLLM Text Embedding Model and SparkLLM introduction (#17573) 2024-02-20 11:22:27 -08:00
Virat Singh
92e52e89ca
community: Add PolygonTickerNews Tool (#17808)
Description:
In this PR, I am adding a PolygonTickerNews Tool, which can be used to
get the latest news for a given ticker / stock.

Twitter handle: [@virattt](https://twitter.com/virattt)
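
A short usage sketch (module paths are assumed from the community tools layout,
and `POLYGON_API_KEY` is expected in the environment):

```python
# Sketch; module paths are assumed, and POLYGON_API_KEY is expected in the environment.
from langchain_community.tools.polygon.ticker_news import PolygonTickerNews
from langchain_community.utilities.polygon import PolygonAPIWrapper

ticker_news = PolygonTickerNews(api_wrapper=PolygonAPIWrapper())
print(ticker_news.run("AAPL"))  # latest news for the AAPL ticker
```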
2024-02-20 10:15:29 -08:00
CogniJT
919ebcc596
community[minor]: CogniSwitch Agent Toolkit for LangChain (#17312)
**Description**: CogniSwitch focuses on making GenAI usage more
reliable. It abstracts out the complexity & decision making required for
tuning processing, storage & retrieval. Using simple APIs, documents and
URLs can be processed into a Knowledge Graph that can then be used to
answer questions.

**Dependencies**: No dependencies. Just network calls & API key required
**Tag maintainer**: @hwchase17
**Twitter handle**: https://github.com/CogniSwitch
**Documentation**: Please check
`docs/docs/integrations/toolkits/cogniswitch.ipynb`
**Tests**: The usual tool & toolkits tests using `test_imports.py`

PR has passed linting and testing before this submission.

---------

Co-authored-by: Saicharan Sridhara <145636106+saiCogniswitch@users.noreply.github.com>
2024-02-19 10:54:13 -08:00
Aymeric Roucher
0d294760e7
Community: Fuse HuggingFace Endpoint-related classes into one (#17254)
## Description
Fuse HuggingFace Endpoint-related classes into one:
-
[HuggingFaceHub](5ceaf784f3/libs/community/langchain_community/llms/huggingface_hub.py)
-
[HuggingFaceTextGenInference](5ceaf784f3/libs/community/langchain_community/llms/huggingface_text_gen_inference.py)
- and
[HuggingFaceEndpoint](5ceaf784f3/libs/community/langchain_community/llms/huggingface_endpoint.py)

Are fused into
- HuggingFaceEndpoint

## Issue
The duplication of classes was creating a lack of clarity, and the
additional effort needed to develop separate classes leads to issues like [this
hack](5ceaf784f3/libs/community/langchain_community/llms/huggingface_endpoint.py (L159)).

## Dependencies

None; this removes dependencies.

## Twitter handle

If you want to post about this: @AymericRoucher

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-19 10:33:15 -08:00
Raghav Dixit
6c18f73ca5
community[patch]: LanceDB integration improvements/fixes (#16173)
Hi, I'm from the LanceDB team.

Improves LanceDB integration by making it easier to use - now you aren't
required to create tables manually and pass them in the constructor,
although that is still backward compatible.

Bug fix - pandas was being used even though it's not a dependency for
LanceDB or langchain

PS - this issue was raised a few months ago but lost traction. It is a
feature improvement for our users; kindly review this. Thanks!
2024-02-19 10:22:02 -08:00
Guangdong Liu
73edf17b4e
community[minor]: Add Apache Doris as vector store (#17527)
---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-18 12:05:58 -07:00
Stefano Lottini
2a239710a0
docs: update astradb imports to in docs/sample notebook to import from partner package (#17627)
This PR replaces the imports of the Astra DB vector store with the
newly-released partner package, in compliance with the deprecation
notice now attached to the community "legacy" store.
2024-02-16 11:30:13 -05:00
Stefano Lottini
5240ecab99
astradb: bootstrapping Astra DB as Partner Package (#16875)
**Description:** This PR introduces a new "Astra DB" Partner Package.

So far only the vector store class is _duplicated_ there, with all others
to follow once this is validated and established.

Along with the move to separate package, incidentally, the class name
will change `AstraDB` => `AstraDBVectorStore`.

The strategy has been to duplicate the module (with prospected removal
from community at LangChain 0.2). Until then, the code will be kept in
sync with minimal, known differences (there is a makefile target to
automate drift control. Out of convenience with this check, the
community package has a class `AstraDBVectorStore` aliased to `AstraDB`
at the end of the module).

With this PR several bugfixes and improvement come to the vector store,
as well as a reshuffling of the doc pages/notebooks (Astra and
Cassandra) to align with the move to a separate package.

**Dependencies:** A brand new pyproject.toml in the new package, no
changes otherwise.

**Twitter handle:** `@rsprrs`

---------

Co-authored-by: Christophe Bornet <cbornet@hotmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-02-15 15:50:59 -08:00
Erick Friis
f6f0ca1bae
docs: ai21 sidebars (#17600) 2024-02-15 14:43:48 -08:00
Erick Friis
6cc6faa00e
ai21: init package (#17592)
Co-authored-by: Asaf Gardin <asafg@ai21.com>
Co-authored-by: etang <etang@ai21.com>
Co-authored-by: asafgardin <147075902+asafgardin@users.noreply.github.com>
2024-02-15 12:25:05 -08:00
Moshe Berchansky
20a56fe0a2
community[minor]: Add QuantizedEmbedders (#17391)
**Description:** 
* adding Quantized embedders using optimum-intel and
intel-extension-for-pytorch.
* added mdx documentation and example notebooks 
* added embedding import testing.

**Dependencies:** 
optimum = {extras = ["neural-compressor"], version = "^1.14.0", optional
= true}
intel_extension_for_pytorch = {version = "^2.2.0", optional = true}

Dependencies have been added to pyproject.toml for the community lib.  

**Twitter handle:** @peter_izsak

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-15 11:01:24 -08:00
William FH
53b8c86309
fix dataset link (#17565) 2024-02-14 23:18:07 -08:00
nvpranak
91bcc9c5c9
community[minor]: Nemo embeddings(#16206)
This PR adds support for NVIDIA NeMo embeddings (issue #16095).

---------

Co-authored-by: Praveen Nakshatrala <pnakshatrala@gmail.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-14 13:25:42 -08:00
Mateusz Szewczyk
916332ef5b
ibm: added partners package langchain_ibm, added llm (#16512)
- **Description:** Added `langchain_ibm` as a langchain partner
package for the IBM [watsonx.ai](https://www.ibm.com/products/watsonx-ai) LLM
provider (`WatsonxLLM`)
- **Dependencies:**
[ibm-watsonx-ai](https://pypi.org/project/ibm-watsonx-ai/),
  - **Tag maintainer:**
---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-02-14 12:12:19 -08:00
wulixuan
c776cfc599
community[minor]: integrate with model Yuan2.0 (#15411)
1. integrate with
[`Yuan2.0`](https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/README-EN.md)
2. update `langchain.llms`
3. add a new doc for [Yuan2.0
integration](docs/docs/integrations/llms/yuan2.ipynb)

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-14 11:46:20 -08:00
volodymyr-memsql
e36bc379f2
community[patch]: Add vector index support to SingleStoreDB VectorStore (#17308)
This pull request introduces support for various Approximate Nearest
Neighbor (ANN) vector index algorithms in the VectorStore class,
starting from version 8.5 of SingleStore DB. Leveraging this enhancement
enables users to harness the power of vector indexing, significantly
boosting search speed, particularly when handling large sets of vectors.

---------

Co-authored-by: Volodymyr Tkachuk <vtkachuk-ua@singlestore.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-14 11:43:12 -08:00
Kate Silverstein
0bc4a9b3fc
community[minor]: Adds Llamafile as an LLM (#17431)
* **Description:** Adds a simple LLM implementation for interacting with
[llamafile](https://github.com/Mozilla-Ocho/llamafile)-based models.
* **Dependencies:** N/A
* **Issue:** N/A

**Detail**
[llamafile](https://github.com/Mozilla-Ocho/llamafile) lets you run LLMs
locally from a single file on most computers without installing any
dependencies.

To use the llamafile LLM implementation, the user needs to:

1. Download a llamafile e.g.
https://huggingface.co/jartine/TinyLlama-1.1B-Chat-v1.0-GGUF/resolve/main/TinyLlama-1.1B-Chat-v1.0.Q5_K_M.llamafile?download=true
2. Make the file executable.
3. Run the llamafile in 'server mode'. (All llamafiles come packaged
with a lightweight server; by default, the server listens at
`http://localhost:8080`.)


```bash
wget https://url/of/model.llamafile
chmod +x model.llamafile
./model.llamafile --server --nobrowser
```

Now, the user can invoke the LLM via the LangChain client:

```python
from langchain_community.llms.llamafile import Llamafile

llm = Llamafile()

llm.invoke("Tell me a joke.")
```
2024-02-14 11:15:24 -08:00
Erick Friis
d7418acbe1
nomic[patch]: release 0.0.2, dimensionality (#17534)
- nomic[patch]: release 0.0.2
- x
2024-02-14 08:38:07 -08:00
Bagatur
5dca107621
docs: update providers (#17488) 2024-02-13 14:00:15 -08:00
Rave Harpaz
90f55e6bd1
Documentation/add update documentation for oci (#17473)

- **PR title**: docs: add & update docs for Oracle Cloud Infrastructure
(OCI) integrations

- **Description**: adding and updating documentation for two
integrations - OCI Generative AI & OCI Data Science
(1) adding integration page for OCI Generative AI embeddings (@baskaryan
request,
         docs/docs/integrations/text_embedding/oci_generative_ai.ipynb)
(2) updating integration page for OCI Generative AI llms
(docs/docs/integrations/llms/oci_generative_ai.ipynb)
(3) adding platform documentation for OCI (@baskaryan request,
docs/docs/integrations/platforms/oci.mdx). this combines the
          integrations of OCI Generative AI & OCI Data Science
(4) if possible, requesting to be added to 'Featured Community
Providers', and supplying a modified
docs/docs/integrations/platforms/index.mdx to reflect the addition
- **Issue:** none

 - **Dependencies:** no new dependencies 

 - **Twitter handle:**

---------

Co-authored-by: MING KANG <ming.kang@oracle.com>
2024-02-13 13:26:23 -08:00
wulixuan
5d06797905
community[minor]: integrate chat models with Yuan2.0 (#16575)
1. integrate chat models with
[`Yuan2.0`](https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/README-EN.md)
2. add a new doc for [Yuan2.0
integration](docs/docs/integrations/llms/yuan2.ipynb)
 
Yuan2.0 is a new generation Fundamental Large Language Model developed
by IEIT System. We have published all three models, Yuan 2.0-102B, Yuan
2.0-51B, and Yuan 2.0-2B.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-13 10:55:14 -08:00
Erick Friis
065cde69b1
google-genai[patch]: release 0.0.9, safety settings docs (#17432) 2024-02-13 10:01:25 -08:00
Ian Gregory
e5472b5eb8
Framework for supporting more languages in LanguageParser (#13318)
## Description

I am submitting this for a school project as part of a team of 5. Other
team members are @LeilaChr, @maazh10, @Megabear137, @jelalalamy. This PR
also has contributions from community members @Harrolee and @Mario928.

Initial context is in the issue we opened (#11229).

This pull request adds:

- Generic framework for expanding the languages that `LanguageParser`
can handle, using the
[tree-sitter](https://github.com/tree-sitter/py-tree-sitter#py-tree-sitter)
parsing library and existing language-specific parsers written for it
- Support for the following additional languages in `LanguageParser`:
  - C
  - C++
  - C#
  - Go
- Java (contributed by @Mario928
https://github.com/ThatsJustCheesy/langchain/pull/2)
  - Kotlin
  - Lua
  - Perl
  - Ruby
  - Rust
  - Scala
- TypeScript (contributed by @Harrolee
https://github.com/ThatsJustCheesy/langchain/pull/1)

Here is the [design
document](https://docs.google.com/document/d/17dB14cKCWAaiTeSeBtxHpoVPGKrsPye8W0o_WClz2kk)
if curious, but no need to read it.

## Issues

- Closes #11229
- Closes #10996
- Closes #8405

## Dependencies

`tree_sitter` and `tree_sitter_languages` on PyPI. We have tried to add
these as optional dependencies.

## Documentation

We have updated the list of supported languages, and also added a
section to `source_code.ipynb` detailing how to add support for
additional languages using our framework.

## Maintainer

- @hwchase17 (previously reviewed
https://github.com/langchain-ai/langchain/pull/6486)

Thanks!!

## Git commits

We will gladly squash any/all of our commits (esp merge commits) if
necessary. Let us know if this is desirable, or if you will be
squash-merging anyway.

---------

Co-authored-by: Maaz Hashmi <mhashmi373@gmail.com>
Co-authored-by: LeilaChr <87657694+LeilaChr@users.noreply.github.com>
Co-authored-by: Jeremy La <jeremylai511@gmail.com>
Co-authored-by: Megabear137 <zubair.alnoor27@gmail.com>
Co-authored-by: Lee Harrold <lhharrold@sep.com>
Co-authored-by: Mario928 <88029051+Mario928@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-02-13 08:45:49 -08:00
Sridhar Ramaswamy
9f1cbbc6ed
community[minor]: Add pebblo safe document loader (#16862)
- **Description:** The Pebblo open-source project enables developers to
safely load data into their Gen AI apps. It identifies semantic topics and
entities found in the loaded data and summarizes them in a
developer-friendly report.
  - **Dependencies:** none
  - **Twitter handle:** srics

@hwchase17
2024-02-12 21:56:12 -08:00
Massimiliano Pronesti
df7cbd6fbb
community[minor]: add FlashRank ranker (#16785)
**Description:** This PR adds support for
[flashrank](https://github.com/PrithivirajDamodaran/FlashRank) for
reranking as an alternative to Cohere.

I'm not sure `libs/langchain` is the right place for this change. At
first, I wanted to put it under `libs/community`. All the compressors
were under `libs/langchain/retrievers/document_compressors` though. Hope
this makes sense!
2024-02-12 20:00:52 -08:00
Max Jakob
604e117411
docs: another auth method for ElasticsearchStore (#16831)
Users can also use their own Elasticsearch client object to configure
the connection.
2024-02-12 19:29:54 -08:00
Zeeland
4986e7227e
docs: rm unnecessary imports (#16876)
- **Description:** optimize the memory usage document
  - **Issue:** it was missing part of the install guide
2024-02-12 19:25:54 -08:00
Leonid Ganeline
b87d6f9f48
docs: Redis page update (#16906)
- Reordered sections
- Applied consistent formatting
- Fixed headers (there were 2 H1 headers; this breaks the ToC)
- Added `Settings` header and moved all related sections under it
2024-02-12 18:23:35 -08:00
Naveenkhasyap
841e5f514e
docs: Updated doc for integrations/chat/anthropic_functions #15664 (#17226)
Description: Updated doc for integrations/chat/anthropic_functions with
new functions: invoke. Changed structure of the document to match the
required one.
Issue: https://github.com/langchain-ai/langchain/issues/15664
Dependencies: None
Twitter handle: None

---------

Co-authored-by: NaveenMaltesh <naveen@onmeta.in>
2024-02-12 17:09:38 -08:00
Pennlaine
e1bc623f8f
docs: Updated docs for sitemap loader to use correct URL (#17395)
- **Description:** 
Updated URL for sitemap loader from
"https://langchain.readthedocs.io/sitemap.xml" to
"https://api.python.langchain.com/sitemap.xml"
  - **Issue:** Fixes #17236
2024-02-12 16:20:32 -08:00
Ikko Eltociear Ashimine
b48fa8b695
docs: fix typo in vikingdb.ipynb (#17429)
retreival -> retrieval
2024-02-12 15:51:12 -08:00
Abhijeeth Padarthi
584b647b96
community[minor]: AWS Athena Document Loader (#15625)
- **Description:** Adds the document loader for [AWS
Athena](https://aws.amazon.com/athena/), a serverless and interactive
analytics service.
  - **Dependencies:** Added boto3 as a dependency
2024-02-12 12:53:40 -08:00
ByeongUk Choi
ac970c9497
Update Docs for TFIDFRetriever Import Path (#17322)
This PR updates the `TF-IDF.ipynb` documentation to reflect the new
import path for TFIDFRetriever in the langchain-community package. The
previous path, `from langchain.retrievers import TFIDFRetriever`, has
been updated to `from langchain_community.retrievers import
TFIDFRetriever` to align with the latest changes in the langchain
library.
2024-02-11 21:26:08 -08:00
Michael Hunger
1c902ce3d1
tools:docs: update google_search.ipynb - change tool name (#17354)
according to https://youtu.be/rZus0JtRqXE?si=aFo1JTDnu5kSEiEN&t=678 by
@efriis

- **Description:** Seems the requirements for tool names have changed
and spaces are no longer allowed. Changed the tool name from Google
Search to google_search in the notebook
  - **Issue:** n/a
  - **Dependencies:** none
  - **Twitter handle:** @mesirii
2024-02-11 21:25:19 -08:00
jiangzf93
d6a1c88ca7
docs: update documentation for file system tool integration (#17377)
- **Description:** Update the docs for the tool integration module `file
system`
- **Issue:** [For New Contributors: Update Integration Documentation
#15664](https://github.com/langchain-ai/langchain/issues/15664#top)
  - **Dependencies:** N/A
2024-02-11 21:19:40 -08:00
Pennlaine
2384267900
Updated doc for tools/pubmed with new functions: invoke. (#17378)
Updated doc for integrations/chat/anthropic_functions #15664 

  - **Description:**
Adds `pip install` instructions
Update `run` with `invoke`

  - **Issue:** 
Fixes #15664
2024-02-11 21:19:31 -08:00
Erick Friis
3a2eb6e12b
infra: add print rule to ruff (#16221)
Added noqa for existing prints. Can slowly remove / will prevent more
being intro'd
2024-02-09 16:13:30 -08:00
Quang Hoa
54c1fb3f25
community[patch]: Make some functions work with Milvus (#10695)
**Description**
Make some functions work with Milvus:
1. get_ids: Get primary keys by field in the metadata
2. delete: Delete one or more entities by ids
3. upsert: Update/Insert one or more entities

**Issue**
None
**Dependencies**
None
**Tag maintainer:**
@hwchase17 
**Twitter handle:**
None
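
A rough sketch of how the new helpers might be called; the method names come
from the list above, while the exact signatures and connection settings are
assumptions for illustration:

```python
# Sketch; method names follow the description above, signatures are assumptions,
# and the connection args point at an illustrative local Milvus instance.
from langchain_community.vectorstores import Milvus
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings

vector_store = Milvus(
    embedding_function=OpenAIEmbeddings(),
    connection_args={"host": "127.0.0.1", "port": "19530"},
)

# 1. Get primary keys by a metadata field expression
ids = vector_store.get_ids(expr='source == "faq.md"')

# 2. Upsert: update existing entities or insert new ones
vector_store.upsert(ids=ids, documents=[Document(page_content="updated FAQ text")])

# 3. Delete entities by primary keys
vector_store.delete(ids=ids)
```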

---------

Co-authored-by: HoaNQ9 <hoanq.1811@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-02-09 15:21:31 -08:00
Vadim Kudlay
5f9ac6986e
nvidia-ai-endpoints[patch]: model arguments (e.g. temperature) on construction bug (#17290)
- **Issue:** Issue with model argument support (been there for a while
actually):
- Non-specially-handled arguments like temperature don't work when
passed through constructor.
- Such arguments DO work quite well with `bind`, but also do not abide
by field requirements.
- Since initial push, server-side error messages have gotten better and
v0.0.2 raises better exceptions. So maybe it's better to let server-side
handle such issues?
- **Description:**
- Removed ChatNVIDIA's argument fields in favor of
`model_kwargs`/`model_kws` arguments which aggregates constructor kwargs
(from constructor pathway) and merges them with call kwargs (bind
pathway).
- Shuffled a few functions from `_NVIDIAClient` to `ChatNVIDIA` to
streamline construction for future integrations.
- Minor/Optional: Old services didn't have stop support, so client-side
stopping was implemented. Now do both.
- **Any Breaking Changes:** Minor breaking changes if you strongly rely
on chat_model.temperature, etc. This is captured by
chat_model.model_kwargs.

PR passes tests and example notebooks and example testing. Still gonna
chat with some people, so leaving as draft for now.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-02-09 13:46:02 -08:00
Scott Nath
a32798abd7
community: Add you.com utility, update you retriever integration docs (#17014)

- **Description:** changes to you.com files
    - general cleanup
- adds community/utilities/you.py, moving bulk of code from retriever ->
utility
    - removes `snippet` as endpoint
    - adds `news` as endpoint
    - adds more tests

<s>**Description: update community MAKE file** 
    - adds `integration_tests`
    - adds `coverage`</s>

- **Issue:** the issue # it fixes if applicable,
- [For New Contributors: Update Integration
Documentation](https://github.com/langchain-ai/langchain/issues/15664#issuecomment-1920099868)
- **Dependencies:** n/a
- **Twitter handle:** @scottnath
- **Mastodon handle:** scottnath@mastodon.social

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-08 13:47:50 -08:00
Kartheek Yakkala
3a22157d92
docs: Added LCEL for alibabacloud and anyscale (#17252)
---------

Co-authored-by: KARTHEEK YAKKALA <kartheekyakkala@KARTHEEKs-Air.lan>
Co-authored-by: KARTHEEK YAKKALA <kartheekyakkala.se@gmail.com>
2024-02-08 13:18:09 -08:00
Jorge Campo
88609565a3
docs: Fix typo in github.ipynb (#17259)
'agiven' -> 'a given'
2024-02-08 12:03:00 -08:00
Bagatur
00a09e1b71
docs: use PromptTemplate.from_template (#17218)
Ran
```python
import glob
import re

def update_prompt(x):
    return re.sub(
        r"(?P<start>\b)PromptTemplate\(template=(?P<template>.*), input_variables=(?:.*)\)",
        "\g<start>PromptTemplate.from_template(\g<template>)",
        x
    )


for fn in glob.glob("docs/**/*", recursive=True):
    try:
        content = open(fn).readlines()
    except:
        continue
    content = [update_prompt(l) for l in content]
    with open(fn, "w") as f:
        f.write("".join(content))
```
2024-02-07 19:52:42 -08:00
Eugene Yurtsev
780e84ae79
community[minor]: SQLDatabase Add fetch mode cursor, query parameters, query by selectable, expose execution options, and documentation (#17191)
- **Description:** Improve `SQLDatabase` adapter component to promote
code re-use, see
[suggestion](https://github.com/langchain-ai/langchain/pull/16246#pullrequestreview-1846590962).
  - **Needed by:** GH-16246
  - **Addressed to:** @baskaryan, @cbornet 

## Details
- Add `cursor` fetch mode
- Accept SQL query parameters
- Accept both `str` and SQLAlchemy selectables as query expression
- Expose `execution_options`
- Documentation page (notebook) about `SQLDatabase` [^1]
See [About
SQLDatabase](https://github.com/langchain-ai/langchain/blob/c1c7b763/docs/docs/integrations/tools/sql_database.ipynb).

[^1]: Apparently there hasn't been any yet?

---------

Co-authored-by: Andreas Motl <andreas.motl@crate.io>
2024-02-07 22:23:43 -05:00
Leonid Ganeline
d903fa313e
docs: titles fix (#17206)
Several notebooks have Title != file name. That results in corrupted
sorting in Navbar (ToC).
- Fixed titles and file names.
- Changed text formats to the consistent form
- Redirected renamed files in the `Vercel.json`
2024-02-07 22:09:34 -05:00
Bagatur
aeb6b38901
docs: cleanup fleet integration (#17214)
Causing search issues
2024-02-07 17:18:48 -08:00
Leonid Ganeline
0af0fc5d25
docs integraions/providers nav fix (#17148)
Issue: The `Providers` page is presented as the index page (on the
`Providers` item) and as the `Providers/Providers` item. The latter
should not be in the menu. See the picture.

![image](https://github.com/langchain-ai/langchain/assets/2256422/6894023f-f13a-4f0d-8fe2-ed5b0ae2bdd2)
This PR fixes this.
2024-02-06 20:33:14 -08:00
Erick Friis
d397721a34
docs: format (#17143) 2024-02-06 16:32:53 -08:00
Arno Schutijzer
863f96b2e0
docs: fix typo in ollama notebook (#17127)
- **Description:** typo fix in ollama notebook
2024-02-06 16:54:40 -05:00
Junyoung Park
1ed73f1992
community[minor]: Add SelfQueryRetriever support to PGVector (#16991)
- **Description:** Add SelfQueryRetriever support to PGVector
  - **Issue:** -
  - **Dependencies:** -
  - **Twitter handle:** -
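
A condensed sketch of wiring a SelfQueryRetriever on top of PGVector; the
connection string, collection name and metadata fields are illustrative:

```python
# Sketch; connection string, collection name and metadata fields are illustrative.
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_community.vectorstores.pgvector import PGVector
from langchain_core.documents import Document
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

docs = [Document(page_content="A space western.", metadata={"year": 2021})]
vectorstore = PGVector.from_documents(
    docs,
    OpenAIEmbeddings(),
    collection_name="movies",
    connection_string="postgresql+psycopg2://user:pass@localhost:5432/vectordb",
)

metadata_field_info = [
    AttributeInfo(name="year", description="Release year of the movie", type="integer"),
]
retriever = SelfQueryRetriever.from_llm(
    ChatOpenAI(temperature=0),
    vectorstore,
    "Brief summary of a movie",  # description of the document contents
    metadata_field_info,
)
retriever.invoke("movies released after 2010")
```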

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-06 10:50:50 -08:00
Frank
ef082c77b1
community[minor]: add github file loader to load any github file content b… (#15305)
### Description
Support loading any GitHub file content based on its file extension.

Why not use the [git
loader](https://python.langchain.com/docs/integrations/document_loaders/git#load-existing-repository-from-disk)?
The git loader clones the whole repo even when only part of the files are of
interest, which is too heavy. This GithubFileLoader downloads only the files
you are interested in.

### Twitter handle
my twitter: @shufanhaotop
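
A short usage sketch (repo, access token and file filter are illustrative):

```python
# Sketch; repo, access token and filter values are illustrative.
from langchain_community.document_loaders import GithubFileLoader

loader = GithubFileLoader(
    repo="langchain-ai/langchain",
    access_token="<your GitHub personal access token>",
    github_api_url="https://api.github.com",
    file_filter=lambda file_path: file_path.endswith(".md"),  # only fetch markdown files
)
docs = loader.load()
```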

---------

Co-authored-by: Hao Fan <h_fan@apple.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-06 09:42:33 -08:00
老阿張
ac662b3698
docs: Fix typo in amadeus.ipynb (#16916)
Description: "enviornment should be  environment"? 🤔
Issue: Typo
Dependencies: Nope
Twitter handle: laoazhang
2024-02-06 09:42:05 -08:00
Jan de Boer
2d8015554c
docs: Link to Brave Website added (#16958)
**Description:** Link to the Brave Website added to the
`brave-search.ipynb` notebook.
This notebook is shown in the docs as an example for the brave tool.

**Issue:** There was no reference on where / how to get an API key
 
**Dependencies:** none
 
**Twitter handle:** not for this one :)
2024-02-05 18:29:16 -08:00
os1ma
fd88e0f800
docs: update StreamlitCallbackHandler example (#16970)
- **Description:** docs: update StreamlitCallbackHandler example.
  - **Issue:** None
  - **Dependencies:** None

I have updated the example for StreamlitCallbackHandler in the
documentation below.
https://python.langchain.com/docs/integrations/callbacks/streamlit

Previously, the example used `initialize_agent`, which has been
deprecated, so I've updated it to use `create_react_agent` instead. Many
langchain users are likely searching examples of combining
`create_react_agent` or `openai_tools_agent_chain` with
StreamlitCallbackHandler. I'm sure this update will be really helpful
for them!

Unfortunately, writing unit tests for this example is difficult, so I
have not written any tests. I have run this code in a standalone Python
script file and ensured it runs correctly.
2024-02-05 18:20:59 -08:00
Marc Mahe
f08a9139d2
docs: update mistral docs for version 0.1+ (#17011)
**Description:**
Updated integration page for mistralai.
2024-02-05 18:03:12 -08:00
Ikko Eltociear Ashimine
5f5f5acbc5
docs: fix typo in dspy.ipynb (#16996)
langugage -> language
2024-02-05 17:31:06 -08:00
Alex Boury
334b6ebdf3
community[minor]: Breebs docs retriever (#16578)
- **Description:** Implementation of breeb retriever with integration
tests ->
libs/community/tests/integration_tests/retrievers/test_breebs.py and
documentation (notebook) ->
docs/docs/integrations/retrievers/breebs.ipynb.
  - **Dependencies:** None
2024-02-05 15:51:08 -08:00
Shivani Modi
fcb875629d
docs: Updating documentation for Konko provider (#16953)
- **Description:** A small update to the Konko provider documentation.

---------

Co-authored-by: Shivani Modi <shivanimodi@Shivanis-MacBook-Pro.local>
2024-02-05 15:49:13 -08:00
Supreet Takkar
ae33979813
community[patch]: Allow adding ARNs as model_id to support Amazon Bedrock custom models (#16800)
- **Description:** Adds an additional class variable to `BedrockBase`
called `provider` that allows sending a model provider such as amazon,
cohere, ai21, etc.
Up until now, the model provider is extracted from the `model_id` using
the first part before the `.`, such as `amazon` for
`amazon.titan-text-express-v1` (see [supported list of Bedrock model IDs
here](https://docs.aws.amazon.com/bedrock/latest/userguide/model-ids-arns.html)).
But for custom Bedrock models where the ARN of the provisioned
throughput must be supplied, the `model_id` is like
`arn:aws:bedrock:...`, so the provider cannot be extracted from it. A
model `provider` is required by the LangChain Bedrock class to perform
model-based processing. To allow the same processing to be performed for
custom-models of a specific base model type, passing this `provider`
argument can help solve the issues.
The alternative considered here was the use of
`provider.arn:aws:bedrock:...` which then requires ARN to be extracted
and passed separately when invoking the model. The proposed solution
here is simpler and also does not cause issues for current models
already using the Bedrock class.
  - **Issue:** N/A
  - **Dependencies:** N/A
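
A minimal sketch of the new argument; the ARN and provider value are
illustrative placeholders:

```python
# Sketch; the ARN and provider value are illustrative placeholders.
import boto3
from langchain_community.llms import Bedrock

bedrock_client = boto3.client("bedrock-runtime", region_name="us-east-1")

llm = Bedrock(
    client=bedrock_client,
    # Custom/provisioned-throughput models are addressed by ARN, so the base
    # model provider cannot be parsed from model_id and is passed explicitly.
    model_id="arn:aws:bedrock:us-east-1:123456789012:provisioned-model/abc123",
    provider="cohere",
)
```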

---------

Co-authored-by: Piyush Jain <piyushjain@duck.com>
2024-02-05 14:28:03 -08:00
Vadim Kudlay
75b6fa1134
nvidia-ai-endpoints[patch]: Support User-Agent metadata and minor fixes. (#16942)
- **Description:** Several meta/usability updates, including User-Agent.
  - **Issue:** 
- User-Agent metadata for tracking connector engagement. @milesial
please check and advise.
- Better error messages. Tries harder to find a request ID. @milesial
requested.
- Client-side image resizing for multimodal models. Hope to upgrade to
Assets API solution in around a month.
- `client.payload_fn` allows you to modify payload before network
request. Use-case shown in doc notebook for kosmos_2.
- `client.last_inputs` put back in to allow for advanced
support/debugging.
  - **Dependencies:** 
- Attempts to pull in PIL for image resizing. If not installed, prints
out "please install" message, warns it might fail, and then tries
without resizing. We are waiting on a more permanent solution.

For LC viz: @hinthornw 
For NV viz: @fciannella @milesial @vinaybagade

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-02-05 12:24:53 -08:00
Erick Friis
6ffd5b15bc
pinecone: init pkg (#16556)
2024-02-05 11:55:01 -08:00
Erick Friis
db6af21395
docs: exa contents (#16555) 2024-02-05 11:15:06 -08:00
Nicolas Grenié
54fcd476bb
docs: Update ollama examples with new community libraries (#17007)
- **Description:** Updating one line code sample for Ollama with new
**langchain_community** package
  - **Issue:**
  - **Dependencies:** none
  - **Twitter handle:**  @picsoung
2024-02-04 15:13:29 -08:00
Erick Friis
afdd636999
docs: partner packages (#16960) 2024-02-02 15:12:21 -08:00
Ashley Xu
66adb95284
docs: BigQuery Vector Search went public review and updated docs (#16896)
Update the docs for BigQuery Vector Search
2024-02-02 10:26:44 -08:00
Massimiliano Pronesti
71f9ea33b6
docs: add quantization to vllm and update API (#16950)
- **Description:** Update vLLM docs to include instructions on how to
use quantized models, as well as to replace the deprecated methods.
2024-02-02 10:24:49 -08:00
Radhakrishnan
3b0fa9079d
docs: Updated integration doc for aleph alpha (#16844)
Description: Updated doc for llm/aleph_alpha with new functions: invoke.
Changed structure of the document to match the required one.
Issue: https://github.com/langchain-ai/langchain/issues/15664
Dependencies: None
Twitter handle: None

---------

Co-authored-by: Radhakrishnan Iyer <radhakrishnan.iyer@ibm.com>
2024-02-02 09:28:06 -08:00
Erick Friis
b1a847366c
community: revert SQL Stores (#16912)
This reverts commit cfc225ecb3.


https://github.com/langchain-ai/langchain/pull/15909#issuecomment-1922418097

These will have existed in langchain-community 0.0.16 and 0.0.17.
2024-02-01 16:37:40 -08:00
Harel Gal
93366861c7
docs: Indicated Guardrails for Amazon Bedrock preview status (#16769)
Added notification about limited preview status of Guardrails for Amazon
Bedrock feature to code example.

---------

Co-authored-by: Piyush Jain <piyushjain@duck.com>
2024-02-01 10:41:48 -08:00
Erick Friis
17e886388b
nomic: init pkg (#16853)
Co-authored-by: Lance Martin <lance@langchain.dev>
2024-01-31 16:46:35 -08:00
Bob Lin
546b757303
community: Add ChatGLM3 (#15265)
Add [ChatGLM3](https://github.com/THUDM/ChatGLM3) and updated
[chatglm.ipynb](https://python.langchain.com/docs/integrations/llms/chatglm)

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-01-29 20:30:52 -08:00
Marina Pliusnina
a1ce7ab672
adding parameter for changing the language in SpacyEmbeddings (#15743)
Description: Added a parameter that makes it possible to change the language
model in SpacyEmbeddings. The default value is still the same:
"en_core_web_sm", so it shouldn't affect code that previously did not
specify this parameter, but it is no longer hard-coded and is easy to
change in case you want to use it with other languages or models.

Issue: At Barcelona Supercomputing Center in Aina project
(https://github.com/projecte-aina), a project for Catalan Language
Models and Resources, we would like to use Langchain for one of our
current projects and we would like to comment that Langchain, while
being a very powerful and useful open-source tool, is pretty much
focused on the English language. We would like to contribute to make it a
bit more adaptable for use with other languages.

Dependencies: This change requires the Spacy library and a language
model, specified in the model parameter.

Tag maintainer: @dev2049

Twitter handle: @projecte_aina
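
A one-line sketch of the new parameter; the parameter name (`model_name` here)
and the Catalan model are assumptions for illustration:

```python
# Sketch; the parameter name and model are assumptions, and the spaCy model
# (here Catalan ca_core_news_sm) must be installed separately.
from langchain_community.embeddings import SpacyEmbeddings

embeddings = SpacyEmbeddings(model_name="ca_core_news_sm")
vector = embeddings.embed_query("Bon dia!")
```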

---------

Co-authored-by: Marina Pliusnina <marina.pliusnina@bsc.es>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-01-29 20:30:34 -08:00
baichuan-assistant
f8f2649f12
community: Add Baichuan LLM to community (#16724)
- **Description:** Add Baichuan LLM to integration/llm, also updated
related docs.

Co-authored-by: BaiChuanHelper <wintergyc@WinterGYCs-MacBook-Pro.local>
2024-01-29 20:08:24 -08:00
thiswillbeyourgithub
1d082359ee
community: add support for callable filters in FAISS (#16190)
- **Description:**
Filtering in a FAISS vectorstore is very inflexible and doesn't allow
that many use cases. I think supporting callables like this enables a lot:
regular expressions, conditions on multiple keys, etc. **Note:** I had to
manually alter a test. I don't understand if it was faulty to begin with
or if there is something funky going on.
- **Issue:** None
- **Dependencies:** None
- **Twitter handle:** None
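
A short sketch of a callable filter, assuming the callable receives each
document's metadata dict and returns a bool:

```python
# Sketch; assumes the filter callable receives a document's metadata dict.
from langchain_community.vectorstores import FAISS
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings

docs = [
    Document(page_content="intro", metadata={"source": "guide.pdf", "page": 1}),
    Document(page_content="details", metadata={"source": "guide.pdf", "page": 5}),
]
db = FAISS.from_documents(docs, OpenAIEmbeddings())

# Callable filters allow arbitrary conditions: regexes, multiple keys, etc.
results = db.similarity_search(
    "details",
    filter=lambda metadata: metadata["source"].endswith(".pdf") and metadata["page"] > 2,
)
```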

Signed-off-by: thiswillbeyourgithub <26625900+thiswillbeyourgithub@users.noreply.github.com>
2024-01-29 20:05:56 -08:00
Bassem Yacoube
85e93e05ed
community[minor]: Update OctoAI LLM, Embedding and documentation (#16710)
This PR includes updates for OctoAI integrations:
- The LLM class was updated to fix a bug that occurs with multiple
sequential calls
- The Embedding class was updated to support the new GTE-Large endpoint
released on OctoAI lately
- The documentation jupyter notebook was updated to reflect using the
new LLM sdk
Thank you!
2024-01-29 13:57:17 -08:00
Volodymyr Machula
32c5be8b73
community[minor]: Connery Tool and Toolkit (#14506)
## Summary

This PR implements the "Connery Action Tool" and "Connery Toolkit".
Using them, you can integrate Connery actions into your LangChain agents
and chains.

Connery is an open-source plugin infrastructure for AI.

With Connery, you can easily create a custom plugin with a set of
actions and seamlessly integrate them into your LangChain agents and
chains. Connery will handle the rest: runtime, authorization, secret
management, access management, audit logs, and other vital features.
Additionally, Connery and our community offer a wide range of
ready-to-use open-source plugins for your convenience.

Learn more about Connery:

- GitHub: https://github.com/connery-io/connery-platform
- Documentation: https://docs.connery.io
- Twitter: https://twitter.com/connery_io

## TODOs

- [x] API wrapper
   - [x] Integration tests
- [x] Connery Action Tool
   - [x] Docs
   - [x] Example
   - [x] Integration tests
- [x] Connery Toolkit
  - [x] Docs
  - [x] Example
- [x] Formatting (`make format`)
- [x] Linting (`make lint`)
- [x] Testing (`make test`)
2024-01-29 12:45:03 -08:00
Neli Hateva
c95facc293
langchain[minor], community[minor]: Implement Ontotext GraphDB QA Chain (#16019)
- **Description:** Implement Ontotext GraphDB QA Chain
  - **Issue:** N/A
  - **Dependencies:** N/A
  - **Twitter handle:** @OntotextGraphDB
2024-01-29 12:25:53 -08:00
Abhinav
8e44363ec9
langchain_community: Update documentation for installing llama-cpp-python on windows (#16666)
**Description** : This PR updates the documentation for installing
llama-cpp-python on Windows.

- Updates install command to support pyproject.toml
- Makes CPU/GPU install instructions clearer
- Adds reinstall with GPU support command

**Issue**: Existing
[documentation](https://python.langchain.com/docs/integrations/llms/llamacpp#compiling-and-installing)
lists the following commands for installing llama-cpp-python
```
python setup.py clean
python setup.py install
```
The current version of the repo does not include a `setup.py` and uses a
`pyproject.toml` instead.
This can be replaced with
```
python -m pip install -e .
```
As explained in
https://github.com/abetlen/llama-cpp-python/issues/965#issuecomment-1837268339
**Dependencies**: None
**Twitter handle**: None

---------

Co-authored-by: blacksmithop <angstycoder101@gmaii.com>
2024-01-29 08:41:29 -08:00
Benito Geordie
f3fdc5c5da
community: Added integrations for ThirdAI's NeuralDB with Retriever and VectorStore frameworks (#15280)
**Description:** Adds ThirdAI NeuralDB retriever and vectorstore
integration. NeuralDB is a CPU-friendly and fine-tunable text retrieval
engine.
2024-01-29 08:35:42 -08:00
Jonathan Bennion
815896ff13
langchain: pubmed tool path update in doc (#16716)
- **Description:** The current pubmed tool documentation references
the path in langchain core, not the path to the tool in community. The
old path redirects anyway, but for the efficiency of using the more direct
path, this updates the documentation to reference the new path.
  - **Issue:** doesn't fix an issue
  - **Dependencies:** no dependencies
  - **Twitter handle:** rooftopzen
2024-01-29 08:25:29 -08:00
Lance Martin
1bfadecdd2
Update Slack agent toolkit (#16732)
Co-authored-by: taimoOptTech <132860814+taimo3810@users.noreply.github.com>
2024-01-29 08:03:44 -08:00
Bob Lin
0866a984fe
Update n_gpu_layers"s description (#16685)
The `n_gpu_layers` parameter in `llama.cpp` supports the use of `-1`,
which means to offload all layers to the GPU, so the document has been
updated.

Ref:
35918873b4/llama_cpp/server/settings.py (L29C22-L29C117)


35918873b4/llama_cpp/llama.py (L125)
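
For reference, a minimal sketch of offloading all layers (the model path is
illustrative):

```python
# Sketch; the model path is illustrative.
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="/path/to/model.gguf",
    n_gpu_layers=-1,  # -1 offloads all layers to the GPU
)
```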
2024-01-28 16:46:50 -08:00
Daniel Erenrich
0600998f38
community: Wikidata tool support (#16691)
- **Description:** Adds Wikidata support to langchain. Can read out
documents from Wikidata.
  - **Issue:** N/A
- **Dependencies:** Adds implicit dependencies for
`wikibase-rest-api-client` (for turning items into docs) and
`mediawikiapi` (for hitting the search endpoint)
  - **Twitter handle:** @derenrich

You can see an example of this tool used in a chain
[here](https://nbviewer.org/urls/d.erenrich.net/upload/Wikidata_Langchain.ipynb)
or
[here](https://nbviewer.org/urls/d.erenrich.net/upload/Wikidata_Lars_Kai_Hansen.ipynb)
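
A short usage sketch (module paths are assumed from the community tools layout):

```python
# Sketch; module paths are assumed from the community tools layout.
from langchain_community.tools.wikidata.tool import WikidataQueryRun
from langchain_community.utilities.wikidata import WikidataAPIWrapper

wikidata = WikidataQueryRun(api_wrapper=WikidataAPIWrapper())
print(wikidata.run("Alan Turing"))
```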

2024-01-28 16:45:21 -08:00
Owen Sims
e451c8adc1
Community: Update Ionic Shopping Docs (#16700)
- **Description:** Update to docs as originally introduced in
https://github.com/langchain-ai/langchain/pull/16649 (reviewed by
@baskaryan),
- **Twitter handle:**
[@ioniccommerce](https://twitter.com/ioniccommerce)
2024-01-28 16:39:49 -08:00
Yelin Zhang
bc7607a4e9
docs: remove iprogress warnings (#16697)
- **Description:** removes IProgress warning text from notebooks,
resulting in documentation that is a little nicer to read
2024-01-28 16:38:14 -08:00
Leonid Ganeline
5e73603e8a
docs: DeepInfra provider page update (#16665)
- added description, links
- consistent formatting
- added links to the example pages
2024-01-27 16:05:29 -08:00
Jarod Stewart
0bc397957b
docs: document Ionic Tool (#16649)
- **Description:** Documentation for the Ionic Tool. A shopping
assistant tool that effortlessly adds e-commerce capabilities to your
Agent.
2024-01-26 16:02:07 -08:00
baichuan-assistant
70ff54eace
community[minor]: Add Baichuan Text Embedding Model and Baichuan Inc introduction (#16568)
- **Description:** Adding Baichuan Text Embedding Model and Baichuan Inc
introduction.

Baichuan Text Embedding ranks #1 in C-MTEB leaderboard:
https://huggingface.co/spaces/mteb/leaderboard
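
A short usage sketch (the import path and API-key handling are assumptions for
illustration):

```python
# Sketch; import path and API-key handling are assumptions for illustration.
from langchain_community.embeddings import BaichuanTextEmbeddings

embeddings = BaichuanTextEmbeddings(baichuan_api_key="YOUR_BAICHUAN_API_KEY")
vector = embeddings.embed_query("今天天气不错")
```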

Co-authored-by: BaiChuanHelper <wintergyc@WinterGYCs-MacBook-Pro.local>
2024-01-26 12:57:26 -08:00
Ghani
e30c6662df
Langchain-community : EdenAI chat integration. (#16377)
- **Description:** This PR adds [EdenAI](https://edenai.co/) for the
chat model (already available in LLM & Embeddings). It supports all
[ChatModel] functionality: generate, async generate, stream, astream and
batch. A detailed notebook was added.

  - **Dependencies**: No dependencies are added as we call a rest API.
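
A short usage sketch (provider/model values are illustrative and
`EDENAI_API_KEY` is assumed to be set):

```python
# Sketch; provider and model values are illustrative, EDENAI_API_KEY assumed set.
from langchain_community.chat_models.edenai import ChatEdenAI

chat = ChatEdenAI(provider="openai", model="gpt-3.5-turbo", temperature=0.2)
print(chat.invoke("Say hello in French.").content)
```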

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-01-26 09:56:43 -05:00
Bagatur
61e876aad8
openai[patch]: Explicitly support embedding dimensions (#16596) 2024-01-25 15:16:04 -08:00
Brian Burgin
148347e858
community[minor]: Add LiteLLM Router Integration (#15588)
community:

  - **Description:**
- Add new ChatLiteLLMRouter class that allows a client to use a LiteLLM
Router as a LangChain chat model.
- Note: The existing ChatLiteLLM integration did not cover the LiteLLM
Router class.
    - Add tests and Jupyter notebook.
  - **Issue:** None
  - **Dependencies:** Relies on existing ChatLiteLLM integration
  - **Twitter handle:** @bburgin_0
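
A condensed sketch of pointing the new chat model at a LiteLLM Router; the
model-list entries and credentials are illustrative placeholders:

```python
# Sketch; model-list entries and credentials are illustrative placeholders.
from litellm import Router
from langchain_community.chat_models import ChatLiteLLMRouter

model_list = [
    {
        "model_name": "gpt-4",
        "litellm_params": {
            "model": "azure/gpt-4",
            "api_key": "<your-azure-key>",
            "api_base": "<your-azure-endpoint>",
        },
    },
]
router = Router(model_list=model_list)

chat = ChatLiteLLMRouter(router=router)
chat.invoke("Hello!")
```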

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-01-25 11:03:05 -08:00
Bob Lin
35e60728b7
docs: Fix broken urls (#16559) 2024-01-25 09:20:05 -08:00
Bob Lin
6023953ea7
docs: Fix github link (#16560) 2024-01-25 09:19:09 -08:00
Erick Friis
adc008407e
exa: init pkg (#16553) 2024-01-24 20:57:17 -07:00
Rave Harpaz
c4e9c9ca29
community[minor]: Add OCI Generative AI integration (#16548)
- **Description:** Adding Oracle Cloud Infrastructure Generative AI
integration. Oracle Cloud Infrastructure (OCI) Generative AI is a fully
managed service that provides a set of state-of-the-art, customizable
large language models (LLMs) that cover a wide range of use cases, and
which is available through a single API. Using the OCI Generative AI
service you can access ready-to-use pretrained models, or create and
host your own fine-tuned custom models based on your own data on
dedicated AI clusters.
https://docs.oracle.com/en-us/iaas/Content/generative-ai/home.htm
  - **Issue:** None
  - **Dependencies:** OCI Python SDK

Linting and testing passed locally (`make format`, `make lint`, `make test`).

We provide unit tests. However, we cannot provide integration tests due
to Oracle policies that prohibit public sharing of api keys.

---------

Co-authored-by: Arthur Cheng <arthur.cheng@oracle.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-01-24 18:23:50 -08:00
Leonid Ganeline
f6a05e964b
docs: Hugging Face update (#16490)
- added missed integrations to the platform page
- updated integration examples: added links and fixed formats
2024-01-24 16:59:00 -08:00
Harel Gal
a91181fe6d
community[minor]: add support for Guardrails for Amazon Bedrock (#15099)
Added support for optionally supplying 'Guardrails for Amazon Bedrock'
on both types of model invocations (batch/regular and streaming) and for
all models supported by the Amazon Bedrock service.

@baskaryan  @hwchase17

```python 
from typing import Any

import boto3
from langchain_community.llms import Bedrock
from langchain_core.callbacks import AsyncCallbackHandler

# Bedrock runtime client used by the LLM wrapper.
bedrock = boto3.client(service_name="bedrock-runtime")


class BedrockAsyncCallbackHandler(AsyncCallbackHandler):
    """Async callback handler that can be used to handle callbacks from langchain."""

    async def on_llm_error(
        self,
        error: BaseException,
        **kwargs: Any,
    ) -> Any:
        reason = kwargs.get("reason")
        if reason == "GUARDRAIL_INTERVENED":
            # kwargs contains additional trace information sent by the
            # 'Guardrails for Bedrock' service.
            print(f"""Guardrails: {kwargs}""")


llm = Bedrock(model_id="<model_id>", client=bedrock,
              model_kwargs={},
              guardrails={"id": "<guardrail_id>",
                          "version": "<guardrail_version>",
                          "trace": True},
              callbacks=[BedrockAsyncCallbackHandler()])

# streaming
llm = Bedrock(model_id="<model_id>", client=bedrock,
              model_kwargs={},
              streaming=True,
              guardrails={"id": "<guardrail_id>",
                          "version": "<guardrail_version>"})
```

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-01-24 14:44:19 -08:00
Martin Kolb
04651f0248
community[minor]: VectorStore integration for SAP HANA Cloud Vector Engine (#16514)
- **Description:**
This PR adds a VectorStore integration for SAP HANA Cloud Vector Engine,
which is an upcoming feature in the SAP HANA Cloud database
(https://blogs.sap.com/2023/11/02/sap-hana-clouds-vector-engine-announcement/).

  - **Issue:** N/A
- **Dependencies:** [SAP HANA Python
Client](https://pypi.org/project/hdbcli/)
  - **Twitter handle:** @sapopensource

Implementation of the integration:
`libs/community/langchain_community/vectorstores/hanavector.py`

Unit tests:
`libs/community/tests/unit_tests/vectorstores/test_hanavector.py`

Integration tests:
`libs/community/tests/integration_tests/vectorstores/test_hanavector.py`

Example notebook:
`docs/docs/integrations/vectorstores/hanavector.ipynb`

Access credentials for execution of the integration tests can be
provided to the maintainers.
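
A minimal usage sketch, assuming the vector store is exposed as `HanaDB` in `langchain_community.vectorstores.hanavector` and accepts an `hdbcli` connection, an embeddings object and a table name (connection details are placeholders):

```python
# Minimal sketch; connection details are placeholders and the embedding
# model is an arbitrary choice.
from hdbcli import dbapi
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores.hanavector import HanaDB

connection = dbapi.connect(address="<hostname>", port=443, user="<user>", password="<password>")
db = HanaDB(connection=connection, embedding=HuggingFaceEmbeddings(), table_name="LANGCHAIN_DEMO")
db.add_texts(["SAP HANA Cloud ships a vector engine."])
print(db.similarity_search("vector engine", k=1))
```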

---------

Co-authored-by: sascha <sascha.stoll@sap.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-01-24 14:05:07 -08:00
Bob Lin
54dd8e52a8
docs: Updated comments about n_gpu_layers in the Metal section (#16501)
Ref: https://github.com/langchain-ai/langchain/issues/16502
2024-01-24 13:38:48 -08:00
Anastasiia Manokhina
ce595f0203
docs:Updated integration docs structure for chat/google_vertex_ai_palm (#16201)
Description: 

- checked that the doc chat/google_vertex_ai_palm is using new
functions: invoke, stream etc.
- added Gemini example
- fixed wrong output in Sanskrit example

Issue: https://github.com/langchain-ai/langchain/issues/15664
Dependencies: None
Twitter handle: None
2024-01-24 10:21:32 -08:00
Lance Martin
0b740ebd49
Update SQL agent toolkit docs (#16409) 2024-01-24 09:03:17 -08:00
BeatrixCohere
2b2285dac0
docs: Update cohere rerank and comparison docs (#16198)
- **Description:** Update the cohere rerank docs to use cohere
embeddings
  - **Issue:** n/a
  - **Dependencies:** n/a
  - **Twitter handle:** n/a
2024-01-23 19:39:42 -08:00
Raunak
476bf8b763
community[patch]: Load list of files using UnstructuredFileLoader (#16216)
- **Description:** Updated the `_get_elements()` function of the
`UnstructuredFileLoader` class to check whether `self.file_path` is a
single file or a list of files. If it is a list of files, it iterates
over the file paths, calls the partition function for each one, and
appends the results to the elements list. If `self.file_path` is not a
list, it calls the partition function as before (see the sketch below).
  
  - **Issue:** Fixed #15607,
  - **Dependencies:** NA
  - **Twitter handle:** NA
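
A short sketch of the new behavior described above (file names are placeholders; parsing the files still requires the `unstructured` package):

```python
# Passing a list of file paths instead of a single path; file names are
# placeholders and parsing still requires the `unstructured` package.
from langchain_community.document_loaders import UnstructuredFileLoader

loader = UnstructuredFileLoader(["./report_1.txt", "./report_2.txt"])
docs = loader.load()
print(len(docs))
```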

Co-authored-by: H161961 <Raunak.Raunak@Honeywell.com>
2024-01-23 19:37:37 -08:00
Xudong Sun
019b6ebe8d
community[minor]: Add iFlyTek Spark LLM chat model support (#13389)
- **Description:** This PR enables LangChain to access the iFlyTek's
Spark LLM via the chat_models wrapper.
  - **Dependencies:** websocket-client ^1.6.1
  - **Tag maintainer:** @baskaryan 

### SparkLLM chat model usage

Get SparkLLM's app_id, api_key and api_secret from [iFlyTek SparkLLM API
Console](https://console.xfyun.cn/services/bm3) (for more info, see
[iFlyTek SparkLLM Intro](https://xinghuo.xfyun.cn/sparkapi) ), then set
environment variables `IFLYTEK_SPARK_APP_ID`, `IFLYTEK_SPARK_API_KEY`
and `IFLYTEK_SPARK_API_SECRET` or pass parameters when using it like the
demo below:

```python3
from langchain.chat_models.sparkllm import ChatSparkLLM

client = ChatSparkLLM(
    spark_app_id="<app_id>",
    spark_api_key="<api_key>",
    spark_api_secret="<api_secret>"
)
```
2024-01-23 19:23:46 -08:00
bu2kx
ff3163297b
community[minor]: Add KDBAI vector store (#12797)
Addition of KDBAI vector store (https://kdb.ai).

Dependencies: `kdbai_client` v0.1.2 Python package.

Sample notebook: `docs/docs/integrations/vectorstores/kdbai.ipynb`

Tag maintainer: @bu2kx
Twitter handle: @kxsystems
2024-01-23 18:37:01 -08:00
JongRok BAEK
4ec3fe4680
docs: Updated integration docs structure for chat/anthropic (#16268)
Description: 
- Added output and environment variables
- Updated the documentation for chat/anthropic, changing references from
`langchain.schema` to `langchain_core.prompts`.

Issue: https://github.com/langchain-ai/langchain/issues/15664
Dependencies: None
Twitter handle: None

Since this is my first open-source PR, please feel free to point out any
mistakes, and I'll be eager to make corrections.
2024-01-23 18:36:28 -08:00
Shivani Modi
4e160540ff
community[minor]: Adding Konko Completion endpoint (#15570)
This PR introduces update to Konko Integration with LangChain.

1. **New Endpoint Addition**: Integration of a new endpoint to utilize
completion models hosted on Konko.

2. **Chat Model Updates for Backward Compatibility**: We have updated
the chat models to ensure backward compatibility with previous OpenAI
versions.

3. **Updated Documentation**: Comprehensive documentation has been
updated to reflect these new changes, providing clear guidance on
utilizing the new features and ensuring seamless integration.

Thank you to the LangChain team for their exceptional work and for
considering this PR. Please let me know if any additional information is
needed.

---------

Co-authored-by: Shivani Modi <shivanimodi@Shivanis-MacBook-Pro.local>
Co-authored-by: Shivani Modi <shivanimodi@Shivanis-MBP.lan>
2024-01-23 18:22:32 -08:00
Facundo Santiago
92e6a641fd
feat: adding paygo api support for Azure ML / Azure AI Studio (#14560)
- **Description:** Introducing support for LLMs and Chat models running
in Azure AI studio and Azure ML using the new deployment mode
pay-as-you-go (model as a service).
- **Issue:** NA
- **Dependencies:** None.
- **Tag maintainer:** @prakharg-msft @gdyre 
- **Twitter handle:** @santiagofacundo

Examples added:
*
[docs/docs/integrations/llms/azure_ml.ipynb](https://github.com/santiagxf/langchain/blob/santiagxf/azureml-endpoints-paygo-community/docs/docs/integrations/chat/azureml_endpoint.ipynb)
*
[docs/docs/integrations/chat/azureml_chat_endpoint.ipynb](https://github.com/santiagxf/langchain/blob/santiagxf/azureml-endpoints-paygo-community/docs/docs/integrations/chat/azureml_chat_endpoint.ipynb)

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-01-23 17:08:51 -08:00
baichuan-assistant
20fcd49348
community: Fix Baichuan Chat. (#15207)
- **Description:** Baichuan Chat (with both Baichuan-Turbo and
Baichuan-Turbo-192K models) has updated its APIs, and there are breaking
changes. For example, BAICHUAN_SECRET_KEY is removed in the latest API
but is still required in LangChain. Baichuan's LangChain integration
needs to be updated to the latest version.
  - **Issue:** #15206
  - **Dependencies:** None,
  - **Twitter handle:** None

@hwchase17.

Co-authored-by: BaiChuanHelper <wintergyc@WinterGYCs-MacBook-Pro.local>
2024-01-23 17:01:57 -08:00
gcheron
cfc225ecb3
community: SQLStrStore/SQLDocStore provide an easy SQL alternative to InMemoryStore to persist data remotely in a SQL storage (#15909)
**Description:**

- Implement `SQLStrStore` and `SQLDocStore` classes that inherits from
`BaseStore` to allow to persist data remotely on a SQL server.
- SQL is widely used and sometimes we do not want to install a caching
solution like Redis.
- Multiple issues/comments complain that there is no easy remote and
persistent solution that is not in memory (users want to replace
InMemoryStore), e.g.,
https://github.com/langchain-ai/langchain/issues/14267,
https://github.com/langchain-ai/langchain/issues/15633,
https://github.com/langchain-ai/langchain/issues/14643,
https://stackoverflow.com/questions/77385587/persist-parentdocumentretriever-of-langchain
- This is particularly painful when wanting to use
`ParentDocumentRetriever`
- This implementation is particularly useful when:
     * it's expensive to construct an InMemoryDocstore/dict
     * you want to retrieve documents from remote sources
     * you just want to reuse existing objects
- This implementation integrates well with PGVector, indeed, when using
PGVector, you already have a SQL instance running. `SQLDocStore` is a
convenient way of using this instance to store documents associated to
vectors. An integration example with ParentDocumentRetriever and
PGVector is provided in docs/docs/integrations/stores/sql.ipynb or
[here](https://github.com/gcheron/langchain/blob/sql-store/docs/docs/integrations/stores/sql.ipynb).
- It persists `str` and `Document` objects but can be easily extended.

 **Issue:**

Provide an easy SQL alternative to `InMemoryStore`.
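
A hypothetical sketch of how the new store could be used; the import path and constructor parameters (`connection_string`, `collection_name`) are assumptions, not confirmed by this PR text:

```python
# Hypothetical sketch: the import path and constructor parameters
# (connection_string, collection_name) are assumptions.
from langchain_community.storage import SQLDocStore
from langchain_core.documents import Document

docstore = SQLDocStore(
    connection_string="postgresql+psycopg2://user:pass@localhost:5432/db",
    collection_name="parent_documents",
)
docstore.mset([("doc-1", Document(page_content="hello"))])
print(docstore.mget(["doc-1"]))
```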

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-01-23 16:50:48 -08:00
Florian MOREL
4b7969efc5
community[minor]: New documents loader for visio files (with extension .vsdx) (#16171)
**Description** : New documents loader for visio files (with extension
.vsdx)

A [visio file](https://fr.wikipedia.org/wiki/Microsoft_Visio) (with
extension .vsdx) is associated with Microsoft Visio, a diagram creation
software. It stores information about the structure, layout, and
graphical elements of a diagram. This format facilitates the creation
and sharing of visualizations in areas such as business, engineering,
and computer science.

A Visio file can contain multiple pages. Some of them may serve as the
background for others, and this can occur across multiple layers. This
loader extracts the textual content from each page and its associated
pages, enabling the extraction of all visible text from each page,
similar to what an OCR algorithm would do.

**Dependencies** : xmltodict package
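
A minimal sketch, assuming the loader is exposed as `VsdxLoader` and takes a `file_path` to a `.vsdx` file:

```python
# Minimal sketch; the file name is a placeholder.
from langchain_community.document_loaders import VsdxLoader

loader = VsdxLoader(file_path="./diagram.vsdx")
docs = loader.load()  # one Document per page, with its visible text extracted
print(docs[0].page_content[:200])
```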
2024-01-22 22:07:03 -08:00
KhoPhi
fb41b68ea1
docs: Update with LCEL examples to Ollama & ChatOllama Integration notebook (#16194)
- **Description:** Updated the Chat/Ollama docs notebook with LCEL chain
examples

- **Issue:**  #15664 I'm a new contributor 😊

- **Dependencies:** No dependencies

- **Twitter handle:** 

Comments:

- How do I truncate the output of the stream in the notebook if and or
when it goes on and on and on for even the basic of prompts?

Edit:

Looking forward to feedback @baskaryan

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-01-22 22:05:59 -08:00
Michael Gorham
3b0226b2c6
docs: Update redis_chat_message_history.ipynb (#16344)
## Problem
Spent several hours trying to figure out how to pass
`RedisChatMessageHistory` as a `GetSessionHistoryCallable` with a
different REDIS hostname. This example kept connecting to
`redis://localhost:6379`, but I wanted to connect to a server not hosted
locally.

## Cause
Assumption the user knows how to implement `BaseChatMessageHistory` and
`GetSessionHistoryCallable`

## Solution
Update the documentation to show how to explicitly set the Redis hostname
using a lambda function, much like the MongoDB and SQLite examples (see
the sketch below).
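
```python
# Sketch of the documented fix: pass the Redis URL explicitly in the
# history factory. The URL is a placeholder and `chain` is assumed to be
# defined earlier, as in the notebook.
from langchain_community.chat_message_histories import RedisChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory

REDIS_URL = "redis://my-redis-host:6379/0"

chain_with_history = RunnableWithMessageHistory(
    chain,
    lambda session_id: RedisChatMessageHistory(session_id, url=REDIS_URL),
    input_messages_key="question",
    history_messages_key="history",
)
```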
2024-01-22 21:59:59 -08:00
Ian
c98994c3c9
docs: Improve notebook to show how to use tidb to store history messages (#16420)
After merging [PR
#16304](https://github.com/langchain-ai/langchain/pull/16304), I
realized that our notebook example for integrating TiDB with LangChain
was too basic. To make it more useful and user-friendly, I plan to
create a detailed example. This will show how to use TiDB for saving
history messages in LangChain, offering a clearer, more practical guide
for our users
2024-01-22 21:58:37 -08:00
Boris Feld
404abf139a
community: Add CometLLM tracing context var (#15765)
I also added LANGCHAIN_COMET_TRACING to enable the CometLLM tracing
integration, similar to other tracing integrations. This makes it easier
for end users to enable than importing the callback and passing it
manually (see the sketch below).
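
```python
import os

# Enable CometLLM tracing for subsequent chain/LLM runs instead of
# passing the callback manually.
os.environ["LANGCHAIN_COMET_TRACING"] = "true"
```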

(This is the same content as
https://github.com/langchain-ai/langchain/pull/14650 but rebased and
squashed as something seems to confuse Github Action).
2024-01-22 15:17:16 -08:00
Jennifer Melot
d6275e47f2
docs: Updated integration docs structure for tools/arxiv (#16091) (#16250)
- **Description:** Updated docs for tools/arxiv to use `AgentExecutor`
and `invoke`
  - **Issue:** #15664
  - **Dependencies:** None
  - **Twitter handle:** None
2024-01-22 14:34:22 -08:00
ChengZi
a950fa0487
docs: add milvus multitenancy doc (#16177)
- **Description:** add milvus multitenancy doc, it is an example for
this [pr](https://github.com/langchain-ai/langchain/pull/15740) .
  - **Issue:** No,
  - **Dependencies:** No,
  - **Twitter handle:** No

Signed-off-by: ChengZi <chen.zhang@zilliz.com>
2024-01-22 14:25:26 -08:00
parkererickson-tg
b26a22f307
community[minor]: add TigerGraph support (#16280)
**Description:** Add support for querying TigerGraph databases through
the InquiryAI service.
**Issue**: N/A
**Dependencies:** N/A
**Twitter handle:** @TigerGraphDB
2024-01-22 14:07:44 -08:00
Christophe Bornet
8da34118bc
docs: Add documentation for Cassandra Document Loader (#16282) 2024-01-22 14:06:21 -08:00
Jonathan Algar
774e543e1f
docs: fix formatting issue in rockset.ipynb (#16328)
**Description:** randomly discovered while working on another PR
https://github.com/quarto-dev/quarto-cli/discussions/8131#discussioncomment-8027706

@anubhav94N ICYI
2024-01-22 13:59:45 -08:00
Ian
b9f5104e6c
community[minor]: Store Message History to TiDB Database (#16304)
This pull request integrates the TiDB database into LangChain for
storing message history, marking one of several steps towards a
comprehensive integration of TiDB with LangChain.


A simple usage
```python
from datetime import datetime
from langchain_community.chat_message_histories import TiDBChatMessageHistory

history = TiDBChatMessageHistory(
    connection_string="mysql+pymysql://<host>:<PASSWORD>@<host>:4000/<db>?ssl_ca=/etc/ssl/cert.pem&ssl_verify_cert=true&ssl_verify_identity=true",
    session_id="code_gen",
    earliest_time=datetime.utcnow(),  # Optional: only load messages created after this time point.
)

history.add_user_message("hi! How's feature going?")
history.add_ai_message("It's almost done")
```
2024-01-22 13:56:56 -08:00
Omar-aly
873de14cd8
docs: update vectorstores/llm_rails integration doc (#16199)
Description:
- Updated the docs for the vectorstores integration module
llm_rails.ipynb

Issue:
- [Connected to Issue
#15664](https://github.com/langchain-ai/langchain/issues/15664)
 
Dependencies:
- N/A

Co-authored-by: omaraly23 <112936089+omaraly22@users.noreply.github.com>
2024-01-22 11:40:08 -08:00
Lance Martin
369e90d427
docs: Minor update to Robocorp toolkit docs (#16399) 2024-01-22 11:33:13 -08:00
Hadi
a1c0cf21c9
docs: Update import library for StreamlitCallbackHandler (#16401)
- **Description:** Some code sources have been moved from `langchain` to
`langchain_community`, so the documentation is not yet up to date.
This is specifically true for `StreamlitCallbackHandler`, which returns a
`warning` message if not loaded from `langchain_community`.
- **Issue:** I don't see a # issue that could address this problem but
perhaps #10744,
- **Dependencies:** Since it's a documentation change no dependencies
are required
2024-01-22 11:33:00 -08:00
JaguarDB
7ecd2f22ac
community[patch]: update documentation on jaguar vector store (#16346)
- **Description:** update documentation on the Jaguar vector store:
instructions for setting up the Jaguar server and usage of text_tag.
  - **Issue:** 
  - **Dependencies:** 
  - **Twitter handle:**

---------

Co-authored-by: JY <jyjy@jaguardb>
2024-01-22 11:28:38 -08:00
Iskren Ivov Chernev
fc196cab12
community[minor]: DeepInfra support for chat models (#16380)
Add deepinfra chat models support.

This is https://github.com/langchain-ai/langchain/pull/14234 re-opened
from my branch (so maintainers can edit).
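
A minimal sketch; the class name is from the PR, while the `model` parameter name, the model id and the `DEEPINFRA_API_TOKEN` environment variable are assumptions:

```python
# Minimal sketch; the class name is from the PR, while the `model`
# parameter name, the model id and the DEEPINFRA_API_TOKEN environment
# variable are assumptions.
from langchain_community.chat_models import ChatDeepInfra
from langchain_core.messages import HumanMessage

chat = ChatDeepInfra(model="meta-llama/Llama-2-70b-chat-hf")
print(chat.invoke([HumanMessage(content="Say hi in one word.")]))
```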
2024-01-22 11:22:17 -08:00
Bagatur
1dc6c1ce06
core[patch], community[patch], langchain[patch], docs: Update SQL chains/agents/docs (#16168)
Revamp SQL use cases docs. In the process update SQL chains and agents.
2024-01-22 08:19:08 -08:00
Christophe Bornet
f9be877ed7
Docs: Add self-querying retriever and store to AstraDB provider doc (#16362)
Add self-querying retriever and store to AstraDB provider doc
2024-01-22 10:24:28 -05:00
Mateusz Szewczyk
076dbb1a8f
docs: IBM watsonx.ai Use invoke instead of __call__ (#16371)
- **Description:** Updating documentation of IBM
[watsonx.ai](https://www.ibm.com/products/watsonx-ai) LLM with using
`invoke` instead of `__call__`
- **Dependencies:**
[ibm-watsonx-ai](https://pypi.org/project/ibm-watsonx-ai/)


The following warning is shown when I use the `run` and `__call__`
methods:
```
LangChainDeprecationWarning: The function `__call__` was deprecated in LangChain 0.1.7 and will be removed in 0.2.0. Use invoke instead.
  warn_deprecated(
```

We need to update the documentation to use the `invoke` method.
2024-01-22 10:22:03 -05:00
Bob Lin
c6bd7778b0
Use invoke instead of __call__ (#16369)
The following warning is displayed when I use
`llm(PROMPT)`:

```python
/Users/169/llama.cpp/venv/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:117: LangChainDeprecationWarning: The function `__call__` was deprecated in LangChain 0.1.7 and will be removed in 0.2.0. Use invoke instead.
  warn_deprecated(
```

So I changed to standard usage.
2024-01-22 10:18:43 -05:00
Virat Singh
c2a614eddc
community: Add PolygonLastQuote Tool and Toolkit (#15990)
**Description:** 
In this PR, I am adding a `PolygonLastQuote` Tool, which can be used to
get the latest price quote for a given ticker / stock.

Additionally, I've added a Polygon Toolkit, which we can use to
encapsulate future tools that we build for Polygon.

**Twitter handle:** [@virattt](https://twitter.com/virattt)
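
A rough sketch of how the new tool could be used; the import paths and the `api_wrapper` parameter are assumptions based on the description above, and a `POLYGON_API_KEY` environment variable is assumed to be set:

```python
# Rough sketch; import paths and the api_wrapper parameter are assumptions,
# and a POLYGON_API_KEY environment variable is assumed to be set.
from langchain_community.tools.polygon.tool import PolygonLastQuote
from langchain_community.utilities.polygon import PolygonAPIWrapper

tool = PolygonLastQuote(api_wrapper=PolygonAPIWrapper())
print(tool.run("AAPL"))  # latest price quote for the ticker
```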

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-01-21 15:08:55 -08:00
Lance Martin
881d1c3ec5
Update MultiON toolkit docs (#16286) 2024-01-19 09:37:20 -08:00
Bagatur
6f7a414955
docs: fix links (#16284) 2024-01-19 08:51:12 -08:00
Lance Martin
f63906a9c2
Test and update MultiON agent toolkit docs (#16235) 2024-01-18 20:24:35 -08:00
Ashley Xu
0f99646ca6
docs: add the enrollment form forBigQueryVectorSearch (#16240)
This PR adds the enrollment form for BigQueryVectorSearch.
2024-01-18 18:34:06 -08:00
Erick Friis
aa35b43bcd
docs, google-vertex[patch]: function docs (#16231) 2024-01-18 13:15:09 -08:00
Erick Friis
f2b2d59e82
docs: transport and client options docs (#16226)
2024-01-18 12:23:04 -08:00
Rajesh Thallam
6bc6d64a12
langchain_google_vertexai[patch]: Add support for SystemMessage for Gemini chat model (#15933)
- **Description:** In Google Vertex AI, Gemini chat models currently
don't support SystemMessage. This PR adds support for it
only if a user provides an additional convert_system_message_to_human flag
during model initialization (in this case, the SystemMessage is
prepended to the first HumanMessage; see the sketch below). **NOTE:** The
implementation is similar to #14824


- **Twitter handle:** rajesh_thallam
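
```python
# Sketch of the new flag; the model name is a placeholder.
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_google_vertexai import ChatVertexAI

chat = ChatVertexAI(model_name="gemini-pro", convert_system_message_to_human=True)
print(chat.invoke([
    SystemMessage(content="You answer in French."),
    HumanMessage(content="Hello!"),
]))
```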

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-01-18 10:22:07 -08:00
jzaldi
ed118950fe
docs: Updated integration docs structure for llm/google_vertex_ai_palm (#16091)
- **Description**: Updated doc for llm/google_vertex_ai_palm with new
functions: `invoke`, `stream`... Changed structure of the document to
match the required one.
- **Issue**: #15664 
- **Dependencies**: None
- **Twitter handle**: None

---------

Co-authored-by: Jorge Zaldívar <jzaldivar@google.com>
2024-01-18 09:45:27 -08:00
Eugene Zapolsky
6b9e3ed9e9
google-vertexai[minor]: added safety_settings property to gemini wrapper (#15344)
**Description:** The Gemini model has quite annoying default
safety_settings. In addition, the current VertexAI class doesn't provide a
property to override such settings.
So, this PR aims to:
- add a safety_settings property to VertexAI (see the sketch below)
- fix an issue with incorrect LLM output parsing when the LLM responds with
an appropriate 'blocked' response
- fix an issue with incorrect parsing of LLM output when the Gemini API
blocks the prompt itself as inappropriate
- add safety_settings related tests

I'm not familiar enough with the LangChain code base and guidelines, so any
comments and/or suggestions are very welcome.
 
**Issue:** it will likely fix #14841
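
A rough sketch of the new property, assuming `safety_settings` accepts the Vertex AI SDK's HarmCategory-to-HarmBlockThreshold mapping (exact enum values may differ):

```python
# Rough sketch; assumes safety_settings accepts the Vertex AI SDK's
# HarmCategory -> HarmBlockThreshold mapping (exact enum values may differ).
from langchain_google_vertexai import VertexAI
from vertexai.preview.generative_models import HarmBlockThreshold, HarmCategory

safety_settings = {
    HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
}
llm = VertexAI(model_name="gemini-pro", safety_settings=safety_settings)
```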

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-01-18 08:54:30 -08:00
David DeCaprio
ec9642d667
docs: Updated MongoDB Chat history example notebook to use LCEL format. (#15750)
- **Description:** Updated the MongoDB example integration notebook to
latest standards
- **Issue:**
[15664](https://github.com/langchain-ai/langchain/issues/15664)
  - **Dependencies:** None
  - **Twitter handle:** @davedecaprio

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-01-17 12:07:17 -08:00
Joshua Carroll
bc0cb1148a
docs: Fix StreamlitChatMessageHistory docs to latest API (#16072)
- **Description:** Update [this
page](https://python.langchain.com/docs/integrations/memory/streamlit_chat_message_history)
to use the latest API
  - **Issue:** https://github.com/langchain-ai/langchain/issues/13995
  - **Dependencies:** None
  - **Twitter handle:** @OhSynap
2024-01-17 09:42:10 -08:00
David DeCaprio
9c2f1f07a0
docs: Updated SQLite example to use LCEL and SQLChatMessageHistory (#16094)
- **Description:** Updated the SQLite example integration notebook to
the latest standards (see the sketch below)
- **Issue:**
[15664](https://github.com/langchain-ai/langchain/issues/15664)
  - **Dependencies:** None
  - **Twitter handle:** @davedecaprio
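
```python
# Minimal sketch of the pattern used in such a notebook; the database
# path and session id are placeholders.
from langchain_community.chat_message_histories import SQLChatMessageHistory

history = SQLChatMessageHistory(session_id="demo", connection_string="sqlite:///chat_history.db")
history.add_user_message("Hi!")
history.add_ai_message("Hello, how can I help?")
print(history.messages)
```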
2024-01-17 09:39:44 -08:00
Abhinav
da96c511d1
docs: Replace azure_cosmos_db_vector_search with azure_cosmos_db in Cosmos DB Documentation (#16122)
**Description**: This PR fixes an error in the documentation for Azure
Cosmos DB Integration.
**Issue**: The correct way to import `AzureCosmosDBVectorSearch` is
```python
from langchain_community.vectorstores.azure_cosmos_db import (
    AzureCosmosDBVectorSearch,
)
```
While the
[documentation](https://python.langchain.com/docs/integrations/vectorstores/azure_cosmos_db)
states it to be
```python
from langchain_community.vectorstores.azure_cosmos_db_vector_search import (
    AzureCosmosDBVectorSearch,
    CosmosDBSimilarityType,
)
```
As you can see in
[azure_cosmos_db.py](c323742f4f/libs/langchain/langchain/vectorstores/azure_cosmos_db.py (L1C45-L2))
**Dependencies:**: None
**Twitter handle**: None
2024-01-17 09:11:16 -08:00
Ikko Eltociear Ashimine
a35e5f19a8
docs: Update gradient.ipynb (#16149)
Enviroment -> Environment
2024-01-17 08:48:24 -08:00
David
c323742f4f
mistralai[minor]: Add embeddings (#15282)
- **Description:** Adds MistralAIEmbeddings class for embeddings, using
the new official API.
- **Dependencies:** mistralai
- **Tag maintainer**: @efriis, @hwchase17
- **Twitter handle:** @LMS_David_RS

Create `integrations/text_embedding/mistralai.ipynb`: an example
notebook for MistralAIEmbeddings class
Modify `embeddings/__init__.py`: Import the class
Create `embeddings/mistralai.py`: The embedding class
Create `integration_tests/embeddings/test_mistralai.py`: The test file.
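
A minimal usage sketch (the API key is a placeholder):

```python
# Minimal sketch; the API key is a placeholder.
from langchain_mistralai import MistralAIEmbeddings

embeddings = MistralAIEmbeddings(mistral_api_key="<api-key>")
vector = embeddings.embed_query("Hello, world!")
print(len(vector))
```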

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-01-16 17:48:37 -08:00
Leonid Ganeline
f974eb5b8b
docs: updated Anyscale page (#16107)
- added description
- fixed broken links
- added setting instructions
- added the Chat model reference
2024-01-16 17:13:51 -08:00
Christophe Bornet
6b6269441c
docs: Add page for AstraDB self retriever (#16077)
Preview:
https://langchain-git-fork-cbornet-astra-self-retriever-docs-langchain.vercel.app/docs/integrations/retrievers/self_query/astradb
2024-01-16 09:50:30 -08:00
Juan Bustos
5f057f24ac
docs: Update elasticsearch.ipynb (#16090)
Fixed a typo, the parameter used for the Elasticsearch API key was
called api_key, but the parameter is called es_api_key.
2024-01-16 09:49:42 -08:00
高远
061e63eef2
community[minor]: add vikingdb vecstore (#15155)
---------

Co-authored-by: gaoyuan <gaoyuan.20001218@bytedance.com>
2024-01-15 12:34:01 -08:00
Zhichao HAN
5cf06db3b3
community[minor]: add JsonRequestsWrapper tool (#15374)
**Description:** This new feature enhances the flexibility of pipeline
integration, particularly when working with RESTful APIs.
``JsonRequestsWrapper`` returns decoded JSON output instead of the
plain-text output that was previously the only option (see the sketch below).
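
```python
# Sketch; assumes JsonRequestsWrapper mirrors TextRequestsWrapper but
# returns parsed JSON. The URL is only an example.
from langchain_community.utilities.requests import JsonRequestsWrapper

requests_wrapper = JsonRequestsWrapper()
data = requests_wrapper.get("https://api.github.com/repos/langchain-ai/langchain")
print(data["full_name"])  # already a dict, no json.loads() needed
```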

---------

Co-authored-by: Zhichao HAN <hanzhichao2000@hotmail.com>
2024-01-15 12:27:19 -08:00
Averi Kitsch
ee378a0f40
docs: add page for Firestore Chat Message History integration (#15554)
- **Description:** Adds documentation for the
`FirestoreChatMessageHistory` integration and lists integration in
Google's documentation
  - **Issue:** NA
  - **Dependencies:** No

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-01-15 11:42:33 -08:00
盐粒 Yanli
ddf4e7c633
community[minor]: Update pgvecto_rs to use its high level sdk (#15574)
- **Description:** Update pgvecto_rs to use its high-level SDK
  - **Issue:** fix #15173
2024-01-15 11:41:59 -08:00
axiangcoding
daa9ccae52
community[patch]: deprecate ErnieBotChat and ErnieEmbeddings classes (#15862)
- **Description:** add deprecation warnings for ErnieBotChat and
ErnieEmbeddings.
- These two classes **lack maintenance** and do not use the SDK provided
by Qianfan, which makes it hard to implement key features like
streaming.
- The alternatives `langchain_community.chat_models.QianfanChatEndpoint`
and `langchain_community.embeddings.QianfanEmbeddingsEndpoint` can
completely replace these two classes; only configuration items need to
change (see the sketch below).
  - **Issue:** None,
  - **Dependencies:** None,
  - **Twitter handle:** None
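
```python
# Sketch of the suggested replacements; assumes QIANFAN_AK / QIANFAN_SK
# environment variables are set for authentication.
from langchain_community.chat_models import QianfanChatEndpoint
from langchain_community.embeddings import QianfanEmbeddingsEndpoint

chat = QianfanChatEndpoint(model="ERNIE-Bot-turbo")
embeddings = QianfanEmbeddingsEndpoint()
print(chat.invoke("Hello!"))
```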

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-01-15 11:14:44 -08:00
Bigtable123
7306032dcf
docs: update baidu_qianfan_endpoint.ipynb doc (#15940)
- **Description:** Updated the docs for the chat integration module
baidu_qianfan_endpoint.ipynb
  - **Issue:**  #15664 
  - **Dependencies:**N/A
2024-01-15 11:13:21 -08:00
Bhadresh Savani
6137c7608d
docs: Integration Documentation updated run to invoke for llms/ai21.ipynb (#15889)
- **Description:** Updated Integration Documentation for
[llms/ai21.ipynb](https://github.com/langchain-ai/langchain/blob/master/docs/docs/integrations/llms/ai21.ipynb)
  - **Issue:** #15664,
  - **Dependencies:** NA,
  - **Twitter handle:** @BhadreshSavani
2024-01-15 10:53:22 -08:00
Massimiliano Pronesti
e80aab2275
docs(community): update Amadeus toolkit to langchain v0.1 (#15976)
- **Description:** docs update following the changes introduced in
#15879

2024-01-15 10:50:47 -08:00
Ashley Xu
ce7723c1e5
community[minor]: add additional support for BigQueryVectorSearch (#15904)
BigQuery vector search lets you use GoogleSQL to do semantic search,
using vector indexes for fast but approximate results, or using brute
force for exact results.

This PR:
1. Add `metadata[_job_id]` to the Document returned by any similarity search
2. Add `explore_job_stats` to let users explore job statistics and
improve debuggability
3. Set the minimum row limit for running create vector index.
2024-01-15 10:45:15 -08:00
Karim Lalani
14244bd7e5
community[minor]: Added document loader for SurrealDB (#15995)
Added a simple document loader to work with SurrealDB.
2024-01-15 10:32:42 -08:00
Karim Lalani
768e5e33bc
community[minor]: Fix to match SurrealDB 0.3.2 SDK (#15996)
The new version of the SurrealDB Python SDK was causing the integration to
break. This fix addresses that change.
2024-01-15 10:31:59 -08:00
Mahad
f7706637a8
docs: fix documentation broken link in integrations chroma (#16041)
- **Description:** Fixed a broken link in the documentation for Chroma.
  - **Issue:** 
  - **Dependencies:**
2024-01-15 08:37:03 -08:00
Erick Friis
7b084b4cc7
docs: more pip installs (#15771)
- vertex chat
- google
- some pip openai
- percent and openai
- all percent
- more
- pip
- fmt
- docs: google vertex partner docs
- fmt
- docs: more pip installs
2024-01-12 18:16:00 -08:00
Virat Singh
eb6e385dc5
community: Add PolygonAPIWrapper and get_last_quote endpoint (#15971)
- **Description:** Added a `PolygonAPIWrapper` and an initial
`get_last_quote` endpoint, which allows us to get the last price quote
for a given `ticker`. Once merged, I can add a Polygon tool in `tools/`
for agents to use.
- **Twitter handle:** [@virattt](https://twitter.com/virattt)

The Polygon.io Stocks API provides REST endpoints that let you query the
latest market data from all US stock exchanges.
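
A rough sketch based on the description above; the method name `get_last_quote` is taken from the PR title, while the constructor and the `POLYGON_API_KEY` environment variable are assumptions:

```python
# Rough sketch; get_last_quote is taken from the PR title, while the
# POLYGON_API_KEY environment variable is an assumption.
from langchain_community.utilities.polygon import PolygonAPIWrapper

polygon = PolygonAPIWrapper()
print(polygon.get_last_quote("AAPL"))  # latest price quote for the ticker
```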
2024-01-12 17:52:09 -08:00
Jonathan Algar
a74f3a4979
Batch update of alt text and title attributes for images in md/mdx files across repo (#15357)
**Description:** Batch update of alt text and title attributes for
images in `md` & `mdx` files across the repo using
[alttexter](https://github.com/jonathanalgar/alttexter)/[alttexter-ghclient](https://github.com/jonathanalgar/alttexter-ghclient)
(built using LangChain/LangSmith).

**Limitation:** cannot update `ipynb` files because of [this
issue](https://github.com/langchain-ai/langchain/pull/15357#issuecomment-1885037250).
Can revisit when Docusaurus is bumped to v3.

I checked all the generated alt texts and titles and didn't find any
technical inaccuracies. That's not to say they're _perfect_, but a lot
better than what's there currently.


[Deployed](https://langchain-819yf1tbk-langchain.vercel.app/docs/modules/model_io/)
image example:


![chrome_yZQ7BF2GTj](https://github.com/langchain-ai/langchain/assets/93204286/43a9a4d4-70fd-41c4-8978-b6240ff63ffa)

You can see LangSmith traces for all the calls out to the LLM in the PRs
merged into this one:

* https://github.com/jonathanalgar/langchain/pull/6
* https://github.com/jonathanalgar/langchain/pull/4
* https://github.com/jonathanalgar/langchain/pull/3

I didn't add the following files to the PR as the images already have OK
alt texts:

*
27dca2d92f/docs/docs/integrations/providers/argilla.mdx (L3)
*
27dca2d92f/docs/docs/integrations/providers/apify.mdx (L11)

---------

Co-authored-by: github-actions <github-actions@github.com>
2024-01-12 14:37:48 -08:00
Varik Matevosyan
efe6cfafe2
community: Added Lantern as VectorStore (#12951)
Support [Lantern](https://github.com/lanterndata/lantern) as a new
VectorStore type.

- Added Lantern as VectorStore.
It will support 3 distance functions `l2 squared`, `cosine` and
`hamming` and will use `HNSW` index.
- Added tests
- Added example notebook
2024-01-12 12:00:16 -08:00
Christophe Bornet
bc60203d0f
Add documentation for AstraDBStore (#15953)
Preview:
https://langchain-git-fork-cbornet-astradb-store-doc-langchain.vercel.app/docs/integrations/stores/astradb
2024-01-12 10:44:46 -08:00
Rihards Gravis
6a48ea43ec
docs: Update Robocorp Action Server installation instructions (#15943)
**Description:**

Remove the section on how to install Action Server and direct users to
the instructions in the Robocorp repository.

**Reason:**

Robocorp Action Server has moved from a pip installation to a standalone
CLI application and is due for changes. Because of that, only the part of
the documentation relevant to the LangChain integration is kept.
2024-01-12 09:46:18 -08:00
Raunak
e26e1f8b37
community: Added functions to make async calls to HuggingFaceHub's embedding endpoint in HuggingFaceHubEmbeddings class (#15737)
**Description:**
Added aembed_documents() and aembed_query() async functions to the
HuggingFaceHubEmbeddings class in
langchain_community\embeddings\huggingface_hub.py. They support making
async calls to HuggingFaceHub's embedding endpoint and generating
embeddings asynchronously (see the sketch below).
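
```python
# Sketch of the new async methods; uses the class's default model and
# assumes a HUGGINGFACEHUB_API_TOKEN environment variable is set.
import asyncio

from langchain_community.embeddings import HuggingFaceHubEmbeddings


async def main() -> None:
    embeddings = HuggingFaceHubEmbeddings()
    vector = await embeddings.aembed_query("Hello, world!")
    print(len(vector))


asyncio.run(main())
```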

Test Cases: Added test_huggingfacehub_embedding_async_documents() and
test_huggingfacehub_embedding_async_query()
functions in test_huggingface_hub.py file to test the two async
functions created in HuggingFaceHubEmbeddings class.

Documentation: Updated huggingfacehub.ipynb with steps to install
huggingface_hub package and use
HuggingFaceHubEmbeddings.

**Dependencies:** None,
**Twitter handle:** I do not have a Twitter account

---------

Co-authored-by: H161961 <Raunak.Raunak@Honeywell.com>
2024-01-11 21:52:55 -08:00
Mu Xian Ming
560bb49c99
docs: redis_chat_message_history.ipynb integration doc (#15789)
- **Description:** Updated the docs for the memory integration module
redis_chat_message_history.ipynb
  - **Issue:** #15664
  - **Dependencies:** N/A

Co-authored-by: Mu Xianming <mu.xianming@lmwn.com>
2024-01-11 21:42:31 -08:00
Xin Liu
5efec068c9
feat: Implement stream interface (#15875)

Major changes:

- Rename `wasm_chat.py` to `llama_edge.py`
- Rename the `WasmChatService` class to `ChatService`
- Implement the `stream` interface for `ChatService`
- Add `test_chat_wasm_service_streaming` in the integration test
- Update `llama_edge.ipynb`

---------

Signed-off-by: Xin Liu <sam@secondstate.io>
2024-01-11 21:32:48 -08:00
Christophe Bornet
c56060bb7d
Add document loader section to Astra provider doc page (#15882)
See preview:
https://langchain-git-fork-cbornet-provider-astra-doc-loader-langchain.vercel.app/docs/integrations/providers/astradb#ocument-loader
2024-01-11 07:52:29 -08:00
xvjixiang
611f18c944
Docs: Fix a typo in elasticsearch vectorstore notebook (#15807)
2024-01-10 20:30:44 -08:00
mogith-pn
9e1ed17bfb
Community : Modified doc strings and example notebook for Clarifai (#15816)

Description:
1. Modified doc strings inside clarifai vectorstore class and
embeddings.
2. Modified notebook examples.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-01-10 19:33:10 -08:00
Daniel
6d299a55c0
docs: Update cohere.mdx, Text embedding had incorrect code snippet (#15840)
The text embedding code snippet was incorrect.
2024-01-10 19:25:29 -08:00
Erick Friis
38523d7c57
together[minor]: add llm (#15853) 2024-01-10 17:55:34 -08:00
Harrison Chase
21a1538949
add raga reranker (#15838) 2024-01-10 11:07:19 -08:00
Harrison Chase
a1d7f2b3e1
add dspy notebook (#15798) 2024-01-10 08:01:08 -08:00
NuODaniel
70b6315b23
community[patch]: fix qianfan chat stream calling caused exception (#13800)
- **Description:**
`QianfanChatEndpoint` extends `BaseChatModel` as its super class, whose
default stream implementation may concatenate the MessageChunks with
`__add__`. When stream() is called, a ValueError for a duplicated key is
raised.
  - **Issues:** 
     * #13546  
     * #13548
     * merge two single test file related to qianfan.
  - **Dependencies:** no
  - **Tag maintainer:**

---------

Co-authored-by: root <liujun45@baidu.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-01-09 15:29:25 -08:00
Erick Friis
7be72e1103
openai[patch], docs: readme (#15773) 2024-01-09 11:52:24 -08:00
Erick Friis
7562f70c95
robocorp[minor]: Add robocorp action server toolkit (#15766)
Co-authored-by: Rihards Gravis <rihards@gravis.lv>
Co-authored-by: Mikko Korpela <mikko@robocorp.com>
2024-01-09 11:29:19 -08:00
Erick Friis
7bc100fd43
docs: integration package pip installs (#15762)
More than 300 files - this will fail check_diff. Will merge after the
Vercel deploy succeeds.

There are still occurrences that need changing - will update more later.
2024-01-09 11:13:10 -08:00
Kane Sweet
167a0ac5f5
docs: update aws_dynamodb integration doc (#15666)
- **Description:** 
- Updated the docs for the memory integration module
`aws_dynamodb.ipynb`
  - **Issue:** 
    - #15664 
  - **Dependencies:** 
    - N/A
2024-01-08 12:27:29 -08:00
Christophe Bornet
1f5f6381ec
Add doc for AstraDB document loader (#15703)
See preview :
https://langchain-git-fork-cbornet-astra-loader-doc-langchain.vercel.app/docs/integrations/document_loaders/astradb
2024-01-08 12:21:46 -08:00
Harrison Chase
38ae4df3a1
update ragatouille integration (#15658) 2024-01-07 10:51:34 -08:00
Shaurya Rohatgi
1bfb1725a1
fix: Ollama import statements (#15493)
2024-01-07 09:31:24 -08:00
Nan LI
0b393315ce
community: Correct Input API Key Name in Notebook and Enhance Readability of Comments for ZhipuAI Chat Model (#15529)
- **Description:** This update rectifies an error in the notebook by
changing the input variable from `zhipu_api_key` to `api_key`. It also
includes revisions to comments to improve program readability.
- **Issue:** The input variable in the notebook example should be
`api_key` instead of `zhipu_api_key`.
- **Dependencies:** No additional dependencies are required for this
change.

To ensure quality and standards, we have performed extensive linting and
testing. Commands such as make format, make lint, and make test have
been run from the root of the modified package to ensure compliance with
LangChain's coding standards.
2024-01-07 09:27:47 -08:00
V.Prasanna kumar
378d40f3ea
changed broken link for wandb tracing with agent (#15578)
Fix of #14905.


---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-01-07 08:48:15 -08:00
Erick Friis
b1fa726377
docs: langchain-openai (#15513)
Updates docs and cookbooks to import ChatOpenAI, OpenAI, and OpenAI
Embeddings from `langchain_openai`
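
For reference, the updated import style looks like this:

```python
from langchain_openai import ChatOpenAI, OpenAI, OpenAIEmbeddings
```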

There are likely more occurrences to update.

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-01-06 15:54:48 -08:00
Bagatur
c5226d7a18
docs: update cohere chat integration (#15562)
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-01-05 16:33:29 -08:00
Harrison Chase
fd5fbb507d
fix links (#15566)
there are still a few broken ones:

- some in the chains docs, which I will delete soon :)
- some pointing to a sqlite tool, which we should add
2024-01-04 21:57:30 -08:00
V.Prasanna kumar
aa1c7a56a9
docs: removed deprecated openai model (#15533)
removed the deprecated model from text embedding page of openai notebook
and added the suggested model from openai page


2024-01-04 17:53:42 -08:00
Harrison Chase
14966581df
add ragatouille (#15561) 2024-01-04 13:45:20 -08:00
Bagatur
baeac236b6
langchain[patch], experimental[patch]: update utilities imports (#15438) 2024-01-03 02:18:15 -05:00
Xin Liu
0a7d360ba4
feat: new integration wasm_chat (#14787)

Adds `WasmChat` integration. `WasmChat` runs GGUF models locally or via
chat service in lightweight and secure WebAssembly containers. In this
PR, `WasmChatService` is introduced as the first step of the
integration. `WasmChatService` is driven by
[llama-api-server](https://github.com/second-state/llama-utils) and
[WasmEdge Runtime](https://wasmedge.org/).

---------

Signed-off-by: Xin Liu <sam@secondstate.io>
2024-01-02 22:33:14 -08:00
Erick Friis
620168e459
docs: together ai updates (#15435) 2024-01-02 16:05:53 -08:00
Bagatur
93e924ec96
langchain[patch], docs: update agent toolkit imports (#15434) 2024-01-02 18:58:50 -05:00
Ashley Xu
0ce7858529
feat: add Google BigQueryVectorSearch in vectorstore (#14829)
BigQuery vector search lets you use GoogleSQL to do semantic search,
using vector indexes for fast but approximate results, or using brute
force for exact results.

This PR integrates LangChain vectorstore with BigQuery Vector Search.
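
A rough sketch (import paths and parameter names are assumptions; values are placeholders):

```python
# Rough sketch; import paths and parameter names are assumptions, and all
# values are placeholders.
from langchain_community.embeddings import VertexAIEmbeddings
from langchain_community.vectorstores import BigQueryVectorSearch

store = BigQueryVectorSearch(
    project_id="<gcp-project>",
    dataset_name="my_dataset",
    table_name="doc_and_vectors",
    embedding=VertexAIEmbeddings(),
)
store.add_texts(["BigQuery vector search supports GoogleSQL."])
print(store.similarity_search("GoogleSQL", k=1))
```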


---------

Co-authored-by: Vlad Kolesnikov <vladkol@google.com>
2024-01-02 15:57:14 -08:00
Shaurya Rohatgi
e1c2cd7a28
community: Semanticscholar tool to search 200M+ scientific articles (#15151)
- **Description:** The tool now supports querying over 200 million
scientific articles, vastly expanding its reach beyond the 2 million
articles accessible through Arxiv. This update significantly broadens
access to the entire scope of scientific literature (see the sketch below).
- **Dependencies:** semanticscholar
https://github.com/danielnsilva/semanticscholar
  - **Twitter handle:** @shauryr
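
```python
# Sketch; assumes the tool is exposed as SemanticScholarQueryRun and that
# the `semanticscholar` package is installed.
from langchain_community.tools.semanticscholar.tool import SemanticScholarQueryRun

tool = SemanticScholarQueryRun()
print(tool.run("biases in large language models"))
```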

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-01-02 15:36:03 -08:00
Bagatur
1678d6ca17
langchain[patch], experimental[patch], docs: update tools imports (#15433) 2024-01-02 18:23:34 -05:00
Leonid Ganeline
1e6519edc2
docs Microsoft platform page update (#15420)
Added two new document_loader references. Improved the format
consistency of the example pages
2024-01-02 14:59:40 -08:00
Bagatur
fa5d49f2c1
docs, experimental[patch], langchain[patch], community[patch]: update storage imports (#15429)
ran 
```bash
g grep -l "langchain.vectorstores" | xargs -L 1 sed -i '' "s/langchain\.vectorstores/langchain_community.vectorstores/g"
g grep -l "langchain.document_loaders" | xargs -L 1 sed -i '' "s/langchain\.document_loaders/langchain_community.document_loaders/g"
g grep -l "langchain.chat_loaders" | xargs -L 1 sed -i '' "s/langchain\.chat_loaders/langchain_community.chat_loaders/g"
g grep -l "langchain.document_transformers" | xargs -L 1 sed -i '' "s/langchain\.document_transformers/langchain_community.document_transformers/g"
g grep -l "langchain\.graphs" | xargs -L 1 sed -i '' "s/langchain\.graphs/langchain_community.graphs/g"
g grep -l "langchain\.memory\.chat_message_histories" | xargs -L 1 sed -i '' "s/langchain\.memory\.chat_message_histories/langchain_community.chat_message_histories/g"
gco master libs/langchain/tests/unit_tests/*/test_imports.py
gco master libs/langchain/tests/unit_tests/**/test_public_api.py
```
2024-01-02 16:47:11 -05:00
Bagatur
480626dc99
docs, community[patch], experimental[patch], langchain[patch], cli[pa… (#15412)
…tch]: import models from community

ran
```bash
git grep -l 'from langchain\.chat_models' | xargs -L 1 sed -i '' "s/from\ langchain\.chat_models/from\ langchain_community.chat_models/g"
git grep -l 'from langchain\.llms' | xargs -L 1 sed -i '' "s/from\ langchain\.llms/from\ langchain_community.llms/g"
git grep -l 'from langchain\.embeddings' | xargs -L 1 sed -i '' "s/from\ langchain\.embeddings/from\ langchain_community.embeddings/g"
git checkout master libs/langchain/tests/unit_tests/llms
git checkout master libs/langchain/tests/unit_tests/chat_models
git checkout master libs/langchain/tests/unit_tests/embeddings/test_imports.py
make format
cd libs/langchain; make format
cd ../experimental; make format
cd ../core; make format
```
2024-01-02 15:32:16 -05:00
Bagatur
8e0d5813c2
langchain[patch], experimental[patch]: replace langchain.schema imports (#15410)
Import from core instead.

Ran:
```bash
git grep -l 'from langchain.schema\.output_parser' | xargs -L 1 sed -i '' "s/from\ langchain\.schema\.output_parser/from\ langchain_core.output_parsers/g"
git grep -l 'from langchain.schema\.messages' | xargs -L 1 sed -i '' "s/from\ langchain\.schema\.messages/from\ langchain_core.messages/g"
git grep -l 'from langchain.schema\.document' | xargs -L 1 sed -i '' "s/from\ langchain\.schema\.document/from\ langchain_core.documents/g"
git grep -l 'from langchain.schema\.runnable' | xargs -L 1 sed -i '' "s/from\ langchain\.schema\.runnable/from\ langchain_core.runnables/g"
git grep -l 'from langchain.schema\.vectorstore' | xargs -L 1 sed -i '' "s/from\ langchain\.schema\.vectorstore/from\ langchain_core.vectorstores/g"
git grep -l 'from langchain.schema\.language_model' | xargs -L 1 sed -i '' "s/from\ langchain\.schema\.language_model/from\ langchain_core.language_models/g"
git grep -l 'from langchain.schema\.embeddings' | xargs -L 1 sed -i '' "s/from\ langchain\.schema\.embeddings/from\ langchain_core.embeddings/g"
git grep -l 'from langchain.schema\.storage' | xargs -L 1 sed -i '' "s/from\ langchain\.schema\.storage/from\ langchain_core.stores/g"
git checkout master libs/langchain/tests/unit_tests/schema/
make format
cd libs/experimental
make format
cd ../langchain
make format
```
2024-01-02 15:09:45 -05:00
Bob Lin
4488234d64
Update gpt4all.mdx doc (#15392)
The [pyllamacpp](https://github.com/nomic-ai/pyllamacpp) repository has
been archived and the model name and usage need to be changed.
2024-01-02 08:57:57 -08:00
Mateusz Szewczyk
cbfaccc424
WatsonxLLM updates/enhancements (#14598)
- **Description:** updates/enhancements to IBM
[watsonx.ai](https://www.ibm.com/products/watsonx-ai) LLM provider
(prompt tuned models and prompt templates deployments support)
- **Dependencies:**
[ibm-watsonx-ai](https://pypi.org/project/ibm-watsonx-ai/),
  - **Tag maintainer:** : @hwchase17 , @eyurtsev , @baskaryan 
  - **Twitter handle:** details in comment below.


---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-01-01 18:50:05 -08:00
Muntaqa Mahmood
cf2dd2fa25
Added: docs Headers to Steam Tool notebook steps (#14749)
Added some Headers in steam tool notebook to match consistency with the
other toolkit notebooks
  - Dependencies: no new dependencies
  - Tag maintainer: @hwchase17, @baskaryan

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-01-01 16:21:40 -08:00
Leonid Ganeline
de682761c5
docs microsoft pages sort order fix (#14771)
The `integrations/document_loaders/` `Excel` and `OneNote` pages appeared in the
wrong sort order in the navbar because the file names did not match the page
titles.
- renamed the `excel` and `onenote` files
2024-01-01 16:10:59 -08:00
Nan LI
f506b4cfd2
community: Integration of New Chat Model Based on ChatGLM3 via ZhipuAI API (#15105)
- **Description:** 
- This PR introduces a significant enhancement to the LangChain project
by integrating a new chat model powered by the third-generation base
large model, ChatGLM3, via the zhipuai API.
- This advanced model supports function calling, code interpretation, and
intelligent Agent capabilities (a hedged usage sketch follows this description).
- The additions include the chat model itself, comprehensive documentation in
the form of a Python notebook, and thorough testing with both unit and
integration tests.
- **Dependencies:** This update relies on the ZhipuAI package as a key
dependency.
- **Twitter handle:** If this PR receives spotlight attention, we would
be honored to receive a mention for our integration of the advanced
ChatGLM3 model via the ZhipuAI API. Kindly tag us at @kaiwu.
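A hedged usage sketch; the class name, model name, and API-key parameter below are assumptions based on this PR's description, so check the integration notebook for the exact API:
```python
from langchain_community.chat_models import ChatZhipuAI  # assumed import path
from langchain_core.messages import HumanMessage

# Model name and API-key parameter are placeholders
chat = ChatZhipuAI(model="chatglm_turbo", zhipuai_api_key="<your-api-key>")
print(chat.invoke([HumanMessage(content="Hello from ChatGLM3!")]).content)
```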

To ensure quality and standards, we have performed extensive linting and
testing. Commands such as make format, make lint, and make test have
been run from the root of the modified package to ensure compliance with
LangChain's coding standards.

TO DO: Continue refining and enhancing both the unit tests and
integration tests.

---------

Co-authored-by: jing <jingguo92@gmail.com>
Co-authored-by: hyy1987 <779003812@qq.com>
Co-authored-by: jianchuanqi <qijianchuan@hotmail.com>
Co-authored-by: lirq <whuclarence@gmail.com>
Co-authored-by: whucalrence <81530213+whucalrence@users.noreply.github.com>
Co-authored-by: Jing Guo <48378126+JaneCrystall@users.noreply.github.com>
2024-01-01 15:17:03 -08:00
Hin
2cf1e73d12
Feat add volcano embedding (#14693)
Description: Volcano Ark is an enterprise-grade large-model service
platform for developers, providing a full range of functions and
services such as model training, inference, evaluation, and fine-tuning. See
its homepage at https://www.volcengine.com/docs/82379/1099455
for details. This change lets developers use the platform for
embeddings; a hedged usage sketch follows below.
Issue: None
Dependencies: volcengine
Tag maintainer: @baskaryan
Twitter handle: @hinnnnnnnnnnnns
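A hedged sketch of how the embedding class might be used; the class name and credential parameters are assumptions, so consult the integration notebook for the exact API:
```python
from langchain_community.embeddings import VolcanoEmbeddings  # assumed class name

# Access key / secret key are placeholders for Volcano Ark credentials
embeddings = VolcanoEmbeddings(volcano_ak="<access-key>", volcano_sk="<secret-key>")
vector = embeddings.embed_query("hello world")
print(len(vector))
```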

---------

Co-authored-by: lujingxuansc <lujingxuansc@bytedance.com>
2024-01-01 14:37:35 -08:00
Harrison Chase
81a7a83b21
[docs] update toolkit docs (#15294)
it's probably too much work to do all toolkits before 0.1

Also, some agent toolkits (like pandas and python) will require more
work
2024-01-01 14:21:55 -08:00
Vardhaman
c2c2f252c2
docs: updated document for 'Return Source Documents' Functionality (#15106)

- **Description:** updated the outdated code in the documentation that was
producing an error (a hedged sketch of the documented pattern follows below),
  - **Issue:** #15086 ,
  - **Dependencies:** N/A,
- **Twitter handle:** [@vardhaman722](https://twitter.com/vardhaman722)
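For reference, a hedged sketch of the documented pattern; the in-memory FAISS store and OpenAI models here are illustrative stand-ins:
```python
from langchain.chains import RetrievalQA
from langchain_community.chat_models import ChatOpenAI
from langchain_community.embeddings import OpenAIEmbeddings
from langchain_community.vectorstores import FAISS

# Illustrative retriever; any vector store retriever works the same way
vectorstore = FAISS.from_texts(
    ["LangChain chains can return their source documents."], OpenAIEmbeddings()
)
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(),
    retriever=vectorstore.as_retriever(),
    return_source_documents=True,  # the option this docs section covers
)
result = qa.invoke({"query": "What can LangChain chains return?"})
print(result["result"])
print(result["source_documents"])
```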
2024-01-01 14:03:16 -08:00
Abhishek Mishra
f466b6aa4a
Documentation: Update playwright documentation for langchain version >= 0.0.351 (#15260)
- A documentation change in the example listed under:
https://python.langchain.com/docs/integrations/toolkits/playwright
- `create_async_playwright_browser` no longer exists under the module
`langchain.tools.playwright.utils` for versions >= 0.0.351 (a hedged
sketch of the updated import is shown below)
  - No dependencies to be changed
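A hedged sketch of the updated import location; the module paths assume the current `langchain_community` layout:
```python
# For langchain >= 0.0.351 the helpers live in langchain_community
from langchain_community.agent_toolkits import PlayWrightBrowserToolkit
from langchain_community.tools.playwright.utils import create_async_playwright_browser

async_browser = create_async_playwright_browser()
toolkit = PlayWrightBrowserToolkit.from_browser(async_browser=async_browser)
tools = toolkit.get_tools()
```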

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-01-01 13:38:56 -08:00
Yinghao Zhu
870b4033ed
docs(ollama): Fix Documentation in CallbackManager, missing ]) (#15380)
- **Description:** This PR corrects a documentation error in the
`ollama` usage tutorial. Specifically, it fixes a missing `])` in the
`CallbackManager()` example, ensuring that the code snippet is
syntactically correct and can be executed (a hedged sketch of the corrected
snippet appears below).
  - **Issue:** N/A
- **Dependencies:** No additional dependencies are required for this
change.
  - **Twitter handle:** My twitter is @yhzhu99
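A hedged sketch of the corrected snippet; the model name is a placeholder:
```python
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain_community.llms import Ollama

llm = Ollama(
    model="llama2",
    # the fix adds the closing `])` on this line
    callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),
)
llm.invoke("Tell me a joke")
```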
2024-01-01 13:17:32 -08:00
Ofer Mendelevitch
11accf8366
Community: Newlines before bullets in IPYNB files (Vectara) (#15330)
- **Description:** updated all Vectara IPYNB files so that bullets render
correctly in the docs (added a newline before each bullet list)
  - **Twitter handle:** @ofermend
2023-12-30 14:04:04 -08:00
Harrison Chase
f20c56db41
[documentation] documentation revamp (#15281)
needs new versions of langchain-core and langchain

---------

Co-authored-by: Nuno Campos <nuno@langchain.dev>
2023-12-29 14:51:06 -08:00
Bob Lin
a464eb4394
community: Make doctran synchronous (#15264)
### Description

The methods in [the doctran
library](https://github.com/psychic-api/doctran) have been restructured
into [synchronous
versions](14944a59f7),
and [the example
ipynb](https://github.com/psychic-api/doctran/blob/main/examples.ipynb)
shows that the code is now synchronous, but the README has not been
updated yet.

So we need to modify the code and update the documentation; a hedged sketch of
the synchronous usage follows.
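A hedged sketch of the now-synchronous call, assuming the `DoctranTextTranslator` transformer and an OpenAI API key in the environment; parameter names follow the current docs:
```python
from langchain_community.document_transformers import DoctranTextTranslator
from langchain_core.documents import Document

translator = DoctranTextTranslator(language="spanish", openai_api_model="gpt-3.5-turbo")
docs = [Document(page_content="Hello, how are you today?")]
# After this change, transform_documents runs synchronously -- no asyncio needed
translated = translator.transform_documents(docs)
print(translated[0].page_content)
```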

### Issue

https://github.com/langchain-ai/langchain/issues/14645
2023-12-28 08:05:24 -08:00
Vardhaman
15e53a99b2
docs: updated wrong output in Upstash Redis Cache section of LLM Ca… (#15140)
…ching documentation

- **Description:** Fixed the wrong output and code block comment in the
`Upstash Redis` Cache section of the LLM Caching documentation (a hedged
setup sketch appears below),
  - **Issue:** #15139 ,
  - **Dependencies:** N/A,
- **Twitter handle:** [@vardhaman722](https://twitter.com/vardhaman722)
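For context, a hedged sketch of the cache setup that section documents; the URL and token are placeholders for your Upstash Redis instance:
```python
from upstash_redis import Redis
from langchain.globals import set_llm_cache
from langchain.cache import UpstashRedisCache

# Placeholder Upstash credentials
set_llm_cache(
    UpstashRedisCache(
        redis_=Redis(url="<UPSTASH_REDIS_REST_URL>", token="<UPSTASH_REDIS_REST_TOKEN>")
    )
)
```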
2023-12-26 13:08:21 -08:00
KallieLev
3154c9bc9f
docs: Update dependencies installation cell in steam toolkit (#15148)
**Description:** `decouple` is not the correct package name; it's
`python-decouple`, so the notebook's installation cell doesn't run as written.
The install line should read roughly `pip install python-decouple` instead.

2023-12-26 12:07:03 -08:00
Takuya Igei
6da2246215
Add support Vertex AI Gemini uses a public image URL (#14949)
## What

Since `langchain_google_genai.ChatGoogleGenerativeAI` supports public
image URLs, we add support for them in `langchain.chat_models.ChatVertexAI`
as well.

### Example

```py
from langchain.chat_models.vertexai import ChatVertexAI
from langchain_core.messages import HumanMessage

llm = ChatVertexAI(model_name="gemini-pro-vision")
image_message = {
    "type": "image_url",
    "image_url": {
        "url": "https://python.langchain.com/assets/images/cell-18-output-1-0c7fb8b94ff032d51bfe1880d8370104.png",
    },
}
text_message = {
    "type": "text",
    "text": "What is shown in this image?",
}
message = HumanMessage(content=[text_message, image_message])

output = llm([message])
print(output.content)
```

## Refs

-
https://python.langchain.com/docs/integrations/llms/google_vertex_ai_palm
-
https://python.langchain.com/docs/integrations/chat/google_generative_ai
2023-12-22 13:19:09 -08:00
Sid Sarasvati
0e3da6d8d2
Update youtube_transcript.ipynb (#15015)
`add_video_info` should be `False` in the first example (a hedged sketch follows below).
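A hedged sketch of the corrected first example; the URL is a placeholder:
```python
from langchain_community.document_loaders import YoutubeLoader

loader = YoutubeLoader.from_youtube_url(
    "https://www.youtube.com/watch?v=<video-id>",  # placeholder URL
    add_video_info=False,  # corrected value for the first example
)
docs = loader.load()
```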

2023-12-22 12:47:05 -08:00
Philip Kiely - Baseten
6342da333a
community: refactor Baseten integration with new API endpoints & docs (#15017)
- **Description:** In response to user feedback, this PR refactors the
Baseten integration around the updated model endpoints and updates the
relevant documentation (a hedged usage sketch follows below). This PR has been
tested by end users in production and works as expected.
  - **Issue:** N/A
- **Dependencies:** This PR actually removes the dependency on the
`baseten` package!
  - **Twitter handle:** https://twitter.com/basetenco
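A hedged usage sketch; the parameter names and the expectation of a `BASETEN_API_KEY` environment variable are assumptions about the refactored endpoints, so check the updated docs:
```python
from langchain_community.llms import Baseten

# Model ID and deployment name are placeholders (assumed parameters)
llm = Baseten(model="<baseten-model-id>", deployment="production")
print(llm.invoke("What is the Mistral wind?"))
```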
2023-12-22 12:46:24 -08:00
QIAN Zifei
2460f977c5
community[minor]: Azure DocumentIntelligenceLoader/Parser support update with latest SDK (#14389)
- **Description:**
Add DocumentIntelligenceLoader & DocumentIntelligenceParser
implementation using the latest Azure Document Intelligence SDK with
markdown support.
The core logic resides in DocumentIntelligenceParser;
DocumentIntelligenceLoader is a thin wrapper around the parser.
The parser takes api_endpoint and api_key and creates a
DocumentIntelligenceClient for the user. Four parsing modes are supported:
1. Markdown (default)
2. Single
3. Page 
4. Object

The unit tests and notebook are also updated accordingly; a hedged loader sketch follows.
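A hedged loader sketch using the names given in this description; the import path, file-path parameter, and mode value are assumptions, so consult the updated notebook for the exact API:
```python
from langchain_community.document_loaders import DocumentIntelligenceLoader  # assumed path

# Endpoint, key, and file path are placeholders; "markdown" is the default mode
loader = DocumentIntelligenceLoader(
    file_path="example.pdf",
    api_endpoint="<your-endpoint>",
    api_key="<your-key>",
    mode="markdown",
)
docs = loader.load()
```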

- **Dependencies:** Azure Document Intelligence SDK:
azure-ai-documentintelligence
[azure-sdk-for-python/sdk/documentintelligence/azure-ai-documentintelligence
at 7c42462ac662522a6fd21b17d2a20f4cd40d0356 · Azure/azure-sdk-for-python
(github.com)](https://github.com/Azure/azure-sdk-for-python/tree/7c42462ac662522a6fd21b17d2a20f4cd40d0356/sdk/documentintelligence/azure-ai-documentintelligence).

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2023-12-21 16:40:27 -08:00