### Description
The feature for pseudonymizing data with ability to retrieve original
text (deanonymization) has been implemented. In order to protect private
data, such as when querying external APIs (OpenAI), it is worth
pseudonymizing sensitive data to maintain full privacy. But then, after
the model response, it would be good to have the data in the original
form.
I implemented the `PresidioReversibleAnonymizer`, which consists of two
parts:
1. anonymization - it works the same way as `PresidioAnonymizer`, plus
the object itself stores a mapping of made-up values to original ones,
for example:
```
{
    "PERSON": {
        "<anonymized>": "<original>",
        "John Doe": "Slim Shady"
    },
    "PHONE_NUMBER": {
        "111-111-1111": "555-555-5555"
    }
    ...
}
```
2. deanonymization - using the mapping described above, it matches the fake
data with the original data and then substitutes it.
Between anonymization and deanonymization, the user can perform different
operations, for example passing the output to an LLM (see the sketch below).
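A minimal round-trip sketch (hedged: the `langchain_experimental.data_anonymizer` import path is an assumption):
```python
# Hedged sketch of the anonymize -> LLM -> deanonymize round trip.
from langchain_experimental.data_anonymizer import PresidioReversibleAnonymizer

anonymizer = PresidioReversibleAnonymizer()

# Step 1: replace PII with fake values and remember the mapping.
anonymized = anonymizer.anonymize("My name is Slim Shady, call me at 555-555-5555")

# ... pass `anonymized` to an LLM here ...

# Step 2: restore the original values in the (possibly transformed) output.
restored = anonymizer.deanonymize(anonymized)
```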
### Future works
- **instance anonymization** - at this point, each occurrence of PII is
treated as a separate entity and separately anonymized. Therefore, two
occurrences of the name John Doe in the text will be changed to two
different names. It is therefore worth introducing support for full
instance detection, so that repeated occurrences are treated as a single
object.
- **better matching and substitution of fake values for real ones** -
currently the strategy is based on matching full strings and then
substituting them. Due to the indeterminism of language models, it may
happen that the value in the answer is slightly changed (e.g. *John Doe*
-> *John* or *Main St, New York* -> *New York*) and such a substitution
is then no longer possible. Therefore, it is worth adjusting the
matching for your needs.
- **Q&A with anonymization** - once all the functionality is written, it
would be a valuable addition to the documentation to write a notebook
about retrieval from documents using anonymization: an iterative process
of adding new recognizers to fit the data, lessons learned, and what to
look out for
### Twitter handle
@deepsense_ai / @MaksOpp
---------
Co-authored-by: MaksOpp <maks.operlejn@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Squashed from #7454 with updated features
We have separated the `SQLDatabaseChain` from `VectorSQLDatabaseChain` and
put everything into `experimental/`.
Below is the original PR message from #7454.
-------
We have been working on features to fill up the gap among SQL, vector
search and LLM applications. Some inspiring works like self-query
retrievers for VectorStores (for example
[Weaviate](https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/weaviate_self_query.html)
and
[others](https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/self_query.html))
really turn those vector search databases into a powerful knowledge
base! 🚀🚀
We are exploring whether we can merge all of this into one, combining SQL,
vector search, and LLMChains, making this SQL vector database the single
source of your data. Here are some benefits we can think of for now, maybe
you have more 👀:
With ALL the data you have: since you store all your data in the database,
you don't need to worry about foreign keys or links between names from
other data sources.
Flexible data structure: Even if you have changed your schema, for
example added a table, the LLM will know how to JOIN those tables and
use those as filters.
SQL compatibility: We found that vector databases on the market that
support SQL have similar interfaces, which means you can change your
backend with no pain: just change the name of the distance function in
your DB solution and you are ready to go!
### Issue resolved:
- [Feature Proposal: VectorSearch enabled
SQLChain?](https://github.com/hwchase17/langchain/issues/5122)
### Change made in this PR:
- An improved schema handling that ignore `types.NullType` columns
- A SQL output parser interface in `SQLDatabaseChain` to enable Vector
SQL capability and more
- A Retriever based on `SQLDatabaseChain` to retrieve data from the
database for RetrievalQAChains and many others
- Allow `SQLDatabaseChain` to retrieve data in Python-native format
- Includes PR #6737
- Vector SQL Output Parser for `SQLDatabaseChain` and
`SQLDatabaseChainRetriever`
- Prompts that can implement text to VectorSQL
- Corresponding unit-tests and notebook
### Twitter handle:
- @MyScaleDB
### Tag Maintainer:
Prompts / General: @hwchase17, @baskaryan
DataLoaders / VectorStores / Retrievers: @rlancemartin, @eyurtsev
### Dependencies:
No dependency added
# Description
This pull request makes it possible to use
[NucliaDB](https://docs.nuclia.dev/docs/docs/nucliadb/intro) as a vector
store in LangChain.
It works with both a [local NucliaDB
instance](https://docs.nuclia.dev/docs/docs/nucliadb/deploy/basics) and
[Nuclia Cloud](https://nuclia.cloud).
# Dependencies
It requires an up-to-date version of the `nuclia` Python package.
@rlancemartin, @eyurtsev, @hinthornw, please review it when you have a
moment :)
Note: our Twitter handle is `@NucliaAI`
This PR replaces the generic `SET search_path TO` statement by `USE` for
the Trino dialect since Trino does not support `SET search_path`.
Official Trino documentation can be found
[here](https://trino.io/docs/current/sql/use.html).
With this fix, the `SQLDatabase` will now be able to set the current
schema and execute queries using the Trino engine. It will use the
catalog set as default by the connection uri.
- Description: Remove hardcoded/duplicated distance strategies in the
PGVector store.
- Issue: NA
- Dependencies: NA
- Twitter handle: @archmonkeymojo
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description: Updated Additional Resources section of documentation and
added to YouTube videos with excellent playlist of Langchain content
from Sam Witteveen
- Issue: None -- updating documentation
- Dependencies: None
- Tag maintainer: @baskaryan
I have updated the code to ensure consistent error handling for
ImportError. Instead of relying on ValueError as before, I've followed
the standard practice of raising ImportError while also including
detailed error messages. This modification improves code clarity and
explicitly indicates that any issues are related to module imports.
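A minimal sketch of the pattern (the package name is a placeholder):
```python
# Illustrative pattern only; `somepackage` is a placeholder module name.
try:
    import somepackage
except ImportError:
    raise ImportError(
        "Could not import the somepackage python package. "
        "Please install it with `pip install somepackage`."
    )
```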
`mypy` cannot type-check code that relies on dependencies that aren't
installed.
Eventually we'll probably want to install as many optional dependencies
as possible. However, the full "extended deps" setup for langchain
creates a 3GB cache file and takes a while to unpack and install. We'll
probably want something a bit more targeted.
This is a first step toward something better.
A test file was accidentally dropping a `results.json` file in the
current working directory as a result of running `make test`.
This is undesirable, since we don't want to risk accidentally adding
stray files into the repo if we run tests locally and then do `git add
.` without inspecting the file list very closely.
Makes it easier to do recursion using regular python compositional
patterns
```py
def lambda_decorator(func):
    """Decorate function as a RunnableLambda"""
    return runnable.RunnableLambda(func)

@lambda_decorator
def fibonacci(a, config: runnable.RunnableConfig) -> int:
    if a <= 1:
        return a
    else:
        return fibonacci.invoke(a - 1, config) + fibonacci.invoke(a - 2, config)

fibonacci.invoke(10)
```
https://smith.langchain.com/public/cb98edb4-3a09-4798-9c22-a930037faf88/r
Also makes it more natural to do things like error handle and call other
langchain objects in ways we probably don't want to support in
`with_fallbacks()`
```py
@lambda_decorator
def handle_errors(a, config: runnable.RunnableConfig) -> int:
    try:
        return my_chain.invoke(a, config)
    except MyExceptionType as exc:
        return my_other_chain.invoke({"original": a, "error": exc}, config)
```
In this case, the next chain takes in the exception object. Maybe this
could be something we toggle in `with_fallbacks` but I fear we'll get
into uglier APIs + heavier cognitive load if we try to do too much there
---------
Co-authored-by: Nuno Campos <nuno@boringbits.io>
- Description: Fix bug in SPARQL intent selection
- Issue: After the change in #7758, the intent is always set to "UPDATE".
Indeed, if the answer to the prompt contains only "SELECT", the
`find("SELECT")` operation returns a higher value than the `-1` returned
by `find("UPDATE")` (see the sketch after this list).
- Dependencies: None
- Tag maintainer: @baskaryan @aditya-29
- Twitter handle: @mario_scrock
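A hedged sketch of a more robust selection rule (the helper name is hypothetical, not the PR's exact code):
```python
def select_intent(generated: str) -> str:
    # Choose the keyword that actually occurs, treating find() == -1 as absent.
    positions = {kw: generated.find(kw) for kw in ("SELECT", "UPDATE")}
    present = {kw: pos for kw, pos in positions.items() if pos != -1}
    # Fall back to SELECT when neither keyword appears.
    return min(present, key=present.get) if present else "SELECT"

assert select_intent("SELECT ?s WHERE { ?s ?p ?o }") == "SELECT"
```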
It seems the caching action was not always correctly recreating
softlinks. At first glance, the softlinks it created seemed fine, but
they didn't always work. Possibly hitting some kind of underlying bug,
but not particularly worth debugging in depth -- we can manually create
the soft links we need.
- Revert "Temporarily disable step that seems to be transiently failing.
(#10234)"
- Refresh shell hashtable and show poetry/python location and version.
Make sure that changes to CI infrastructure get tested on CI before
being merged.
Without this PR, changes to the poetry setup action don't trigger a CI
run and in principle could break `master` when merged.
Text Generation Inference's client permits the use of a None temperature,
as seen
[here](033230ae66/clients/python/text_generation/client.py (L71C9-L71C20)).
While I have dived into TGI's server code and don't know about the
implications of using None as a temperature setting, I think we should
grant users the option to pass None as a temperature parameter to TGI.
#9304 introduced a critical bug. The S3DirectoryLoader fails completely
because boto3 checks the naming of keyword arguments and one of the args
is badly named (very sorry for that).
cc @baskaryan
Changes in:
- `create_sql_agent` function, so that users can easily add custom tools
to complement the toolkit.
- the **sql use case** notebook, to showcase 2 examples of extra tools.
The motivation for these changes is the possibility of including
domain-expert knowledge in the agent, which improves accuracy and
reduces time/tokens.
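A hedged sketch of adding a custom tool (the `extra_tools` parameter name is assumed from this PR; the tool itself is a stub):
```python
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.chat_models import ChatOpenAI
from langchain.sql_database import SQLDatabase
from langchain.tools import Tool

db = SQLDatabase.from_uri("sqlite:///my.db")  # placeholder database
llm = ChatOpenAI(temperature=0)

# A domain-knowledge tool the agent can consult before writing SQL.
glossary_tool = Tool(
    name="business_glossary",
    func=lambda q: "MRR = monthly recurring revenue",  # stub lookup
    description="Look up definitions of internal business terms.",
)

agent = create_sql_agent(
    llm=llm,
    toolkit=SQLDatabaseToolkit(db=db, llm=llm),
    extra_tools=[glossary_tool],  # parameter name assumed from this PR
)
```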
---------
Co-authored-by: Manuel Soria <manuel.soria@greyscaleai.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
## Description
### Issue
This pull request addresses a lingering issue identified in PR #7070. In
that previous pull request, an attempt was made to address the problem
of empty embeddings when using the `OpenAIEmbeddings` class. While PR
#7070 introduced a mechanism to retry requests for embeddings, it didn't
fully resolve the issue as empty embeddings still occasionally
persisted.
### Problem
In certain specific use cases, empty embeddings can be encountered when
requesting data from the OpenAI API. In some cases, these empty
embeddings can be skipped or removed without affecting the functionality
of the application. However, they might not always be resolved through
retries, and their presence can adversely affect the functionality of
applications relying on the `OpenAIEmbeddings` class.
### Solution
To provide a more robust solution for handling empty embeddings, we
propose the introduction of an optional parameter, `skip_empty`, in the
`OpenAIEmbeddings` class. When set to `True`, this parameter will enable
the behavior of automatically skipping empty embeddings, ensuring that
problematic empty embeddings do not disrupt the processing flow. The
developer will be able to optionally toggle this behavior if needed
without disrupting the application flow.
## Changes Made
- Added an optional parameter, `skip_empty`, to the `OpenAIEmbeddings`
class.
- When `skip_empty` is set to `True`, empty embeddings are automatically
skipped without causing errors or disruptions.
### Example Usage
```python
from langchain.embeddings import OpenAIEmbeddings

# Initialize the OpenAIEmbeddings class with skip_empty=True
embeddings = OpenAIEmbeddings(openai_api_key="your_api_key", skip_empty=True)

# Request embeddings; empty embeddings are automatically skipped.
# `docs` is a variable containing the already split text.
results = embeddings.embed_documents(docs)
# Process results without interruption from empty embeddings
```
- Description:
Add a 'download_dir' argument to the VLLM model (to change the cache
download directory when retrieving a model from the HF hub; see the
sketch after this list)
- Issue:
On some remote machine, I want the cache dir to be in a volume where I
have space (models are heavy nowadays). Sometimes the default HF cache
dir might not be what we want.
- Dependencies:
None
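A hedged usage sketch (model name and path are placeholders):
```python
from langchain.llms import VLLM

llm = VLLM(
    model="mosaicml/mpt-7b",
    download_dir="/mnt/models",  # new argument from this PR
)
print(llm("What is the capital of France?"))
```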
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Previous PR #9353 had incomplete type checks and deprecation warnings.
This PR fixes those type checks and adds a deprecation warning to the
MyScale vector store.
(Reopens PR #7706; hopefully this fixes the problem.)
When using `pdfplumber`, some documents may be parsed incorrectly,
resulting in **duplicated characters**.
Taking the
[linked](https://bruusgaard.no/wp-content/uploads/2021/05/Datasheet1000-series.pdf)
document as an example:
## Before
```python
from langchain.document_loaders import PDFPlumberLoader
pdf_file = 'file.pdf'
loader = PDFPlumberLoader(pdf_file)
docs = loader.load()
print(docs[0].page_content)
```
Results:
```
11000000 SSeerriieess
PPoorrttaabbllee ssiinnggllee ggaass ddeetteeccttoorrss ffoorr HHyyddrrooggeenn aanndd CCoommbbuussttiibbllee ggaasseess
TThhee RRiikkeenn KKeeiikkii GGPP--11000000 iiss aa ccoommppaacctt aanndd
lliigghhttwweeiigghhtt ggaass ddeetteeccttoorr wwiitthh hhiigghh sseennssiittiivviittyy ffoorr
tthhee ddeetteeccttiioonn ooff hhyyddrrooccaarrbboonnss.. TThhee mmeeaassuurreemmeenntt
iiss ppeerrffoorrmmeedd ffoorr tthhiiss ppuurrppoossee bbyy mmeeaannss ooff ccaattaallyyttiicc
sseennssoorr.. TThhee GGPP--11000000 hhaass aa bbuuiilltt--iinn ppuummpp wwiitthh
ppuummpp bboooosstteerr ffuunnccttiioonn aanndd aa ddiirreecctt sseelleeccttiioonn ffrroomm
aa lliisstt ooff 2255 hhyyddrrooccaarrbboonnss ffoorr eexxaacctt aalliiggnnmmeenntt ooff tthhee
ttaarrggeett ggaass -- OOnnllyy ccaalliibbrraattiioonn oonn CCHH iiss nneecceessssaarryy..
44
FFeeaattuurreess
TThhee RRiikkeenn KKeeiikkii 110000vvvvttaabbllee ssiinnggllee HHyyddrrooggeenn aanndd
CCoommbbuussttiibbllee ggaass ddeetteeccttoorrss..
TThheerree aarree 33 ssttaannddaarrdd mmooddeellss::
GGPP--11000000:: 00--1100%%LLEELL // 00--110000%%LLEELL ›› LLEELL ddeetteeccttoorr
NNCC--11000000:: 00--11000000ppppmm // 00--1100000000ppppmm ›› PPPPMM
ddeetteeccttoorr
DDiirreecctt rreeaaddiinngg ooff tthhee ccoonncceennttrraattiioonn vvaalluueess ooff
ccoommbbuussttiibbllee ggaasseess ooff 2255 ggaasseess ((55 NNPP--11000000))..
EEaassyy ooppeerraattiioonn ffeeaattuurree ooff cchhaannggiinngg tthhee ggaass nnaammee
ddiissppllaayy wwiitthh 11 sswwiittcchh bbuuttttoonn..
LLoonngg ddiissttaannccee ddrraawwiinngg ppoossssiibbllee wwiitthh tthhee ppuummpp
bboooosstteerr ffuunnccttiioonn..
VVaarriioouuss ccoommbbuussttiibbllee ggaasseess ccaann bbee mmeeaassuurreedd bbyy tthhee
ppppmm oorrddeerr wwiitthh NNCC--11000000..
www.bruusgaard.no postmaster@bruusgaard.no +47 67 54 93 30 Rev: 446-2
```
We can see that there are a large number of duplicated characters in the
text, which can cause issues in subsequent applications.
## After
Therefore, based on the
[solution](https://github.com/jsvine/pdfplumber/issues/71) provided by
the `pdfplumber` source project, I added the `dedupe_chars()` method to
address this problem (just set the parameter `dedupe` to `True`).
```python
from langchain.document_loaders import PDFPlumberLoader
pdf_file = 'file.pdf'
loader = PDFPlumberLoader(pdf_file, dedupe=True)
docs = loader.load()
print(docs[0].page_content)
```
Results:
```
1000 Series
Portable single gas detectors for Hydrogen and Combustible gases
The Riken Keiki GP-1000 is a compact and
lightweight gas detector with high sensitivity for
the detection of hydrocarbons. The measurement
is performed for this purpose by means of catalytic
sensor. The GP-1000 has a built-in pump with
pump booster function and a direct selection from
a list of 25 hydrocarbons for exact alignment of the
target gas - Only calibration on CH is necessary.
4
Features
The Riken Keiki 100vvtable single Hydrogen and
Combustible gas detectors.
There are 3 standard models:
GP-1000: 0-10%LEL / 0-100%LEL › LEL detector
NC-1000: 0-1000ppm / 0-10000ppm › PPM
detector
Direct reading of the concentration values of
combustible gases of 25 gases (5 NP-1000).
Easy operation feature of changing the gas name
display with 1 switch button.
Long distance drawing possible with the pump
booster function.
Various combustible gases can be measured by the
ppm order with NC-1000.
www.bruusgaard.no postmaster@bruusgaard.no +47 67 54 93 30 Rev: 446-2
```
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Implemented the MilvusTranslator for self-querying using Milvus vector
store
- Made unit tests to test its functionality
- Documented the Milvus self-querying
- Description: this PR adds the possibility to configure boto3 in the S3
loaders. Any named argument you add will be used to create the Boto3
session. This is useful when the AWS credentials can't be passed as env
variables or can't be read from the credentials file.
- Issue: N/A
- Dependencies: N/A
- Tag maintainer: ?
- Twitter handle: cbornet_
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Various improvements to the Model I/O section of the documentation
- Changed "Chat Model" to "chat model" in a few spots for internal
consistency
- Minor spelling & grammar fixes to improve readability & comprehension
Hi,
I noticed a typo in the local_llms.ipynb file and fixed it: the word
'challenge' was missing an 'a' in the original file.
@baskaryan , @eyurtsev
Thanks.
Co-authored-by: Fliprise <fliprise@Fliprises-MacBook-Pro.local>
This PR implements two new classes in the cache module: `CassandraCache`
and `CassandraSemanticCache`, similar in structure and functionality to
their Redis counterpart: providing a cache for the response to a
(prompt, llm) pair.
Integration tests are included. Moreover, linting and type checks are
all passing on my machine.
Dependencies: the `pyproject.toml` and `poetry.lock` have the newest
version of cassIO (the very same as in the Cassandra vector store
metadata PR, submitted as #9280).
If I may suggest, this issue and #9280 might be reviewed together (as
they bring the same poetry changes along), so I'm tagging @baskaryan who
already helped out a little with poetry-related conflicts there. (Thank
you!)
I'd be happy to add a short notebook if this is deemed necessary (but it
seems to me that, contrary e.g. to vector stores, caches are not covered
in specific notebooks).
Thank you!
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Various miscellaneous fixes to most pages in the 'Retrievers' section of
the documentation:
- "VectorStore" and "vectorstore" changed to "vector store" for
consistency
- Various spelling, grammar, and formatting improvements for readability
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Enhance the SerpApi response parsing, which has the potential to produce more relevant output.
<img width="345" alt="Screenshot 2023-09-01 at 8 26 13 AM"
src="https://github.com/langchain-ai/langchain/assets/10222402/80ff684d-e02e-4143-b218-5c1b102cbf75">
Query: What is the weather in Pomfret?
**Before:**
> I should look up the current weather conditions.
...
Final Answer: The current weather in Pomfret is 73°F with 1% chance of
precipitation and winds at 10 mph.
**After:**
> I should look up the current weather conditions.
...
Final Answer: The current weather in Pomfret is 62°F, 1% precipitation,
61% humidity, and 4 mph wind.
---
Query: Top team in english premier league?
**Before:**
> I need to find out which team is currently at the top of the English
Premier League
...
Final Answer: Liverpool FC is currently at the top of the English
Premier League.
**After:**
> I need to find out which team is currently at the top of the English
Premier League
...
Final Answer: Man City is currently at the top of the English Premier
League.
---
Query: Any upcoming events in Paris?
**Before:**
> I should look for events in Paris
Action: Search
...
Final Answer: Upcoming events in Paris this month include Whit Sunday &
Whit Monday (French National Holiday), Makeup in Paris, Paris Jazz
Festival, Fete de la Musique, and Salon International de la Maison de.
**After:**
> I should look for events in Paris
Action: Search
...
Final Answer: Upcoming events in Paris include Elektric Park 2023, The
Aces, and BEING AS AN OCEAN.
JSONLoader.load does not specify `encoding` in
`self.file_path.read_text()`, unlike `self.file_path.open()`.
Description:
Gmail message retrieval in GmailGetMessage and GmailSearch returned an
empty string when encountering multipart emails. This change correctly
extracts the email body for multipart emails.
Dependencies: None
@hwchase17 @vowelparrot
# Description
This change allows you to customize the prompt used in
`create_extraction_chain` as well as `create_extraction_chain_pydantic`.
It also adds the `verbose` argument to
`create_extraction_chain_pydantic` - because `create_extraction_chain`
had it already and `create_extraction_chain_pydantic` did not.
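A hedged sketch of passing a custom prompt (the prompt wording and model are placeholders; the prompt must still steer the model toward the schema's fields):
```python
from langchain.chains import create_extraction_chain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

schema = {
    "properties": {"name": {"type": "string"}, "height": {"type": "integer"}},
    "required": ["name"],
}

# Hypothetical custom prompt; adjust to your extraction needs.
prompt = ChatPromptTemplate.from_template(
    "Extract only explicitly stated facts from the passage:\n\n{input}"
)

llm = ChatOpenAI(temperature=0)
chain = create_extraction_chain(schema, llm, prompt=prompt, verbose=True)
```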
# Issue
N/A
# Dependencies
N/A
# Twitter
https://twitter.com/CamAHutchison
Hi,
- Description:
- Solves the issue #6478.
- Includes some additional rework on the `JSONLoader` class:
- Getting metadata is decoupled from `_get_text`
- Validating metadata_func is now performed by `_validate_metadata_func`,
instead of `_validate_content_key`
- Issue: #6478
- Dependencies: NA
- Tag maintainer: @hwchase17
Description: Adds tags and dataview fields to ObsidianLoader doc
metadata.
- Issue: #9800, #4991
- Dependencies: none
- Tag maintainer: My best guess is @hwchase17 looking through the git
logs
- Twitter handle: I don't use twitter, sorry!
### Description
There is a really nice class for saving chat messages into a database -
SQLChatMessageHistory.
It leverages SqlAlchemy to be compatible with any supported database (in
contrast with PostgresChatMessageHistory, which is basically the same
but is limited to Postgres).
However, the class is not really customizable in terms of what you can
store. I can imagine a lot of use cases in which one will need to save a
message date along with some additional metadata.
To solve this, I propose extracting the logic for converting a
BaseMessage to a SQLAlchemy model (and vice versa) into a separate class -
a message converter. So instead of rewriting the whole
SQLChatMessageHistory class, a user will only need to write a custom
model and a simple mapping class, and pass an instance as a parameter.
I also noticed that there is no documentation on this class, so I added
that too, with an example of custom message converter.
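A hedged sketch of such a custom converter (the `BaseMessageConverter` import path and method names follow this PR's description and may differ slightly in the merged code):
```python
from datetime import datetime

from langchain.memory.chat_message_histories.sql import BaseMessageConverter
from langchain.schema import BaseMessage, HumanMessage
from sqlalchemy import Column, DateTime, Integer, Text
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class CustomMessage(Base):
    __tablename__ = "custom_message_store"
    id = Column(Integer, primary_key=True)
    session_id = Column(Text)
    content = Column(Text)
    created_at = Column(DateTime)  # the extra metadata we want to keep

class CustomMessageConverter(BaseMessageConverter):
    def from_sql_model(self, sql_message) -> BaseMessage:
        return HumanMessage(content=sql_message.content)

    def to_sql_model(self, message: BaseMessage, session_id: str):
        return CustomMessage(
            session_id=session_id,
            content=message.content,
            created_at=datetime.now(),
        )

    def get_sql_model_class(self):
        return CustomMessage
```
An instance of `CustomMessageConverter` would then be passed to `SQLChatMessageHistory` as its converter parameter.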
### Issue
N/A
### Dependencies
N/A
### Tag maintainer
Not yet
### Twitter handle
N/A
Description: new chain for logical fallacy removal from model output in
chain and docs
Issue: n/a see above
Dependencies: none
Tag maintainer: @hinthornw in the past from my end, but not sure who that
would be for maintenance of chains
Twitter handle: no Twitter; feel free to call out my git user j-space-b
in any shout-out
Note: created documentation in docs/extras
---------
Co-authored-by: Jon Bennion <jb@Jons-MacBook-Pro.local>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Issue: closes #9855
* consolidates `from_texts` and `add_texts` functions for pinecone
upsert
* adds two types of batching (one for embeddings and one for index
upsert)
* adds thread pool size when instantiating pinecone index
## Description
When the `MultiQueryRetriever` is used to get the list of documents
relevant according to a query, inside a vector store, and at least one
of these contain metadata with nested dictionaries, a `TypeError:
unhashable type: 'dict'` exception is thrown.
This is caused by the `unique_union` function which, to guarantee the
uniqueness of the returned documents, tries, unsuccessfully, to hash the
nested dictionaries and use them as part of the key.
```python
unique_documents_dict = {
    (doc.page_content, tuple(sorted(doc.metadata.items()))): doc
    for doc in documents
}
```
## Issue
#9872 (MultiQueryRetriever (get_relevant_documents) raises TypeError:
unhashable type: 'dict' with dic metadata)
## Solution
A possible solution is to dump the metadata dict to a string and use it
as part of the hashed key.
```python
unique_documents_dict = {
    (doc.page_content, json.dumps(doc.metadata, sort_keys=True)): doc
    for doc in documents
}
```
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Hi, this PR enables configuring the html2text package, instead of being
bound to use the hardcoded values. While simply passing `ignore_links`
and `ignore_images` to the `transform_documents` method was possible, I
preferred passing them to the `__init__` method for 2 reasons:
1. It is more efficient in case of subsequent calls to
`transform_documents`.
2. It allows moving the "complexity" to the instantiation, keeping the
actual execution simple and general enough. IMO the transformers should
all follow this pattern, allowing something like this:
```python
# Instantiate transformers
transformers = [
    TransformerA(foo='bar'),
    TransformerB(bar='foo'),
    # others
]

# During execution, call them sequentially
documents = ...
for tr in transformers:
    documents = tr.transform_documents(documents)
```
Thanks for the reviews!
---------
Co-authored-by: taamedag <Davide.Menini@swisscom.com>
If the last_accessed_at metadata is a float, use it as a timestamp. This
allows supporting vector stores that do not store datetime objects, like
ChromaDB.
Fixes: https://github.com/langchain-ai/langchain/issues/3685
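A minimal sketch of the described behavior (the function name is hypothetical):
```python
from datetime import datetime
from typing import Union

def coerce_last_accessed(value: Union[datetime, float]) -> datetime:
    # Floats are interpreted as epoch timestamps; datetimes pass through.
    return datetime.fromtimestamp(value) if isinstance(value, float) else value

assert coerce_last_accessed(0.0) == datetime.fromtimestamp(0.0)
```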
- Description: Adds two optional parameters to the
DynamoDBChatMessageHistory class to enable users to pass in a name for
their PrimaryKey, or a Key object itself to enable the use of composite
keys, a common DynamoDB paradigm (see the sketch after this list).
[AWS DynamoDB Key
docs](https://aws.amazon.com/blogs/database/choosing-the-right-dynamodb-partition-key/)
- Issue: N/A
- Dependencies: N/A
- Twitter handle: N/A
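A hedged sketch of the two new options (parameter names follow this PR's description):
```python
from langchain.memory.chat_message_histories import DynamoDBChatMessageHistory

# Custom primary-key name instead of the default.
history = DynamoDBChatMessageHistory(
    table_name="SessionTable",
    session_id="user-123",
    primary_key_name="SessionId",
)

# Composite-key variant: pass a full Key object instead.
composite_history = DynamoDBChatMessageHistory(
    table_name="SessionTable",
    session_id="user-123",
    key={"PK": "user-123", "SK": "2023-09-01"},
)
```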
---------
Co-authored-by: Josh White <josh@ctrlstack.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Add SQLDatabaseSequentialChain Class to __init__.py so it can be
accessed and used
- Description: SQLDatabaseSequentialChain was not found when importing
the langchain_experimental package; in langchain_experimental.sql's
__init__.py, SQLDatabaseSequentialChain is now imported and added to the
__all__ list
- Issue: SQLDatabaseSequentialChain is not found in the
langchain_experimental package
- Dependencies: None
- Tag maintainer: None
- Twitter handle: None
The output at times lacks the closing markdown code block. The prompt is
changed to explicitly request the closing backticks.
## Description
This PR introduces a minor change to the TitanTakeoff integration.
Instead of specifying a port on localhost, users can now specify a base
URL. This allows users to use the integration if they have Titan Takeoff
deployed externally (not on localhost), and removes the hardcoded
reference to localhost "http://localhost:{port}".
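A hedged sketch (the `base_url` parameter name is assumed from this PR's description):
```python
from langchain.llms import TitanTakeoff

llm = TitanTakeoff(base_url="http://my-takeoff-host:8000")
print(llm("What is the weather in London?"))
```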
### Info about Titan Takeoff
Titan Takeoff is an inference server created by
[TitanML](https://www.titanml.co/) that allows you to deploy large
language models locally on your hardware in a single command. Most
generative model architectures are included, such as Falcon, Llama 2,
GPT2, T5 and many more.
Read more about Titan Takeoff here:
-
[Blog](https://medium.com/@TitanML/introducing-titan-takeoff-6c30e55a8e1e)
- [Docs](https://docs.titanml.co/docs/titan-takeoff/getting-started)
### Dependencies
No new dependencies are introduced. However, users will need to install
the titan-iris package in their local environment and start the Titan
Takeoff inferencing server in order to use the Titan Takeoff
integration.
Thanks for your help and please let me know if you have any questions.
cc: @hwchase17 @baskaryan
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
The current documentation does not mention that splits larger than the
chunk size can happen. I updated the related document to explain why this
happens and how to solve it.
Related issues: #1349, #3838, #2140
- Description: Added example of running Q&A over structured data using
the `Airbyte` loaders and `pandas`
- Tag maintainer: @hwchase17
- Twitter handle: @pelaseyed
Hi,
this PR contains a loader / parser for Azure Document Intelligence, which
is an ML-based service to ingest arbitrary PDFs / images, even if
scanned. The loader generates Documents page by page from the original
document. This is my first contribution to LangChain.
Unfortunately I could not find the correct place for test cases. Happy
to add one if you can point me to the location, but as this is a
cloud-based service, a test would require network access and credentials
- so it might be of limited help.
Dependencies: The needed dependency was already part of pyproject.toml,
no change.
Twitter: feel free to mention @LarsAC on the announcement
Fixed navbar:
- renamed several files, so ToC is sorted correctly
- made ToC items consistent: formatted several Titles
- added several links
- reformatted several docs to a consistent format
- renamed several files (removed `_example` suffix)
- added renamed files to the `docs/docs_skeleton/vercel.json`
This notebook was mistakenly placed in the `toolkits` folder and appeared
within the `Agents & Toolkits` menu, but it should be in `Tools`.
Moved the example into `tools/`; updated the title to a consistent format.
This small PR aims at supporting the following missing parameters in the
`HuggingfaceTextGen` LLM:
- `return_full_text` - sometimes useful for completion tasks
- `do_sample` - quite handy to control the randomness of the model.
- `watermark`
@hwchase17 @baskaryan
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
This PR follows the **Eden AI (LLM + embeddings) integration**. #8633
We added an optional parameter to choose different AI models for
providers (like 'text-bison' for provider 'google', 'text-davinci-003'
for provider 'openai', etc.).
Usage:
```python
llm = EdenAI(
    feature="text",
    provider="google",
    params={
        "model": "text-bison",  # new
        "temperature": 0.2,
        "max_tokens": 250,
    },
)
```
You can also change the provider + model after initialization
```python
llm = EdenAI(
    feature="text",
    provider="google",
    params={
        "temperature": 0.2,
        "max_tokens": 250,
    },
)

prompt = """
hi
"""

llm(prompt, providers='openai', model='text-davinci-003')  # change provider & model
```
The Jupyter notebook has been updated with an example as well.
Ping: @hwchase17, @baskaryan
---------
Co-authored-by: RedhaWassim <rwasssim@gmail.com>
Co-authored-by: sam <melaine.samy@gmail.com>
Adapting Microsoft Presidio to other languages requires a bit more work,
so for now it is a good idea to remove the language option, so as not to
cause errors and confusion.
https://microsoft.github.io/presidio/analyzer/languages/
I will handle different languages after the weekend 😄
This adds sqlite-vss as an option for a vector database. Contains the
code and a few tests. Tests are passing and the library sqlite-vss is
added as optional, as explained in the contributing guidelines. I
adjusted the code for lint/black/mypy. It looks like everything is
currently passing.
Adding sqlite-vss was mentioned in this issue:
https://github.com/langchain-ai/langchain/issues/1019.
Also mentioned here in the sqlite-vss repo for the curious:
https://github.com/asg017/sqlite-vss/issues/66
Maintainer tag: @baskaryan
---------
Co-authored-by: Philippe Oger <philippe.oger@adevinta.com>
- with_config() allows binding any config values to a Runnable, like
.bind() does for kwargs
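A minimal sketch of the idea (assuming the Runnable interface as of this PR):
```python
from langchain.schema.runnable import RunnableLambda

double = RunnableLambda(lambda x: x * 2)

# Bind config values (here, tags) so every invocation carries them,
# analogous to how .bind() pre-fills kwargs.
tagged = double.with_config({"tags": ["math"]})
print(tagged.invoke(3))  # 6, traced with the "math" tag
```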
This PR fixes an issue I found when upgrading to a more recent version
of Langchain. I was using 0.0.142 before, and this issue popped up
already when the `_custom_parser` was added to `output_parsers/json`.
Anyway, the issue is that the parser tries to escape quotes when they
are double-escaped (e.g. `\\"`), leading to OutputParserException.
This is particularly undesired in my app, because I have an Agent that
uses a single input Tool, which expects as input a JSON string with the
structure:
```python
{
    "foo": string,
    "bar": string
}
```
The LLM (GPT3.5) response is (almost) always something like
`"action_input": "{\\"foo\\": \\"bar\\", \\"bar\\": \\"foo\\"}"` and
since the upgrade this is not correctly parsed.
---------
Co-authored-by: taamedag <Davide.Menini@swisscom.com>
This fixes the example import line in the general "cassandra" doc page
mdx file (it was erroneously a copy of the chat message history import
statement found below).
Description: updated the prompt name in a sequential chain example so
that it is not overwritten by the same prompt name in the next chain
Issue: n/a
Dependencies: none
Tag maintainer: not known
Twitter handle: not on twitter, feel free to use my git username for
anything
Adds a call to Pydantic's `update_forward_refs` for the `Run` class (in
addition to the `ChainRun` and `ToolRun` classes, for which that method
is already called). Without it, the self-reference of child classes
(type `List[Run]`) is problematic. For example:
```python
from langchain.callbacks import StdOutCallbackHandler
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from wandb.integration.langchain import WandbTracer
llm = OpenAI()
prompt = PromptTemplate.from_template("1 + {number} = ")
chain = LLMChain(llm=llm, prompt=prompt, callbacks=[StdOutCallbackHandler(), WandbTracer()])
print(chain.run(number=2))
```
results in the following output before the change
```
WARNING:root:Error in on_chain_start callback: field "child_runs" not yet prepared so type is still a ForwardRef, you might need to call Run.update_forward_refs().
> Entering new LLMChain chain...
Prompt after formatting:
1 + 2 =
WARNING:root:Error in on_chain_end callback: No chain Run found to be traced
> Finished chain.
3
```
but afterwards the callback error messages are gone.
Hi there!
I'm excited to open this PR to add support for using 'Tencent Cloud
VectorDB' as a vector store.
Tencent Cloud VectorDB is a fully-managed, self-developed,
enterprise-level distributed database service designed for storing,
retrieving, and analyzing multi-dimensional vector data. The database
supports multiple index types and similarity calculation methods, with a
single index supporting vector scales up to 1 billion and capable of
handling millions of QPS with millisecond-level query latency. Tencent
Cloud VectorDB not only provides external knowledge bases for large
models to improve their accuracy, but also has wide applications in AI
fields such as recommendation systems, NLP services, computer vision,
and intelligent customer service.
The PR includes:
Implementation of Vectorstore.
I have read your [contributing
guidelines](72b7d76d79/.github/CONTRIBUTING.md).
And I have passed the tests below:
```bash
make format
make lint
make coverage
make test
```
This PR brings structural updates to `PlaywrightURLLoader`, aiming at
making the code more readable and extensible through the abstraction of
page evaluation logic. These changes also align this implementation with
a similar structure used in LangChain.js.
The key enhancements include:
1. Introduction of 'PlaywrightEvaluator', an abstract base class for all
evaluators.
2. Creation of 'UnstructuredHtmlEvaluator', a concrete class
implementing 'PlaywrightEvaluator', which uses `unstructured` library
for processing page's HTML content.
3. Extension of 'PlaywrightURLLoader' constructor to optionally accept
an evaluator of the type 'PlaywrightEvaluator'. It defaults to
'UnstructuredHtmlEvaluator' if no evaluator is provided.
4. Refactoring of 'load' and 'aload' methods to use the 'evaluate' and
'evaluate_async' methods of the provided 'PlaywrightEvaluator' for page
content handling.
This update brings flexibility to 'PlaywrightURLLoader' as it can now
utilize different evaluators for page processing depending on the
requirement. The abstraction also improves code maintainability and
readability.
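A hedged sketch of plugging in a custom evaluator (method signatures are assumptions based on this PR's description):
```python
from langchain.document_loaders import PlaywrightURLLoader
from langchain.document_loaders.url_playwright import PlaywrightEvaluator

class TextContentEvaluator(PlaywrightEvaluator):
    """Extract visible text instead of processing HTML with `unstructured`."""

    def evaluate(self, page, browser, response) -> str:
        return page.evaluate("() => document.body.innerText")

    async def evaluate_async(self, page, browser, response) -> str:
        return await page.evaluate("() => document.body.innerText")

loader = PlaywrightURLLoader(
    urls=["https://python.langchain.com/"],
    evaluator=TextContentEvaluator(),
)
docs = loader.load()
```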
Twitter: @ywkim
- Description: Add bloomz_7b, llama-2-7b, llama-2-13b, llama-2-70b to
ErnieBotChat, which only supported ERNIE-Bot-turbo and ERNIE-Bot.
- Issue: #10022,
- Dependencies: no extra dependencies
---------
Co-authored-by: hetianfeng <hetianfeng@meituan.com>
- Description: A change in the documentation example for Azure Cognitive
Vector Search with Scoring Profile so the example works as written
- Issue: #10015
- Dependencies: None
- Tag maintainer: @baskaryan @ruoccofabrizio
- Twitter handle: @poshporcupine
### Description
The feature for anonymizing data has been implemented. In order to
protect private data, such as when querying external APIs (OpenAI), it
is worth pseudonymizing sensitive data to maintain full privacy.
Anonymization consists of two steps:
1. **Identification:** Identify all data fields that contain personally
identifiable information (PII).
2. **Replacement**: Replace all PIIs with pseudo values or codes that do
not reveal any personal information about the individual but can be used
for reference. We're not using regular encryption, because the language
model won't be able to understand the meaning or context of the
encrypted data.
We use *Microsoft Presidio* together with *Faker* framework for
anonymization purposes because of the wide range of functionalities they
provide. The full implementation is available in `PresidioAnonymizer`.
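A minimal usage sketch (hedged: the import path is an assumption):
```python
from langchain_experimental.data_anonymizer import PresidioAnonymizer

anonymizer = PresidioAnonymizer()
print(anonymizer.anonymize("My name is Slim Shady, call me at 555-555-5555"))
# -> e.g. "My name is John Doe, call me at 111-111-1111" (fake values vary)
```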
### Future works
- **deanonymization** - add the ability to reverse anonymization. For
example, the workflow could look like this: `anonymize -> LLMChain ->
deanonymize`. By doing this, we will retain anonymity in requests to,
for example, OpenAI, and then be able to restore the original data.
- **instance anonymization** - at this point, each occurrence of PII is
treated as a separate entity and separately anonymized. Therefore, two
occurrences of the name John Doe in the text will be changed to two
different names. It is therefore worth introducing support for full
instance detection, so that repeated occurrences are treated as a single
object.
### Twitter handle
@deepsense_ai / @MaksOpp
---------
Co-authored-by: MaksOpp <maks.operlejn@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description: this PR adds `s3_object_key` and `s3_bucket` to the doc
metadata when loading an S3 file. This is particularly useful when using
`S3DirectoryLoader` to remove the files from the dir once they have been
processed (getting the object keys from the metadata `source` field
seems brittle)
- Dependencies: N/A
- Tag maintainer: ?
- Twitter handle: _cbornet
---------
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
This PR makes the following changes:
1. Documents become serializable using langchain serialization
2. Make a utility to create a docstore kv store
Will help to address issue here:
https://github.com/langchain-ai/langchain/issues/9345
- Description: In the function _load_run_evaluators, the function
_get_keys was not called if only the custom_evaluators parameter is used,
- Issue: no issue created for this yet,
- Dependencies: None,
- Tag maintainer: @vowelparrot,
- Twitter handle: Buckler89
---------
Co-authored-by: ddroghini <d.droghini@mflgroup.com>
Description: This commit uses the new Service object in Selenium
webdriver as executable_path has been [deprecated and removed in
selenium version
4.11.2](9f5801c82f)
Issue: https://github.com/langchain-ai/langchain/issues/9808
Tag Maintainer: @eyurtsev
- Description: In my previous PR, I had modified the code to catch all
kinds of [SOURCES, sources, Source, Sources]. However, this change
included checking for a colon or a white space which should actually
have been only checking for a colon.
Adds support for [llmonitor](https://llmonitor.com) callbacks.
It enables:
- Requests tracking / logging / analytics
- Error debugging
- Cost analytics
- User tracking
Let me know if anything needs to be changed for merge.
Thank you!
The [Memory](https://python.langchain.com/docs/modules/memory/) menu is
clogged with unnecessary wording.
I've made it more concise by simplifying the titles of the example
notebooks.
As a result, the menu is shorter and easier to comprehend.
The [Memory
Types](https://python.langchain.com/docs/modules/memory/types/) menu is
clogged with unnecessary wording.
I've made it more concise by simplifying the titles of the example
notebooks.
As a result, the menu is shorter and easier to comprehend.
- Description: the implementation for similarity_search_with_score did
not actually include a score or logic to filter. Now fixed.
- Tag maintainer: @rlancemartin
- Twitter handle: @ofermend
# Description
This PR adds additional documentation on how to use Azure Active
Directory to authenticate to an OpenAI service within Azure. This method
of authentication allows organizations with more complex security
requirements to use Azure OpenAI.
# Issue
N/A
# Dependencies
N/A
# Twitter
https://twitter.com/CamAHutchison
Recently we made the decision that PromptGuard takes a list of strings
instead of a string.
@ggroode implemented the integration change.
---------
Co-authored-by: ggroode <ggroode@berkeley.edu>
Co-authored-by: ggroode <46691276+ggroode@users.noreply.github.com>
Clearly document that the PAL and CPAL techniques involve generating
code, and that such code must be properly sandboxed and given
appropriate narrowly-scoped credentials in order to ensure security.
While our implementations include some mitigations, Python and SQL
sandboxing is well-known to be a very hard problem and our mitigations
are no replacement for proper sandboxing and permissions management. The
implementation of such techniques must be performed outside the scope of
the Python process where this package's code runs, so its correct setup
and administration must therefore be the responsibility of the user of
this code.
- Description: added the _cosine_relevance_score_fn to
_select_relevance_score_fn of faiss.py to enable the use of cosine
distance for similarity in this vector store, and to comply with the
error message that implies cosine should be a valid distance strategy
- Issue: no relevant Issue found, but needed this function myself and
tested it in a private repo
- Dependencies: none
Neo4j has added vector index integration just recently. To allow both
ingestion and use in vector-based RAG applications, I wrapped it as a
vector store, as the implementation is completely different from
`GraphCypherQAChain`. Here, we are not generating any Cypher statements
at query time, we are simply doing the vector similarity search using
the new vector index as if we were dealing with a vector database.
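A hedged usage sketch (class name and connection parameters assumed from this PR's description):
```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Neo4jVector

store = Neo4jVector.from_texts(
    ["Neo4j now ships a native vector index."],
    OpenAIEmbeddings(),
    url="bolt://localhost:7687",
    username="neo4j",
    password="password",
)
print(store.similarity_search("vector index", k=1))
```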
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Update the Google Drive doc loader and retriever notebooks. Show how to use them with the langchain-googledrive package.
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Fixed title for the `extras/integrations/llms/llm_caching.ipynb`.
Existing title breaks the sorted order of items in the navbar.
Updated some formatting.
* Added links to the AI Network
* Made the title consistent with other toolkits
* Added `integrations/providers/` integration card page
* **No changes** in the example code!
Mypy was not able to determine a good type for `type_to_loader_dict`,
since the values in the dict are functions whose return types are
related to each other in a complex way. One can see this by adding a
line like `reveal_type(type_to_loader_dict)` and running mypy, which
will get mypy to show what type it has inferred for that value.
Adding an explicit type hint to help out mypy avoids the need for a mypy
suppression and allows the code to type-check cleanly.
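A hypothetical illustration of the fix (loader names are placeholders, not the real ones):
```python
from typing import Callable, Dict

def _load_foo(config: dict) -> object:  # placeholder loader functions
    return config

def _load_bar(config: dict) -> object:
    return dict(config)

# Without the explicit annotation, mypy tries to infer one type relating
# the heterogeneous factory functions; annotating the dict sidesteps that.
type_to_loader_dict: Dict[str, Callable[[dict], object]] = {
    "foo": _load_foo,
    "bar": _load_bar,
}
```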
In order to use the `requires` marker in langchain-experimental, a
*conftest.py* file is needed inside. Everything is identical to the main
langchain module.
Co-authored-by: maks-operlejn-ds <maks.operlejn@gmail.com>
- Fixed a broken link in the `integrations/providers/infino.mdx`
- Fixed a title in the `integration/collbacks/infino.ipynb` example
- Updated text format in this example.
We always overwrote the required args, even though we infer them by
default. Doing it only the old way made the LLM guess a value even if an
arg is optional (e.g., for UUIDs).
The most reliable way to not have a chain run an undesirable SQL command
is to not give it database permissions to run that command. That way the
database itself performs the rule enforcement, so it's much easier to
configure and use properly than anything we could add in ourselves.
## Description
The following PR enables the [grammar-based
sampling](https://github.com/ggerganov/llama.cpp/tree/master/grammars)
in llama-cpp LLM.
In short, loading a file with a formal grammar definition will constrain
model outputs. For instance, one can force the model to generate valid
JSON or only Python lists.
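A hedged sketch (the `grammar_path` argument is assumed from this PR; paths are placeholders):
```python
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="/models/llama-2-7b.Q4_K_M.gguf",
    grammar_path="/grammars/json.gbnf",  # formal grammar constraining output
)
print(llm("Describe a person as JSON:"))
```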
In the follow-up PR we will add:
* docs with some description why it is cool and how it works
* maybe some code sample for some task such as in llama repo
---------
Co-authored-by: Lance Martin <lance@langchain.dev>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Hi LangChain :) Thank you for such a great project!
I was going through the CONTRIBUTING.md and found a few minor issues.
Expose classmethods to conveniently initialize the vector store.
The purpose of this PR is to make it easy for users to initialize an
empty vector store that's properly pre-configured without having to index
documents into it via `from_documents`.
This will make it easier for users to rely on the following indexing
code: https://github.com/langchain-ai/langchain/pull/9614
to help manage data in the qdrant vectorstore.
### Description
The previous Redis implementation did not allow the user to specify the
index configuration (i.e. changing the underlying algorithm) or add
additional metadata to use for querying (i.e. hybrid or "filtered"
search).
This PR introduces the ability to specify custom index attributes and
metadata attributes as well as use that metadata in filtered queries.
Overall, more structure was introduced to the Redis implementation that
should allow for easier maintainability moving forward.
# New Features
The following features are now available with the Redis integration into
Langchain
## Index schema generation
The schema for the index will now be automatically generated if not
specified by the user. For example, suppose the data has multiple
metadata categories. The following example
```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores.redis import Redis

embeddings = OpenAIEmbeddings()

rds, keys = Redis.from_texts_return_keys(
    texts,
    embeddings,
    metadatas=metadata,
    redis_url="redis://localhost:6379",
    index_name="users"
)
```
Loading the data in through this and the other ``from_documents`` and
``from_texts`` methods will now generate an index schema in Redis like
the following.
View the index schema with the ``redisvl`` tool ([link](redisvl.com)):
```bash
$ rvl index info -i users
```
Index Information:
| Index Name | Storage Type | Prefixes | Index Options | Indexing |
|--------------|----------------|---------------|-----------------|------------|
| users | HASH | ['doc:users'] | [] | 0 |
Index Fields:
| Name | Attribute | Type | Field Option | Option Value |
|----------------|----------------|---------|----------------|----------------|
| user | user | TEXT | WEIGHT | 1 |
| job | job | TEXT | WEIGHT | 1 |
| credit_score | credit_score | TEXT | WEIGHT | 1 |
| content | content | TEXT | WEIGHT | 1 |
| age | age | NUMERIC | | |
| content_vector | content_vector | VECTOR | | |
### Custom Metadata specification
The metadata schema generation has the following rules:
1. All text fields are indexed as text fields.
2. All numeric fields are indexed as numeric fields.
If you would like to have a text field as a tag field, users can specify
overrides like the following for the example data
```python
# this can also be a path to a yaml file
index_schema = {
    "text": [{"name": "user"}, {"name": "job"}],
    "tag": [{"name": "credit_score"}],
    "numeric": [{"name": "age"}],
}
rds, keys = Redis.from_texts_return_keys(
    texts,
    embeddings,
    metadatas=metadata,
    redis_url="redis://localhost:6379",
    index_name="users2",
    index_schema=index_schema,
)
```
This will change the index specification to
Index Information:
| Index Name | Storage Type | Prefixes | Index Options | Indexing |
|--------------|----------------|----------------|-----------------|------------|
| users2 | HASH | ['doc:users2'] | [] | 0 |
Index Fields:
| Name | Attribute | Type | Field Option | Option Value |
|----------------|----------------|---------|----------------|----------------|
| user | user | TEXT | WEIGHT | 1 |
| job | job | TEXT | WEIGHT | 1 |
| content | content | TEXT | WEIGHT | 1 |
| credit_score | credit_score | TAG | SEPARATOR | , |
| age | age | NUMERIC | | |
| content_vector | content_vector | VECTOR | | |
and throw a warning to the user (log output) that the generated schema
does not match the specified schema.
```text
index_schema does not match generated schema from metadata.
index_schema: {'text': [{'name': 'user'}, {'name': 'job'}], 'tag': [{'name': 'credit_score'}], 'numeric': [{'name': 'age'}]}
generated_schema: {'text': [{'name': 'user'}, {'name': 'job'}, {'name': 'credit_score'}], 'numeric': [{'name': 'age'}]}
```
As long as this is intentional, this is fine.
The schema can be defined as a YAML file or a dictionary:
```yaml
text:
- name: user
- name: job
tag:
- name: credit_score
numeric:
- name: age
```
and you pass in a path like
```python
from pathlib import Path

rds, keys = Redis.from_texts_return_keys(
    texts,
    embeddings,
    metadatas=metadata,
    redis_url="redis://localhost:6379",
    index_name="users3",
    index_schema=Path("sample1.yml").resolve(),
)
```
This will create the same schema as defined in the dictionary example.
Index Information:
| Index Name | Storage Type | Prefixes | Index Options | Indexing |
|--------------|----------------|----------------|-----------------|------------|
| users3 | HASH | ['doc:users3'] | [] | 0 |
Index Fields:
| Name | Attribute | Type | Field Option | Option Value |
|----------------|----------------|---------|----------------|----------------|
| user | user | TEXT | WEIGHT | 1 |
| job | job | TEXT | WEIGHT | 1 |
| content | content | TEXT | WEIGHT | 1 |
| credit_score | credit_score | TAG | SEPARATOR | , |
| age | age | NUMERIC | | |
| content_vector | content_vector | VECTOR | | |
### Custom Vector Indexing Schema
Users with larger use cases may want to change how the vector index created
by LangChain is formulated. To utilize all the features of Redis for vector
database use cases like this, you can now pass in index attribute modifiers,
such as changing the indexing algorithm to HNSW.
```python
vector_schema = {
    "algorithm": "HNSW"
}
rds, keys = Redis.from_texts_return_keys(
    texts,
    embeddings,
    metadatas=metadata,
    redis_url="redis://localhost:6379",
    index_name="users3",
    vector_schema=vector_schema,
)
```
A more complex example may look like:
```python
vector_schema = {
    "algorithm": "HNSW",
    "ef_construction": 200,
    "ef_runtime": 20,
}
rds, keys = Redis.from_texts_return_keys(
    texts,
    embeddings,
    metadatas=metadata,
    redis_url="redis://localhost:6379",
    index_name="users3",
    vector_schema=vector_schema,
)
```
All names correspond to the arguments you would set if using Redis-py or
RedisVL. (put in doc link later)
### Better Querying
Both vector queries and range (limit) queries are now available, and
metadata is returned by default. The outputs are shown below.
```python
>>> query = "foo"
>>> results = rds.similarity_search(query, k=1)
>>> print(results)
[Document(page_content='foo', metadata={'user': 'derrick', 'job': 'doctor', 'credit_score': 'low', 'age': '14', 'id': 'doc:users:657a47d7db8b447e88598b83da879b9d', 'score': '7.15255737305e-07'})]
>>> results = rds.similarity_search_with_score(query, k=1, return_metadata=False)
>>> print(results) # no metadata, but with scores
[(Document(page_content='foo', metadata={}), 7.15255737305e-07)]
>>> results = rds.similarity_search_limit_score(query, k=6, score_threshold=0.0001)
>>> print(len(results)) # range query (only above threshold even if k is higher)
4
```
### Custom metadata filtering
A big advantage of Redis in this space is being able to do filtering on
data stored alongside the vector itself. With the example above, the
following is now possible in LangChain. The equivalence operators are
overridden to describe a new expression language that mimics that of
[redisvl](redisvl.com). This allows for arbitrarily long sequences of
SQL-like filters that can be used directly with vector queries and range
queries.
There are two interfaces for doing so, and both are shown below.
```python
>>> from langchain.vectorstores.redis import RedisFilter, RedisNum, RedisText
>>> age_filter = RedisFilter.num("age") > 18
>>> age_filter = RedisNum("age") > 18 # equivalent
>>> results = rds.similarity_search(query, filter=age_filter)
>>> print(len(results))
3
>>> job_filter = RedisFilter.text("job") == "engineer"
>>> job_filter = RedisText("job") == "engineer" # equivalent
>>> results = rds.similarity_search(query, filter=job_filter)
>>> print(len(results))
2
# fuzzy match text search
>>> job_filter = RedisFilter.text("job") % "eng*"
>>> results = rds.similarity_search(query, filter=job_filter)
>>> print(len(results))
2
# combined filters (AND)
>>> combined = age_filter & job_filter
>>> results = rds.similarity_search(query, filter=combined)
>>> print(len(results))
1
# combined filters (OR)
>>> combined = age_filter | job_filter
>>> results = rds.similarity_search(query, filter=combined)
>>> print(len(results))
4
```
All the above filter results can be checked against the data above.
### Other
- Issue: #3967
- Dependencies: No added dependencies
- Tag maintainer: @hwchase17 @baskaryan @rlancemartin
- Twitter handle: @sampartee
---------
Co-authored-by: Naresh Rangan <naresh.rangan0@walmart.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
This PR implements a custom chain that wraps Amazon Comprehend API
calls. The custom chain is aimed to be used with LLM chains to provide
moderation capability that lets you detect and redact PII, toxic, and
intent content in the LLM prompt or the LLM response. The
implementation accepts a configuration object to control which checks
will be performed on an LLM prompt, and it can be used in a variety of
setups using the LangChain Expression Language to detect the configured
info not only in chains, but also in other constructs such as a
retriever.
The included sample notebook goes over the different configuration
options and how to use it with other chains.
### Usage sample
```python
from langchain_experimental.comprehend_moderation import BaseModerationActions, BaseModerationFilters
moderation_config = {
    "filters": [
        BaseModerationFilters.PII,
        BaseModerationFilters.TOXICITY,
        BaseModerationFilters.INTENT,
    ],
    "pii": {
        "action": BaseModerationActions.ALLOW,
        "threshold": 0.5,
        "labels": ["SSN"],
        "mask_character": "X",
    },
    "toxicity": {
        "action": BaseModerationActions.STOP,
        "threshold": 0.5,
    },
    "intent": {
        "action": BaseModerationActions.STOP,
        "threshold": 0.5,
    },
}
comp_moderation_with_config = AmazonComprehendModerationChain(
    moderation_config=moderation_config,  # specify the configuration
    client=comprehend_client,  # optionally pass the Boto3 client
    verbose=True,
)
template = """Question: {question}
Answer:"""
prompt = PromptTemplate(template=template, input_variables=["question"])
responses = [
    "Final Answer: A credit card number looks like 1289-2321-1123-2387. A fake SSN number looks like 323-22-9980. John Doe's phone number is (999)253-9876.",
    "Final Answer: This is a really shitty way of constructing a birdhouse. This is fucking insane to think that any birds would actually create their motherfucking nests here.",
]
llm = FakeListLLM(responses=responses)
llm_chain = LLMChain(prompt=prompt, llm=llm)
chain = (
    prompt
    | comp_moderation_with_config
    | {llm_chain.input_keys[0]: lambda x: x["output"]}
    | llm_chain
    | {"input": lambda x: x["text"]}
    | comp_moderation_with_config
)
response = chain.invoke({"question": "A sample SSN number looks like this 123-456-7890. Can you give me some more samples?"})
print(response['output'])
```
### Output
```
> Entering new AmazonComprehendModerationChain chain...
Running AmazonComprehendModerationChain...
Running pii validation...
Found PII content..stopping..
The prompt contains PII entities and cannot be processed
```
---------
Co-authored-by: Piyush Jain <piyushjain@duck.com>
Co-authored-by: Anjan Biswas <anjanavb@amazon.com>
Co-authored-by: Jha <nikjha@amazon.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
This PR fixes `QuestionListOutputParser` text splitting.
`QuestionListOutputParser` incorrectly splits numbered list text into
lines. If the text doesn't end with `\n`, the regex doesn't capture the
last item, so it always returns `n - 1` items, and
`WebResearchRetriever.llm_chain` generates fewer queries than requested
in the search prompt.
How to reproduce:
```python
from langchain.retrievers.web_research import QuestionListOutputParser
parser = QuestionListOutputParser()
good = parser.parse(
"""1. This is line one.
2. This is line two.
""" # <-- !
)
bad = parser.parse(
"""1. This is line one.
2. This is line two.""" # <-- No new line.
)
assert good.lines == ['1. This is line one.\n', '2. This is line two.\n'], good.lines
assert bad.lines == ['1. This is line one.\n', '2. This is line two.'], bad.lines
```
NOTE: Last item will not contain a line break but this seems ok because
the items are stripped in the
`WebResearchRetriever.clean_search_query()`.
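For illustration, here is a minimal sketch of the kind of regex that handles both cases; the actual pattern used in the fix may differ:
```python
import re

# Sketch: capture numbered items whether or not the text ends with "\n",
# by accepting end-of-string as an alternative item boundary.
text = "1. This is line one.\n2. This is line two."
items = re.findall(r"\d+\..*?(?:\n|$)", text)
assert items == ["1. This is line one.\n", "2. This is line two."]
```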
Description: You cannot execute spark_sql with versions prior to 3.4 due
to the introduction of pyspark.errors in version 3.4.
And if you are below 3.4, you get "pyspark is not installed. Please
install it with `pip install pyspark`", which is not helpful. Also, if you
do not have pyspark installed, you already get the error in init. I would
return all errors. But if you have a different idea, feel free to
comment.
Issue: None
Dependencies: None
Maintainer:
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Description:
- adding implementation of delete for pgvector
- adding modification time in docs metadata for confluence and google
drive.
Issue:
https://github.com/langchain-ai/langchain/issues/9312
Tag maintainer: @baskaryan, @eyurtsev, @hwchase17, @rlancemartin.
---------
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
This adds Xata as a memory store to the Python version of LangChain as
well, similar to the [one for
LangChain.js](https://github.com/hwchase17/langchainjs/pull/2217).
I have added a Jupyter Notebook with a simple and a more complex example
using an agent.
To run the integration test, you need to execute something like:
```
XATA_API_KEY='xau_...' XATA_DB_URL="https://demo-uni3q8.eu-west-1.xata.sh/db/langchain" poetry run pytest tests/integration_tests/memory/test_xata.py
```
Where `langchain` is the database you create in Xata.
Still working out interface/notebooks + need discord data dump to test
out things other than copy+paste
Update:
- Going to remove the 'user_id' arg in the loaders themselves and just
standardize on putting the "sender" arg in the extra kwargs. Then can
provide a utility function to map these to ai and human messages
- Going to move the discord one into just a notebook since I don't have
a good dump to test on and copy+paste maybe isn't the greatest thing to
support in v0
- Need to do more testing on slack since it seems the dump only includes
channels and NOT 1 on 1 convos
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Adds the qdrant search filter/params to the
`max_marginal_relevance_search` method, which is present on the other
search methods. I did not add `offset` for pagination, because its
behavior would be ambiguous in this setting (since we fetch extra and
down-select).
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Kacper Łukawski <lukawski.kacper@gmail.com>
The Graph Chains are different in that they use two LLMChains
instead of one, unlike the retrievalQA chains. Therefore, sometimes you
want to use a different LLM to generate the database query and to
generate the final answer.
This feature would make it more convenient to use different LLMs in the
same chain.
I have also renamed the Graph DB QA Chain to Neo4j DB QA Chain in the
documentation only, as it is used only for Neo4j. The naming was
ambiguous, as it was the first graphQA chain added and I wasn't sure how
you want to spin it.
Updated design of the "API Reference" text.
It changed to the
`langchain.retrievers.ElasticSearchBM25Retriever` format, the same
format as in the API Reference ToC.
It also resembles code:
`from langchain.retrievers import ElasticSearchBM25Retriever` (namespace
THEN class_name)
The previous format was
`ElasticSearchBM25Retriever from langchain.retrievers` (class_name THEN
namespace).
This change is in line with other formats and improves readability.
@baskaryan
Uses the shorter import path
`from langchain.document_loaders import` instead of the full path
`from langchain.document_loaders.assemblyai`.
Applies those changes to the docs and the unit test.
See #9667, which adds this new loader.
Note: There are no changes in the file names!
- The group name on the main navbar changed: `Agent toolkits` -> `Agents
& Toolkits`. The examples here are a mix of Agent and Toolkit examples,
because Agents and Toolkits in examples are always used together.
- Titles changed: removed the "Agent" and "Toolkit" suffixes. The reason is
the same.
- Formatting: mostly cleaning up the header structure, so it looks
better on the right-side navbar.
The main navbar is looking much cleaner now.
⏳
- updated the top-level descriptions to a consistent format;
- changed several `ValueError`s to `ImportError` in the import cases;
- changed the format of several internal functions from "name" to
"_name", so these functions are not shown on the top-level API
Reference page (with lists of classes/functions)
Currently, `ChatOpenAI._stream` does not reflect `finish_reason` in
`generation_info`. Change it so that it does.
Same patch as https://github.com/langchain-ai/langchain/pull/9431 , but
also applies to _stream.
This PR adds a new document loader `AssemblyAIAudioTranscriptLoader`
that allows you to transcribe audio files with the [AssemblyAI
API](https://www.assemblyai.com) and load the transcribed text into
documents.
- Add new document_loader with class `AssemblyAIAudioTranscriptLoader`
- Add optional dependency `assemblyai`
- Add unit tests (using a Mock client)
- Add docs notebook
This is the equivalent to the JS integration already available in
LangChain.js. See the [LangChain JS docs AssemblyAI
page](https://js.langchain.com/docs/modules/data_connection/document_loaders/integrations/web_loaders/assemblyai_audio_transcription).
At its simplest, you can use the loader to get a transcript back from an
audio file like this:
```python
from langchain.document_loaders.assemblyai import AssemblyAIAudioTranscriptLoader
loader = AssemblyAIAudioTranscriptLoader(file_path="./testfile.mp3")
docs = loader.load()
```
To use it, you need the `assemblyai` Python package installed and the
environment variable `ASSEMBLYAI_API_KEY` set with your API key.
Alternatively, the API key can also be passed as an argument.
Twitter handles to shout out, if you'd be so kind 🙇:
[@AssemblyAI](https://twitter.com/AssemblyAI) and
[@patloeber](https://twitter.com/patloeber)
---------
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Improve internal consistency in LangChain documentation
- Change occurrences of eg and eg. to e.g.
- Fix headers containing unnecessary capital letters.
- Change instances of "few shot" to "few-shot".
- Add periods to end of sentences where missing.
- Minor spelling and grammar fixes.
This PR introduces a persistence layer to help with indexing workflows
into vectorstores.
The indexing code helps users to:
1. Avoid writing duplicated content into the vectorstore
2. Avoid over-writing content if it's unchanged
Importantly, this keeps on working even if the content being written is
derived via a set of transformations from some source content (e.g.,
indexing child documents that were derived from parent documents by
chunking).
The two main components are:
1. A persistence layer that keeps track of which keys were updated and
when. Keeping track of the timestamp of updates allows one to clean up
old content safely, and with minimal complexity.
2. `HashedDocument`, which is used to hash the contents (including
metadata) of the documents. We rely on the hashes for identifying
duplicates (see the sketch below).
The indexing code works with **ANY** document loader. To add
transformations to the documents, users for now can add a custom document
loader that composes an existing loader together with document
transformers.
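As a rough sketch of the deduplication idea (illustrative only, not the exact implementation):
```python
import hashlib
import json

def hash_document(page_content: str, metadata: dict) -> str:
    # Hash the content together with its metadata, so a change in either
    # produces a different key.
    payload = json.dumps(
        {"page_content": page_content, "metadata": metadata}, sort_keys=True
    )
    return hashlib.sha1(payload.encode("utf-8")).hexdigest()

# Identical documents hash to the same key and are written only once.
assert hash_document("foo", {"source": "a.txt"}) == hash_document(
    "foo", {"source": "a.txt"}
)
```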
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description: ~~Creates a new root_validator in `_AnthropicCommon` that
allows the use of `model_name` and `max_tokens` keyword arguments.~~
Adds pydantic field aliases to support `model_name` and `max_tokens` as
keyword arguments. Ultimately, this makes `ChatAnthropic` more
consistent with `ChatOpenAI`, making the two classes more
interchangeable for the developer.
- Issue: https://github.com/langchain-ai/langchain/issues/9510
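For illustration, the alias mechanism looks roughly like this (a minimal pydantic v1 sketch, not the actual class definition):
```python
from pydantic import BaseModel, Field

class AnthropicParams(BaseModel):
    # `model` can also be populated via the `model_name` alias, and
    # `max_tokens_to_sample` via `max_tokens`.
    model: str = Field(default="claude-2", alias="model_name")
    max_tokens_to_sample: int = Field(default=256, alias="max_tokens")

    class Config:
        allow_population_by_field_name = True

# OpenAI-style keyword arguments now work too.
params = AnthropicParams(model_name="claude-2", max_tokens=512)
assert params.max_tokens_to_sample == 512
```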
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Async equivalent coming in future PR
The Docugami loader was not returning the source metadata key. This was
triggering this exception when used with retrievers, per
https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/schema/prompt_template.py#L193C1-L195C41
The fix is simple and just updates the metadata key name for the
document each chunk is sourced from, from "name" to "source" as
expected.
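In other words, the gist of the change is a key rename along these lines (illustrative sketch):
```python
# Use the "source" key that retrievers expect instead of "name".
metadata["source"] = metadata.pop("name")
```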
I tested by running the Python notebook that has an end-to-end scenario
in it.
Tagging DataLoader maintainers @rlancemartin @eyurtsev
This pull request corrects the URL links in the Async API documentation
to align with the updated project layout. The links had not been updated
despite the changes in layout.
It is not obvious what the error is when you cannot index. This PR adds
the ability to log the first error's reason, to help the user diagnose the
issue.
Also added some more documentation for when you want to use the
vectorstore with an embedding model deployed in elasticsearch.
Credit: @elastic and @phoey1
- Description: when I set `content_format=ContentFormat.VIEW` and
`keep_markdown_format=True` on ConfluenceLoader, it shows the following
error:
```
langchain/document_loaders/confluence.py", line 459, in process_page
page["body"]["storage"]["value"], heading_style="ATX"
KeyError: 'storage'
```
The reason is that the content format was set to `view`, but it was
still trying to get the content from `page["body"]["storage"]["value"]`.
Also added the other content formats which are supported by the Atlassian
API:
https://stackoverflow.com/questions/34353955/confluence-rest-api-expanding-page-body-when-retrieving-page-by-title/34363386#34363386
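Conceptually, the fix is to look up the body under the key matching the requested content format rather than hardcoding `storage` (a simplified sketch):
```python
# content_format is whichever format was requested, e.g. "view" or "storage".
content_format = "view"
page_body = page["body"][content_format]["value"]
```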
- Issue: not applicable.
- Dependencies: added optional dependency `markdownify` if anyone wants
to extract in Markdown format.
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description: Added the capability to handle structured data from
Google Enterprise Search.
- Issue: The retriever failed when the underlying search engine was
integrated with structured data.
- Dependencies: google-api-core
- Tag maintainer: @jarokaz
- Twitter handle: anifort
---------
Co-authored-by: Christos Aniftos <aniftos@google.com>
Co-authored-by: Holt Skinner <13262395+holtskinner@users.noreply.github.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Updates the hub stubs to not fail when no API key is found, to support
singleton tenants and default values from SDK 0.1.6.
Also adds the ability to define `is_public` and `description` for backup
repo creation on push.
Currently, `generation_info` is not respected, because only messages are
reflected in chunks. Change it to add generations so that generation
chunks are merged properly.
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
With this PR:
- All lint and test jobs use the exact same Python + Poetry installation
approach, instead of lints doing it one way and tests doing it another
way.
- The Poetry installation itself is cached, which saves ~15s per run.
- We no longer pass shell commands as workflow arguments to a workflow
that just runs them in a shell. This makes our actions more resilient to
shell code injection.
If y'all like this approach, I can modify the scheduled tests workflow
and the release workflow to use this too.
Update installation instructions to only install test dependencies rather than all dependencies.
---------
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
- Description: the current code does not work very well in Jupyter
notebooks, so I changed the code so that it imports `tqdm.auto` instead.
- Issue: #9582
- Dependencies: N/A
- Tag maintainer: @hwchase17, @baskaryan
- Twitter handle: N/A
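The change boils down to the import below; `tqdm.auto` picks a notebook-friendly progress bar when running in Jupyter and falls back to the console version otherwise:
```python
from tqdm.auto import tqdm
```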
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
If another push to the same PR or branch happens while its CI is still
running, cancel the earlier run in favor of the next run.
There's no point in testing an outdated version of the code. GitHub only
allows a limited number of job runners to be active at the same time, so
it's better to cancel pointless jobs early so that more useful jobs can
run sooner.
It's possible that langchain-experimental works fine with the latest
*published* langchain, but is broken with the langchain on `master`.
Unfortunately, you can see this is currently the case — this is why this
PR also includes a minor fix for the `langchain` package itself.
We want to catch situations like that *before* releasing a new
langchain, hence this test.
The current timeouts are too long, and mean that if the GitHub cache
decides to act up, jobs get bogged down for 15min at a time. This has
happened 2-3 times already this week -- a tiny fraction of our total
workflows but really annoying when it happens to you. We can do better.
Installing deps on cache miss takes about ~4min, so it's not worth
waiting more than 4min for the deps cache. The black and mypy caches
save 1 and 2min, respectively, so wait only up to that long to download
them.
The previous approach was relying on `_test.yml` taking an input
parameter, and then doing almost completely orthogonal things for each
parameter value. I've separated out each of those test situations as its
own job or workflow file, which eliminated all the special-casing and,
in my opinion, improved maintainability by making it much more obvious
what code runs when.
# Description
This PR introduces a new toolkit for interacting with the AINetwork
blockchain. The toolkit provides a set of tools for performing various
operations on the AINetwork blockchain, such as transferring AIN,
reading and writing values to the blockchain database, managing apps,
and setting rules and owners.
# Dependencies
[ain-py](https://github.com/ainblockchain/ain-py) >= 1.0.2
# Misc
The example notebook
(langchain/docs/extras/integrations/toolkits/ainetwork.ipynb) is in the
PR
---------
Co-authored-by: kriii <kriii@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Introduces a conditional in `ArangoGraph.generate_schema()` to exclude
empty ArangoDB Collections from the schema
- Adds an empty collection test case
Issue: N/A
Dependencies: None
Description: Link an example of deploying a LangChain app to an AzureML
online endpoint to the deployments documentation page.
Co-authored-by: Vanessa Arndorfer <vaarndor@microsoft.com>
### Description
Polars is a DataFrame interface on top of an OLAP Query Engine
implemented in Rust.
Polars is faster at reading data than pandas, so I'm looking forward to
seeing it added as a document loader.
### Dependencies
polars (https://pola-rs.github.io/polars-book/user-guide/)
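A usage sketch, assuming an interface mirroring the existing pandas `DataFrameLoader` (the loader name and parameter below are assumptions):
```python
import polars as pl
from langchain.document_loaders import PolarsDataFrameLoader  # assumed name

df = pl.DataFrame({"text": ["hello", "world"], "source": ["a", "b"]})
loader = PolarsDataFrameLoader(df, page_content_column="text")  # assumed parameter
docs = loader.load()
```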
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
I have restructured the code to ensure uniform handling of ImportError.
In place of the previously used ValueError, I've adopted the standard
practice of raising ImportError with explanatory messages. This
modification enhances code readability and clarifies that any problems
stem from module importation.
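The pattern, sketched with a placeholder package name:
```python
try:
    import some_optional_package  # placeholder for the real dependency
except ImportError:
    raise ImportError(
        "Could not import some_optional_package. "
        "Please install it with `pip install some-optional-package`."
    )
```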
@eyurtsev , @baskaryan
Thanks
Add PromptGuard integration
-------
There are two approaches to integrate PromptGuard with a LangChain
application.
1. PromptGuardLLMWrapper
2. functions that can be used in the LangChain Expression Language.
-----
- Dependencies
`promptguard` Python package, which is a runtime requirement if you
try out the demo.
- @baskaryan @hwchase17 Thanks for the ideas and suggestions along the
development process.
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Using `${{ }}` to construct shell commands is risky, since the `${{ }}`
interpolation runs first and ignores shell quoting rules. This means
that shell commands that look safely quoted, like `echo "${{
github.event.issue.title }}"`, are actually vulnerable to shell
injection.
More details here:
https://github.blog/2023-08-09-four-tips-to-keep-your-github-actions-workflows-secure/
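The safer pattern recommended there is to pass untrusted input through an environment variable and let the shell expand it, for example:
```yaml
- name: Echo the issue title safely
  env:
    TITLE: ${{ github.event.issue.title }}
  run: echo "$TITLE"
```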
- Description: added graph_memgraph_qa.ipynb which shows how to use LLMs
to provide a natural language interface to a Memgraph database using the
[MemgraphGraph](https://github.com/langchain-ai/langchain/pull/8591)
class.
- Dependencies: given that the notebook utilizes the MemgraphGraph
class, it relies on both this class and several Python packages that are
installed in the notebook using pip (langchain, openai, neo4j,
gqlalchemy). The notebook is dependent on having a functional Memgraph
instance running, as it requires this instance to establish a
connection.
### Description
When we're loading documents using `ConfluenceLoader`:`load` function
and, if both `include_comments=True` and `keep_markdown_format=True`,
we're getting an error saying `NameError: free variable 'BeautifulSoup'
referenced before assignment in enclosing scope`.
```python
loader = ConfluenceLoader(url="URI", token="TOKEN")
documents = loader.load(
    space_key="SPACE",
    include_comments=True,
    keep_markdown_format=True,
)
```
This happens because the previous imports only considered the
`keep_markdown_format` parameter; however, including the comments also
uses `BeautifulSoup`.
Now it's fixed to handle all four scenarios, considering both
`include_comments` and `keep_markdown_format`.
### Twitter
`@SathinduGA`
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description: Allows the user of `ConfluenceLoader` to pass a
`requests.Session` object in lieu of an authentication mechanism
- Issue: None
- Dependencies: None
- Tag maintainer: @hwchase17
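A usage sketch (the exact parameter name is an assumption):
```python
import requests
from langchain.document_loaders import ConfluenceLoader

session = requests.Session()
session.headers.update({"Authorization": "Bearer <token>"})

loader = ConfluenceLoader(
    url="https://example.atlassian.net/wiki",
    session=session,  # assumed parameter name for the pre-authenticated session
)
```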
- Improved docs
- Improved performance in multiple ways through batching, threading,
etc.
- Fixed error message
- Added support for metadata filtering during similarity search.
@baskaryan PTAL
The package is linted with mypy, so its type hints are correct and
should be exposed publicly. Without this file, the type hints remain
private and cannot be used by downstream users of the package.
Trusted Publishing is the current best practice for publishing Python
packages. Rather than long-lived secret keys, it uses OpenID Connect
(OIDC) to allow our GitHub runner to directly authenticate itself to
PyPI and get a short-lived publishing token. This locks down publishing
quite a bit:
- There's no long-lived publish key to steal anymore.
- Publishing is *only* allowed via the *specifically designated* GitHub
workflow in the designated repo.
It also is operationally easier: no keys means there's nothing that
needs to be periodically rotated, nothing to worry about leaking, and
nobody can accidentally publish a release from their laptop because they
happened to have PyPI keys set up.
After this gets merged, we'll need to configure PyPI to start expecting
trusted publishing. It's only a few clicks and should only take a
minute; instructions are here:
https://docs.pypi.org/trusted-publishers/adding-a-publisher/
More info:
- https://blog.pypi.org/posts/2023-04-20-introducing-trusted-publishers/
- https://github.com/pypa/gh-action-pypi-publish
- Description: Updated marqo integration to use tensor_fields instead of
non_tensor_fields. Upgraded marqo version to 1.2.4
- Dependencies: marqo 1.2.4
---------
Co-authored-by: Raynor Kirkson E. Chavez <raynor.chavez@192.168.254.171>
Co-authored-by: Bagatur <baskaryan@gmail.com>
<!-- Thank you for contributing to LangChain!
Replace this entire comment with:
- Description: a description of the change,
- Issue: the issue # it fixes (if applicable),
- Dependencies: any dependencies required for this change,
- Tag maintainer: for a quicker response, tag the relevant maintainer
(see below),
- Twitter handle: we announce bigger features on Twitter. If your PR
gets announced and you'd like a mention, we'll gladly shout you out!
Please make sure your PR is passing linting and testing before
submitting. Run `make format`, `make lint` and `make test` to check this
locally.
See contribution guidelines for more information on how to write/run
tests, lint, etc:
https://github.com/hwchase17/langchain/blob/master/.github/CONTRIBUTING.md
If you're adding a new integration, please include:
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. These live is docs/extras
directory.
If no one reviews your PR within a few days, please @-mention one of
@baskaryan, @eyurtsev, @hwchase17, @rlancemartin.
-->
This is safer than the prior approach, since it's safe by default: the
release workflows never get triggered for non-merged PRs, so there's no
possibility of a buggy conditional accidentally letting a workflow
proceed when it shouldn't have.
The only loss is that publishing no longer requires a `release` label on
the merged PR that bumps the version. We can add a separate CI step that
enforces that part as a condition for merging into `master`, if
desirable.
I have discovered a bug located within `.github/workflows/_release.yml`
which is the primary cause of continuous integration (CI) errors. The
problem can be solved; therefore, I have constructed a PR to address the
issue.
## The Issue
Access the following link to view the exact errors: [LangChain Release
Workflow](https://github.com/langchain-ai/langchain/actions/workflows/langchain_release.yml)
The instances of these errors take place for **each PR** that updates
`pyproject.toml`, excluding those specifically associated with bumping
PRs.
See below for the specific error message:
```
Error: Error 422: Validation Failed: {"resource":"Release","code":"already_exists","field":"tag_name"}
```
The `_release.yml` document contains the following if-condition:
```yaml
if: |
${{ github.event.pull_request.merged == true }}
&& ${{ contains(github.event.pull_request.labels.*.name, 'release') }}
```
## The Root Cause
The above job constantly runs as the `if-condition` is always identified
as `true`.
## The Logic
The `if-condition` can be defined as `if: ${{ b1 }} && ${{ b2 }}`, where
`b1` and `b2` are boolean values. However, in terms of condition
evaluation with GitHub Actions, `${{ false }}` is identified as a string
value, thereby rendering it as truthy as per the [official
documentation](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idif).
I have run some tests regarding this behavior within my forked
repository. You can consult my [debug
PR](https://github.com/zawakin/langchain/pull/1) for reference.
Here is the result of the tests:
|If-Condition|Outcome|
|:--:|:--:|
|`if: true && ${{ false }}`|Execution|
|`if: ${{ false }}` |Skipped|
|`if: true && false` |Skipped|
|`if: false`|Skipped|
|`if: ${{ true && false }}` |Skipped|
In view of the first and second results, we can infer that `${{ false }}`
is interpreted as `true` only in conditions composed of multiple
expressions.
It is consistent with this that the condition `if: ${{
inputs.working-directory == 'libs/langchain' }}` works.
It is surprising that the second case is skipped, but that seems to be
the spec of GitHub Actions 😓
Anyway, this PR should fix these errors, I believe 👍
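Sketched, the fix is to evaluate the whole condition as a single expression, without `${{ }}` wrappers, so that `false` stays boolean:
```yaml
if: |
  github.event.pull_request.merged == true
  && contains(github.event.pull_request.labels.*.name, 'release')
```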
Could you review this? @hwchase17 or @shoelsch, who is the author of
the original [PR](https://github.com/langchain-ai/langchain/pull/360).
- Description: Changed metadata retrieval so that it combines Vectara
doc level and part level metadata
- Tag maintainer: @rlancemartin
- Twitter handle: @ofermend
Made a Notion document of how LangChain executes agents, method by
method, in the codebase.
It can be helpful for developers who have just started working with the
LangChain codebase.
The current Colab URL returns a 404, since there is no `chatbots`
directory under `use_cases`.
**Description**:
- Uniformed the current valid suffixes (file formats) for loading agents
from hubs and files (to better handle future additions);
- Clarified exception messages (also in unit test).
@rlancemartin The current implementation within the
`Geopandas.GeoDataFrame` loader uses the Python builtin `str()` function
on the input geometries. While this looks very close to WKT (Well-Known
Text), Python's `str` function doesn't guarantee that.
In the interest of interoperability, I've changed it to use the `wkt`
property on the Shapely geometries for generating the text
representation of the geometries.
Also, included here:
- validation of the input `page_content_column` as being a GeoSeries.
- geometry `crs` (Coordinate Reference System) / bounds
(xmin/ymin/xmax/ymax) added to Document metadata. Having the CRS is
critical... having the bounds is just helpful!
I think there is a larger question of "Should the geometry live in the
`page_content`, or should the record be better summarized and tuck the
geom into metadata?" ...something for another day and another PR.
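A small sketch of the difference:
```python
from shapely.geometry import Point

point = Point(1.0, 2.0)
# str() happens to look like WKT, but the format is not guaranteed.
print(str(point))
# The wkt property is guaranteed to produce Well-Known Text.
print(point.wkt)  # POINT (1 2)
```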
This is an extension of #8104. I updated some of the signatures so all
the tests pass.
@danhnn I couldn't commit to your PR, so I created a new one. Thanks for
your contribution!
@baskaryan Could you please merge it?
---------
Co-authored-by: Danh Nguyen <dnncntt@gmail.com>
### Summary
Fixes a bug from #7850 where post-processing functions in Unstructured
loaders were not applied. Adds an assertion to the test to verify the
post-processing function was applied, and also updates the explanation in
the example notebook.
Issue: https://github.com/langchain-ai/langchain/issues/9401
In async mode, the SequentialChain implementation seems to run the same
callbacks over and over, since it re-uses the same callbacks object.
Langchain version: 0.0.264, master
The implementation of this async route differs from the sync route; the
sync approach follows the right pattern of generating a new callbacks
object instead of re-using the old one, thus avoiding the cascading run
of callbacks at each step.
Async mode:
```
_run_manager = run_manager or AsyncCallbackManagerForChainRun.get_noop_manager()
callbacks = _run_manager.get_child()
...
for i, chain in enumerate(self.chains):
_input = await chain.arun(_input, callbacks=callbacks)
...
```
Regular mode:
```
_run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()
for i, chain in enumerate(self.chains):
_input = chain.run(_input, callbacks=_run_manager.get_child(f"step_{i+1}"))
...
```
Notice how we are reusing the callbacks object in the async code, which
will have a cascading effect as we run through the chain. It runs the
same callbacks over and over, resulting in issues.
Solution:
Define the async function in the same pattern as the regular one, and
added tests.
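A sketch of the corrected async loop, mirroring the sync pattern shown above:
```python
# Create a fresh child callbacks object per step instead of re-using one.
_run_manager = run_manager or AsyncCallbackManagerForChainRun.get_noop_manager()
for i, chain in enumerate(self.chains):
    _input = await chain.arun(
        _input, callbacks=_run_manager.get_child(f"step_{i+1}")
    )
```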
---------
Co-authored-by: vamsee_yarlagadda <vamsee.y@airbnb.com>
Ternary operators in GitHub Actions syntax are pretty ugly and hard to
read: `inputs.working-directory == '' && '.' ||
inputs.working-directory` means "if the condition is true, use `'.'` and
otherwise use the expression after the `||`".
This PR performs the ternary as few times as possible, assigning its
outcome to an env var we can then reuse as needed.
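Sketched, with illustrative names, the pattern looks like:
```yaml
env:
  WORKDIR: ${{ inputs.working-directory == '' && '.' || inputs.working-directory }}
steps:
  - run: make lint
    working-directory: ${{ env.WORKDIR }}
```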
Fix spelling errors in the text: 'Therefore' and 'Retrying'.
I want to stress that your feedback is invaluable to us and is genuinely
cherished.
With gratitude,
@baskaryan @hwchase17
Only lint on the min and max supported Python versions.
It's extremely unlikely that there's a lint issue on any version in
between that doesn't show up on the min or max versions.
GitHub rate-limits how many jobs can be running at any one time.
Starting new jobs is also relatively slow, so linting on fewer versions
makes CI faster.
📜
- updated the top-level descriptions to a consistent format;
- changed the format of several 100%-internal functions from "name" to
"_name", so these functions are not shown on the top-level API
Reference page (with lists of classes/functions)
Using `poetry add` to install `pydantic@2.1` was also causing poetry to
change its lockfile. This prevented dependency caching from working:
- When attempting to restore a cache, it would hash the lockfile in git
and use it as part of the cache key. Say this is a cache miss.
- Then, it would attempt to save the cache -- but the lockfile will have
changed, so the cache key would be *different* than the key in the
lookup. So the cache save would succeed, but to a key that cannot be
looked up in the next run -- meaning we never get a cache hit.
In addition to busting the cache, the lockfile update itself is also
non-trivially long, over 30s.
This PR fixes the problems by using `pip` to perform the installation,
avoiding the lockfile change.
Refactored code to ensure consistent handling of ImportError. Replaced
instances of raising ValueError with raising ImportError.
The choice of raising a ValueError here was somewhat unconventional and
might lead to confusion for anyone reading the code.
dealing with import-related errors, the recommended approach is to raise
an ImportError with a descriptive message explaining the issue. This
provides a clearer indication that the problem is related to importing
the required module.
@hwchase17 , @baskaryan , @eyurtsev
Thanks
Aashish
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
This PR fills in more missing type annotations on pydantic models.
It's OK if it missed some annotations; we just don't want it to get
annotations wrong at this stage.
I'll do a few more passes over the same files!
The previous caching configuration was attempting to cache poetry venvs
created in the default shared virtualenvs directory. However, all
langchain packages use `in-project = true` for their poetry virtualenv
setup, which moves the venv inside the package itself instead. This
meant that poetry venvs were not being cached at all.
This PR ensures that the venv gets cached by adding the in-project venv
directory to the cached directories list.
It also makes sure that the cache key *only* includes the lockfile being
installed, as opposed to *all lockfiles* (unnecessary cache misses) or
just the *top-level lockfile* (cache hits when it shouldn't).
Removed extra "the" in the sentence about the chicken crossing the road
in fallbacks.ipynb. The sentence now reads correctly: "Why did the
chicken cross the road?" This resolves the grammatical error and
improves the overall quality of the content.
@baskaryan , @hinthornw , @hwchase17
I want to extend my heartfelt gratitude to the creator for masterfully
crafting this remarkable application. 🙌 I am truly impressed by the
meticulous attention to grammar and spelling in the documentation, which
undoubtedly contributes to a polished and seamless reader experience.
As always, your feedback holds immense value and is greatly appreciated.
@baskaryan , @hwchase17
I want to convey my deep appreciation to the creator for their expert
craftsmanship in developing this exceptional application. 👏 The
remarkable dedication to upholding impeccable grammar and spelling in
the documentation significantly enhances the polished and seamless
experience for readers.
I want to stress that your feedback is invaluable to us and is genuinely
cherished.
With gratitude,
@baskaryan, @hwchase17
In this commit, I have made a modification to the term "Langchain" to
correctly reflect the project's name as "LangChain". This change ensures
consistency and accuracy throughout the codebase and documentation.
@baskaryan , @hwchase17
Refined the example in router.ipynb by addressing a minor typographical
error. The typo "rins" has been corrected to "rains" in the code snippet
that demonstrates the usage of the MultiPromptChain. This change ensures
accuracy and consistency in the provided code example.
This improvement enhances the readability and correctness of the
notebook, making it easier for users to understand and follow the
demonstration. The commit aims to maintain the quality and accuracy of
the content within the repository.
Thank you for your attention to detail, and please review the change at
your convenience.
@baskaryan , @hwchase17
This PR fixes the Airbyte loaders when doing incremental syncs. The
notebooks are calling out to access `loader.last_state` to get the
current state of incremental syncs, but this didn't work due to a
refactoring of how the loaders are structured internally in the original
PR.
This PR fixes the issue by adding a `last_state` property that forwards
the state correctly from the CDK adapter.
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description: Fix a minor variable naming inconsistency in a code
snippet in the docs
- Issue: N/A
- Dependencies: none
- Tag maintainer: N/A
- Twitter handle: N/A
## Type:
Improvement
---
## Description:
Running QAWithSourcesChain sometimes raises ValueError as mentioned in
issue #7184:
```
ValueError: too many values to unpack (expected 2)
Traceback:
response = qa({"question": pregunta}, return_only_outputs=True)
File "C:\Anaconda3\envs\iagen_3_10\lib\site-packages\langchain\chains\base.py", line 166, in __call__
raise e
File "C:\Anaconda3\envs\iagen_3_10\lib\site-packages\langchain\chains\base.py", line 160, in __call__
self._call(inputs, run_manager=run_manager)
File "C:\Anaconda3\envs\iagen_3_10\lib\site-packages\langchain\chains\qa_with_sources\base.py", line 132, in _call
answer, sources = re.split(r"SOURCES:\s", answer)
```
This is due to the LLM generating a subsequent question, answer, and
sources, i.e. a completion in a form similar to the one below:
```
<final_answer>
SOURCES: <sources>
QUESTION: <new_or_repeated_question>
FINAL ANSWER: <new_or_repeated_final_answer>
SOURCES: <new_or_repeated_sources>
```
It leads the following line
```
re.split(r"SOURCES:\s", answer)
```
to return more than 2 elements and results in the ValueError. The simple
fix is to split also on "QUESTION:\s" and take the first two elements:
```
answer, sources = re.split(r"SOURCES:\s|QUESTION:\s", answer)[:2]
```
Sometimes the LLM might also generate other text, like alternative
answers in the form:
```
<final_answer_1>
SOURCES: <sources>
<final_answer_2>
SOURCES: <sources>
<final_answer_3>
SOURCES: <sources>
```
In such cases it is best to split the previously obtained sources on a
newline:
```
sources = re.split(r"\n", sources.lstrip())[0]
```
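Putting both fixes together (a self-contained sketch with made-up answer text):
```python
import re

answer = (
    "The final answer.\n"
    "SOURCES: doc1\n"
    "QUESTION: a repeated question\n"
    "FINAL ANSWER: a repeated answer\n"
    "SOURCES: doc2\n"
)
# Split on SOURCES: or QUESTION: and keep only the first answer/sources pair.
answer, sources = re.split(r"SOURCES:\s|QUESTION:\s", answer)[:2]
# Keep only the first line of the sources block.
sources = re.split(r"\n", sources.lstrip())[0]
assert answer == "The final answer.\n"
assert sources == "doc1"
```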
---
## Issue:
Resolves #7184
---
## Maintainer:
@baskaryan
A quick change to allow the output key of `create_openai_fn_chain` to
optionally be changed.
@baskaryan
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description: Added improvements in Nebula LLM to perform auto-retry;
more generation parameters supported. Conversation is no longer required
to be passed in the LLM object. Examples are updated.
- Issue: N/A
- Dependencies: N/A
- Tag maintainer: @baskaryan
- Twitter handle: symbldotai
---------
Co-authored-by: toshishjawale <toshish@symbl.ai>
Update documentation and URLs for the Langchain Context integration.
We've moved from getcontext.ai to context.ai \o/
Thanks in advance for the review!
* PR updates test.yml to test with both pydantic versions
* Code should be refactored to make it easier to do testing in matrix
format w/ packages
* Added steps to assert that the pydantic version in the environment is
as expected
Now with the ElasticsearchStore VectorStore merged, I've added support
for the self-query retriever.
I've added a notebook also to demonstrate capability. I've also added
unit tests.
**Credit**
@elastic and @phoey1 on twitter.
# Poetry updates
This PR updates LangChain's poetry file to remove
any dependencies that aren't pydantic v2 compatible yet.
All packages remain usable under pydantic v1, and can be installed
separately.
## Bumping the following packages:
* langsmith
## Removing the following packages
not used in extended unit-tests:
* zep-python, anthropic, jina, spacy, steamship, betabageldb
not used at all:
* octoai-sdk
Cleaning up extras w/ for removed packages.
## Snapshots updated
Some snapshots had to be updated due to a change in the data model in
langsmith. RunType used to be Union of Enum and string and was changed
to be string only.
This PR adds serialization support for protocol buffers in
`WandbTracer`. This allows code generation chains to be visualized.
Additionally, it fixes a minor bug where the settings are not honored
when a run is initialized before using the `WandbTracer`.
@agola11
---------
Co-authored-by: Bharat Ramanathan <ramanathan.parameshwaran@gohuddl.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Todo:
- [x] Connection options (cloud, localhost url, es_connection) support
- [x] Logging support
- [x] Customisable field support
- [x] Distance Similarity support
- [x] Metadata support
- [x] Metadata Filter support
- [x] Retrieval Strategies
- [x] Approx
- [x] Approx with Hybrid
- [x] Exact
- [x] Custom
- [x] ELSER (excluding hybrid as we are working on RRF support)
- [x] integration tests
- [x] Documentation
👋 This is a contribution to improve the Elasticsearch integration with
LangChain. It's based loosely on the changes that are in master, but with
some notable changes:
## Package name & design improvements
The import name is now `ElasticsearchStore`, to aid discoverability of
the VectorStore.
```py
## Before
from langchain.vectorstores.elastic_vector_search import ElasticVectorSearch, ElasticKnnSearch
## Now
from langchain.vectorstores.elasticsearch import ElasticsearchStore
```
## Retrieval Strategy support
Before we had a number of classes, depending on the strategy you wanted.
`ElasticKnnSearch` for approx, `ElasticVectorSearch` for exact / brute
force.
With `ElasticsearchStore` we have retrieval strategies:
### Approx Example
The default strategy, for the vast majority of developers who use
Elasticsearch and infer the embeddings outside of Elasticsearch. Uses
the KNN functionality of `_search`.
```py
texts = ["foo", "bar", "baz"]
docsearch = ElasticsearchStore.from_texts(
    texts,
    FakeEmbeddings(),
    es_url="http://localhost:9200",
    index_name="sample-index",
)
output = docsearch.similarity_search("foo", k=1)
```
### Approx, with hybrid
Developers who want to search using both the embedding and the text BM25
match. It's simple to enable.
```py
texts = ["foo", "bar", "baz"]
docsearch = ElasticsearchStore.from_texts(
    texts,
    FakeEmbeddings(),
    es_url="http://localhost:9200",
    index_name="sample-index",
    strategy=ElasticsearchStore.ApproxRetrievalStrategy(hybrid=True),
)
output = docsearch.similarity_search("foo", k=1)
```
### Approx, with `query_model_id`
Developers who want to infer within Elasticsearch, using the model
loaded in the ML node.
This relies on the developer to set up the pipeline and index if they
wish to embed the text in Elasticsearch. There is an example of this in
the tests.
```py
texts = ["foo", "bar", "baz"]
docsearch = ElasticsearchStore.from_texts(
texts,
FakeEmbeddings(),
es_url="http://localhost:9200",
index_name="sample-index",
strategy=ElasticsearchStore.ApproxRetrievalStrategy(
query_model_id="sentence-transformers__all-minilm-l6-v2"
),
)
output = docsearch.similarity_search("foo", k=1)
```
### I want to provide my own custom Elasticsearch Query
You might want to have more control over the query, to perform
multi-phase retrieval such as LTR, or to linearly boost on document
parameters like recency or geo-distance. You can do this with the
`custom_query` argument:
```py
def my_custom_query(query_body: dict, query: str) -> dict:
    # Receives the generated query body and the raw query string; returns
    # the Elasticsearch query to run instead.
    return {"query": {"match": {"text": {"query": "bar"}}}}
texts = ["foo", "bar", "baz"]
docsearch = ElasticsearchStore.from_texts(
texts, FakeEmbeddings(), **elasticsearch_connection, index_name=index_name
)
docsearch.similarity_search("foo", k=1, custom_query=my_custom_query)
```
### Exact Example
For developers with a small dataset in Elasticsearch who don't want the cost
of indexing the dims, trading it for higher cost at query time. Uses
`script_score`.
```py
texts = ["foo", "bar", "baz"]
docsearch = ElasticsearchStore.from_texts(
texts,
FakeEmbeddings(),
es_url="http://localhost:9200",
index_name="sample-index",
strategy=ElasticsearchStore.ExactRetrievalStrategy(),
)
output = docsearch.similarity_search("foo", k=1)
```
### ELSER Example
Elastic provides its own sparse vector model called ELSER. With these
changes, it's really easy to use. The vector store creates a pipeline and
index that's set up for ELSER. All the developer needs to do is configure,
ingest, and query via LangChain tooling.
```py
texts = ["foo", "bar", "baz"]
docsearch = ElasticsearchStore.from_texts(
texts,
FakeEmbeddings(),
es_url="http://localhost:9200",
index_name="sample-index",
strategy=ElasticsearchStore.SparseVectorStrategy(),
)
output = docsearch.similarity_search("foo", k=1)
```
## Architecture
In the future, we can introduce new strategies without breaking backwards
compatibility as we evolve the index / query strategy.
## Credit
On release, could you credit @elastic and @phoey1 please? Thank you!
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Updated prompts for the MultiOn toolkit for better functionality
- Non-blocking, but good to have merged to improve the toolkit's overall
performance
@hinthornw @hwchase17
---------
Co-authored-by: Naman Garg <ngarg3@binghamton.edu>
Adds the ability to track LangChain usage for Rockset. Rockset's new Python
client allows setting this. To prevent old clients from failing, the
exception is ignored if setting it throws (we can't track old versions).
Tested locally with both old and new Rockset Python clients.
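To illustrate the defensive pattern, here's a minimal sketch; the `set_application` setter below is a hypothetical stand-in for whatever the new Rockset client exposes, not the actual SDK call:
```python
def try_enable_usage_tracking(client) -> None:
    # Best-effort: `set_application` is a hypothetical stand-in for the
    # new client's setter; old clients that lack it simply skip tracking.
    try:
        client.set_application("langchain")
    except Exception:
        pass  # can't track old versions, so fail silently
```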
cc @baskaryan
2 things:
- Implement the private method rather than the public one so callbacks
are handled properly
- Add search_kwargs (open to not adding this if we are trying to
deprecate this UX, but as a user I'd assume similar args to
the vector store retriever. In fact, some may assume this implements the
same interface, but I'm not dealing with that here)
First of a few PRs to add full compatibility to both pydantic v1 and v2.
This PR creates pydantic v1 namespace and adds it to sys.modules.
Upcoming changes:
1. Handle `openapi-schema-pydantic = "^1.2"` and dependent chains/tools
2. bump dependencies to versions that are cross compatible for pydantic
or remove them (see below)
3. Add tests to github workflows to test with pydantic v1 and v2
**Dependencies**
From a quick look (could be wrong since this was done manually)
**dependencies pinning pydantic below 2** (some of these can be bumped
to newer versions that provide cross-compatible code)
anthropic
bentoml
confection
fastapi
langsmith
octoai-sdk
openapi-schema-pydantic
qdrant-client
spacy
steamship
thinc
zep-python
Unpinned
marqo (*)
nomic (*)
xinference (*)
## Description:
Sets default values for `client` and `model` attributes in the
BaseOpenAI class to fix Pylance Typing issue.
- Issue: #9182.
- Twitter handle: @evanmschultz
# Added SmartGPT workflow by providing SmartLLM wrapper around LLMs
Edit:
As @hwchase17 suggested, this should be a chain, not an LLM. I have
adapted the PR.
It is used like this:
```
from langchain.prompts import PromptTemplate
from langchain.chains import SmartLLMChain
from langchain.chat_models import ChatOpenAI
hard_question = "I have a 12 liter jug and a 6 liter jug. I want to measure 6 liters. How do I do it?"
llm = ChatOpenAI(model_name="gpt-4")
prompt = PromptTemplate.from_template(hard_question)
chain = SmartLLMChain(llm=llm, prompt=prompt, verbose=True)
chain.run({})
```
Original text:
Added a SmartLLM wrapper around LLMs to allow for the SmartGPT workflow (as in
https://youtu.be/wVzuvf9D9BU). SmartLLM can be used wherever an LLM can be
used, e.g.:
```
smart_llm = SmartLLM(llm=OpenAI())
smart_llm("What would be a good company name for a company that makes colorful socks?")
```
or
```
smart_llm = SmartLLM(llm=OpenAI())
prompt = PromptTemplate(
input_variables=["product"],
template="What is a good name for a company that makes {product}?",
)
chain = LLMChain(llm=smart_llm, prompt=prompt)
chain.run("colorful socks")
```
SmartGPT consists of 3 steps:
1. Ideate - generate n possible solutions ("ideas") to user prompt
2. Critique - find flaws in every idea & select best one
3. Resolve - improve upon best idea & return it
Fixes #4463
## Who can review?
Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:
- @hwchase17
- @agola11
Twitter: [@UmerHAdil](https://twitter.com/@UmerHAdil) | Discord:
RicChilligerDude#7589
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
# Ensure deployment_id is set to the provided deployment, as required for
Azure OpenAI.
---------
Co-authored-by: Lucas Pickup <lupickup@microsoft.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
This commit adds a LangChain utility which allows for the real-time
retrieval of cryptocurrency exchange prices. With LangChain, users can
easily access up-to-date pricing information by running
`.run(from_currency, to_currency)`. This new feature provides a
convenient way to stay informed on the latest exchange rates and make
informed decisions when trading crypto.
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description: Adds the ArcGISLoader class to
`langchain.document_loaders`
- Allows users to load data from ArcGIS Online, Portal, and similar
- Users can authenticate with `arcgis.gis.GIS` or retrieve public data
anonymously
- Uses the `arcgis.features.FeatureLayer` class to retrieve the data
- Defines the most relevant keyword arguments and accepts `**kwargs` (see the usage sketch below)
- Dependencies: Using this class requires `arcgis` and, optionally,
`bs4.BeautifulSoup`.
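A minimal usage sketch (the layer URL is a placeholder, and the exact keyword arguments may differ):
```python
from langchain.document_loaders import ArcGISLoader

# A public feature layer URL (placeholder); public data needs no authentication.
url = "https://services.arcgis.com/example/arcgis/rest/services/Layer/FeatureServer/0"

loader = ArcGISLoader(url)  # optionally pass gis=arcgis.gis.GIS(...) for auth
docs = loader.load()
```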
Tagging maintainers:
- DataLoaders / VectorStores / Retrievers: @rlancemartin, @eyurtsev
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Formatted docstrings from different formats to a consistent format, like:
>Loads processed docs from Docugami.
"Load from `Docugami`."
>Loader that uses Unstructured to load HTML files.
"Load `HTML` files using `Unstructured`."
>Load documents from a directory.
"Load from a directory."
- `Load` - not `Loads`
- A DocumentLoader always loads Documents, so no more
"documents/docs/texts/etc."
- Integrated systems and APIs are enclosed in backticks.
Updated interactive walkthrough link in index.md to resolve 404 error.
Also, expressing deep gratitude to LangChain library developers for
their exceptional efforts 🥇 .
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
As stated in the title, the SVM retriever discarded the metadata of
passed-in docs. This code fixes that, and I also added a unit test
for it.
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description: Added a new use case category called "Web Scraping", and
a tutorial to scrape websites using OpenAI Functions Extraction chain to
the docs.
- Tag maintainer: @baskaryan @hwchase17
- Twitter handle: https://www.linkedin.com/in/haiphunghiem/ (I'm on
LinkedIn mostly)
---------
Co-authored-by: Lance Martin <lance@langchain.dev>
This change updates the central utility class to recognize a Redis
cluster server after connecting, and returns a new cluster-aware Redis
client. The "normal" Redis client would not be able to talk to a cluster
node because keys might be stored on other shards of the Redis cluster
and would therefore not be readable or writable.
With this patch, clients do not need to know which kind of Redis server it is;
they just connect through the same API calls for both standalone and cluster
servers.
There are no dependencies added due to this MR.
Remark: with the current redis-py client library (4.6.0), a cluster cannot
be used as a VectorStore. It can be used for other use cases. There is a
bug / missing feature(?) in the Redis client breaking the VectorStore
implementation; I opened an issue with the client library
(redis/redis-py#2888) to fix this. As soon as it is fixed in the
`redis-py` library, it should be usable there too.
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
This PR introduces [Label Studio](https://labelstud.io/) integration
with LangChain via `LabelStudioCallbackHandler`:
- sending data to the Label Studio instance
- labeling dataset for supervised LLM finetuning
- rating model responses
- tracking and displaying chat history
- support for custom data labeling workflow
### Example
```
chat_llm = ChatOpenAI(callbacks=[LabelStudioCallbackHandler(mode="chat")])
chat_llm([
SystemMessage(content="Always use emojis in your responses."),
HumanMessage(content="Hey AI, how's your day going?"),
AIMessage(content="🤖 I don't have feelings, but I'm running smoothly! How can I help you today?"),
HumanMessage(content="I'm feeling a bit down. Any advice?"),
AIMessage(content="🤗 I'm sorry to hear that. Remember, it's okay to seek help or talk to someone if you need to. 💬"),
HumanMessage(content="Can you tell me a joke to lighten the mood?"),
AIMessage(content="Of course! 🎭 Why did the scarecrow win an award? Because he was outstanding in his field! 🌾"),
HumanMessage(content="Haha, that was a good one! Thanks for cheering me up."),
AIMessage(content="Always here to help! 😊 If you need anything else, just let me know."),
HumanMessage(content="Will do! By the way, can you recommend a good movie?"),
])
```
<img width="906" alt="image"
src="https://github.com/langchain-ai/langchain/assets/6087484/0a1cf559-0bd3-4250-ad96-6e71dbb1d2f3">
### Dependencies
- [label-studio](https://pypi.org/project/label-studio/)
- [label-studio-sdk](https://pypi.org/project/label-studio-sdk/)
https://twitter.com/labelstudiohq
---------
Co-authored-by: nik <nik@heartex.net>
As of the recent PR at #9043, after some testing we've realised that the
default values were not being used for `api_key` and `api_url`. Besides
that, the default for `api_key` was set to `argilla.apikey`, but since
the default values are intended for people using the Argilla Quickstart
(easy to run and set up), the defaults should instead be `owner.apikey`
if using Argilla 1.11.0 or higher, or `admin.apikey` if using a lower
version of Argilla.
Additionally, we've removed the f-string replacements from the
docstrings.
---------
Co-authored-by: Gabriel Martin <gabriel@argilla.io>
This MR corrects the IndexError arising in the prep_prompts method when no
documents are returned from a similarity search.
Fixes #1733
Co-authored-by: Sam Groenjes <sam.groenjes@darkwolfsolutions.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
The second section looks like a copy/paste of the first section and
doesn't include the specific embedding model mentioned in the example, so
I added it for clarity.
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
### Description:
`ConversationBufferTokenMemory` should have a simple way of returning
the conversation messages as a string.
Previously, to accomplish this, your only options were to return the
memory as an array through the buffer method and call
`get_buffer_string` (imported from `langchain.schema`), or to use the
`load_memory_variables` method and key into `self.memory_key`.
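A rough sketch of that old workaround (assuming `llm` is already defined; the memory class is exported as `ConversationTokenBufferMemory` from `langchain.memory`):
```python
from langchain.memory import ConversationTokenBufferMemory
from langchain.schema import get_buffer_string

memory = ConversationTokenBufferMemory(llm=llm, max_token_limit=1000)
memory.save_context({"input": "hi"}, {"output": "hello!"})

# Old workaround: pull the message list off the buffer, then stringify it.
history = get_buffer_string(memory.buffer)
```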
### Maintainer
@hwchase17
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Now that we accept any runnable or arbitrary function to evaluate, we
don't always look up the input keys. If an evaluator requires
references, we should try to infer if there's one key present. We only
have delayed validation here, but it's better than nothing.
The table creation commands in these examples do not match what the
recently updated functions expect. This change updates the type in the
table creation command.
Issue number for my report of the doc problem: #7446
@rlancemartin and @eyurtsev I believe this is your area
Twitter: @j1philli
Co-authored-by: Bagatur <baskaryan@gmail.com>
- **Description**: [BagelDB](bageldb.ai) is a collaborative vector
database. Integrated the BagelDB PyPI package with LangChain, with
related tests and code.
- **Issue**: Not applicable.
- **Dependencies**: `betabageldb` PyPi package.
- **Tag maintainer**: @rlancemartin, @eyurtsev, @baskaryan
- **Twitter handle**: bageldb_ai (https://twitter.com/BagelDB_ai)
We ran `make format`, `make lint` and `make test` locally, and followed
the contribution guidelines thoroughly:
https://github.com/hwchase17/langchain/blob/master/.github/CONTRIBUTING.md
---------
Co-authored-by: Towhid1 <nurulaktertowhid@gmail.com>
Description: Updated BabyAGI examples and experimental to append the
iteration to the result id, fixing an error when storing data to the vectorstore.
Issue: 7445
Dependencies: no
Tag maintainer: @eyurtsev
This fix worked for me locally. Happy to take some feedback and iterate
on a better solution. I was considering appending a uuid instead but
didn't want to overcomplicate the example.
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Add convenience methods to `ConversationBufferMemory` and
`ConversationBufferWindowMemory` to get buffer either as messages or as
string.
Helps when `return_messages` is set to `True` but you want access to the
messages as a string, and vice versa.
@hwchase17
One use case: Using a `MultiPromptRouter` where `default_chain` is
`ConversationChain`, but destination chains are `LLMChains`. Injecting
chat memory into prompts for destination chains prints a stringified
`List[Messages]` in the prompt, which creates a lot of noise. These
convenience methods allow the caller to choose either as needed.
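A sketch of the intended usage; the `buffer_as_messages` / `buffer_as_str` property names are assumptions based on the description above:
```python
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(return_messages=True)
memory.save_context({"input": "hi"}, {"output": "hello!"})

messages = memory.buffer_as_messages  # list of message objects
text = memory.buffer_as_str           # the same history as one string
```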
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Description: Due to an issue with the test, this is a separate PR
containing the test for #8502.
Tag maintainer: @rlancemartin
---------
Co-authored-by: Lance Martin <lance@langchain.dev>
Co-authored-by: Bagatur <baskaryan@gmail.com>
The current regex only extracts the agent's action from plain
triple-backtick fenced blocks; this commit also extracts it from blocks
tagged with a json language hint (see the sketch below).
This is very similar to #7511
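A sketch of a pattern that matches both fenced variants (illustrative only, not the exact regex used in the codebase):
```python
import re

# Matches a fenced block with an optional "json" language tag.
ACTION_RE = re.compile(r"```(?:json)?\s*(.*?)\s*```", re.DOTALL)

text = 'Thought: ...\n```json\n{"action": "Final Answer"}\n```'
match = ACTION_RE.search(text)
if match:
    payload = match.group(1)  # the JSON between the fences
```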
Co-authored-by: zjl <junlinzhou@yzbigdata.com>
## Description
This PR adds the `aembed_query` and `aembed_documents` async methods to
improve embedding generation for large documents. The implementation
uses asyncio tasks and `gather` to achieve concurrency, as boto3 has no
async Bedrock API.
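A standalone sketch of the concurrency pattern (the real methods live on the Bedrock embeddings class; this just shows the shape):
```python
import asyncio
from typing import Callable, List

async def aembed_documents(
    embed_one: Callable[[str], List[float]], texts: List[str]
) -> List[List[float]]:
    # boto3 exposes no async Bedrock API, so run each sync embedding call
    # in the default thread-pool executor and gather the results concurrently.
    loop = asyncio.get_running_loop()
    tasks = [loop.run_in_executor(None, embed_one, text) for text in texts]
    return await asyncio.gather(*tasks)
```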
### Maintainers
@agola11
@aarora79
### Open questions
To avoid throttling from the Bedrock API, should there be an option to
limit the concurrency of the calls?
I was initially confused whether to use create_vectorstore_agent or
create_vectorstore_router_agent due to a lack of documentation, so I
created simple documentation for each function about their different use
cases.
- Description: Added docstrings in create_vectorstore_agent and
create_vectorstore_router_agent to point out the difference in their use
cases
- Tag maintainer: @rlancemartin, @eyurtsev
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Hi @agola11, or whoever is reviewing this PR 😄
## What's in this PR?
As of the latest Argilla release, we'll change and refactor some things
to make some workflows easier. One of those is how everything is pushed
to Argilla: there is now no need to call `push_to_argilla` again over a
`FeedbackDataset` once `push_to_argilla` has been called for the first
time or `from_argilla` has been called; among other changes.
We also add some class variables so they're easy to update if we change
them internally in the future, and to make the `warnings.warn` message
lighter in the code view.
P.S. Regarding the Twitter/X mention feel free to do so at either
https://twitter.com/argilla_io or https://twitter.com/alvarobartt, or
both if applicable, otherwise, just the first Twitter/X handle.
## Description:
This PR adds the Titan Takeoff Server to the available LLMs in
LangChain.
Titan Takeoff is an inference server created by
[TitanML](https://www.titanml.co/) that allows you to deploy large
language models locally on your hardware in a single command. Most
generative model architectures are included, such as Falcon, Llama 2,
GPT2, T5 and many more.
Read more about Titan Takeoff here:
-
[Blog](https://medium.com/@TitanML/introducing-titan-takeoff-6c30e55a8e1e)
- [Docs](https://docs.titanml.co/docs/titan-takeoff/getting-started)
#### Testing
As Titan Takeoff runs locally on port 8000 by default, no network access
is needed. Responses are mocked for testing.
- [x] Make Lint
- [x] Make Format
- [x] Make Test
#### Dependencies
No new dependencies are introduced. However, users will need to install
the titan-iris package in their local environment and start the Titan
Takeoff inferencing server in order to use the Titan Takeoff
integration.
Thanks for your help and please let me know if you have any questions.
cc: @hwchase17 @baskaryan
Expressing gratitude to the creator for crafting this remarkable
application 🙌. This PR enhances grammar and spelling in the
documentation for a polished reader experience.
Your feedback is valuable as always.
@baskaryan , @hwchase17 , @eyurtsev
- Description: Fixes an issue with Metaphor Search Tool throwing when
missing keys in API response.
- Issue: #9048
- Tag maintainer: @hinthornw @hwchase17
- Twitter handle: @pelaseyed
This PR adds the ability to temporarily cache or persistently store
embeddings.
A notebook has been included showing how to set up the cache and how to
use it with a vectorstore.
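A minimal sketch, assuming OpenAI embeddings and a FAISS vectorstore (either can be swapped):
```python
from langchain.embeddings import CacheBackedEmbeddings, OpenAIEmbeddings
from langchain.storage import LocalFileStore
from langchain.vectorstores import FAISS

store = LocalFileStore("./embedding_cache/")
underlying = OpenAIEmbeddings()
cached_embedder = CacheBackedEmbeddings.from_bytes_store(
    underlying, store, namespace=underlying.model
)

# Re-indexing the same texts now hits the cache instead of the API.
db = FAISS.from_texts(["hello world"], cached_embedder)
```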
- Description: Improvements to the Grobid loader documentation: typo
fixes and a suggestion to use the Docker image instead of installing
Grobid locally (the documentation was also limited to Mac, while Docker
allows running on any platform)
- Tag maintainer: @rlancemartin, @eyurtsev
- Twitter handle: @whitenoise
FileCallbackHandler cannot handle some languages, for example Chinese.
Opening the file with UTF-8 encoding fixes it.
@agola11
**Issue**: #6919
**Dependencies**: no dependencies
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
DirectoryLoader can now return a random sample of files in a directory.
Parameters added are:
- `sample_size`
- `randomize_sample`
- `sample_seed`
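For example, to load a reproducible random sample:
```python
from langchain.document_loaders import DirectoryLoader

# Load a reproducible random sample of 10 markdown files from the directory.
loader = DirectoryLoader(
    "./data/",
    glob="**/*.md",
    sample_size=10,
    randomize_sample=True,
    sample_seed=42,
)
docs = loader.load()
```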
@rlancemartin, @eyurtsev
---------
Co-authored-by: Andrew Oseen <amovfx@protonmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description: Allow GoogleDriveLoader to handle empty spreadsheets
- Issue: Currently GoogleDriveLoader will crash if it tries to load a
spreadsheet with an empty sheet
- Dependencies: n/a
- Tag maintainer: @rlancemartin, @eyurtsev
This pull request aims to ensure that the `OpenAICallbackHandler` can
properly calculate the total cost for Azure OpenAI chat models. The
following changes have resolved this issue:
- The `model_name` has been added to the ChatResult llm_output. Without
this, the default values of `gpt-35-turbo` were applied. This was
causing the total cost for Azure OpenAI's GPT-4 to be significantly
inaccurate.
- A new parameter `model_version` has been added to `AzureChatOpenAI`.
Azure does not include the model version in the response. With the
addition of `model_name`, this is not a significant issue for GPT-4
models, but it's an issue for GPT-3.5-Turbo. Version 0301 (default) of
GPT-3.5-Turbo on Azure has a flat rate of $0.002 per 1k tokens for both
prompt and completion. However, version 0613 introduced a split in
pricing for prompt and completion tokens.
- The `OpenAICallbackHandler` implementation has been updated with the
proper model names, versions, and cost per 1k tokens.
Unit tests have been added to ensure the functionality works as
expected; the Azure ChatOpenAI notebook has been updated with examples.
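A sketch of how the new parameter is meant to be used (the deployment name is a placeholder):
```python
from langchain.callbacks import get_openai_callback
from langchain.chat_models import AzureChatOpenAI

llm = AzureChatOpenAI(
    deployment_name="gpt-35-turbo",  # your Azure deployment name
    model_version="0613",            # new parameter: selects the correct pricing
)
with get_openai_callback() as cb:
    llm.predict("Hello!")
    print(cb.total_cost)  # now reflects 0613 prompt/completion pricing
```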
Maintainers: @hwchase17, @baskaryan
Twitter handle: @jjczopek
---------
Co-authored-by: Jerzy Czopek <jerzy.czopek@avanade.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
<!-- Thank you for contributing to LangChain!
Replace this comment with:
- Description: a description of the change,
- Issue: the issue # it fixes (if applicable),
- Dependencies: any dependencies required for this change,
- Tag maintainer: for a quicker response, tag the relevant maintainer
(see below),
- Twitter handle: we announce bigger features on Twitter. If your PR
gets announced and you'd like a mention, we'll gladly shout you out!
Please make sure you're PR is passing linting and testing before
submitting. Run `make format`, `make lint` and `make test` to check this
locally.
If you're adding a new integration, please include:
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use.
Maintainer responsibilities:
- General / Misc / if you don't know who to tag: @baskaryan
- DataLoaders / VectorStores / Retrievers: @rlancemartin, @eyurtsev
- Models / Prompts: @hwchase17, @baskaryan
- Memory: @hwchase17
- Agents / Tools / Toolkits: @hinthornw
- Tracing / Callbacks: @agola11
- Async: @agola11
If no one reviews your PR within a few days, feel free to @-mention the
same people again.
See contribution guidelines for more information on how to write/run
tests, lint, etc:
https://github.com/hwchase17/langchain/blob/master/.github/CONTRIBUTING.md
-->
---------
Co-authored-by: jacoblee93 <jacoblee93@gmail.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description: Instructions for integrating with Log10: an [open
source](https://github.com/log10-io/log10) proxiless LLM data management
and application development platform that lets you log, debug and tag
your LangChain calls
- Tag maintainer: @baskaryan
- Twitter handle: @log10io @coffeephoenix
Several examples showing the integration included
[here](https://github.com/log10-io/log10/tree/main/examples/logging) and
in the PR
Description: Adds Rockset as a chat history store
Dependencies: no changes
Tag maintainer: @hwchase17
This PR passes linting and testing.
I added a test for the integration and an example notebook showing its
use.
This PR adds 8 new loaders:
* `AirbyteCDKLoader`: this reader can wrap and run all Python-based
Airbyte source connectors.
* Separate loaders for the most commonly used APIs:
* `AirbyteGongLoader`
* `AirbyteHubspotLoader`
* `AirbyteSalesforceLoader`
* `AirbyteShopifyLoader`
* `AirbyteStripeLoader`
* `AirbyteTypeformLoader`
* `AirbyteZendeskSupportLoader`
## Documentation and getting started
I added the basic shape of the config to the notebooks. This increases
the maintenance effort a bit, but I think it's worth it to make sure
people can get started quickly with these important connectors. This is
also why I linked the spec and the documentation page in the readme as
these two contain all the information to configure a source correctly
(e.g. it won't suggest using oauth if that's avoidable even if the
connector supports it).
## Document generation
The "documents" produced by these loaders won't have a text part
(instead, all the record fields are put into the metadata). If a text is
required by the use case, the caller needs to do custom transformation
suitable for their use case.
## Incremental sync
All loaders support incremental syncs if the underlying streams support
it. By storing the `last_state` from the reader instance and passing it
back in when loading, only updated records will be loaded, as sketched below.
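A sketch of that flow (the config and stream name are placeholders, and passing `state` as a constructor argument is an assumption based on the description):
```python
from langchain.document_loaders.airbyte import AirbyteStripeLoader

config = {"api_key": "...", "start_date": "2023-01-01T00:00:00Z"}  # per the spec
loader = AirbyteStripeLoader(config=config, stream_name="invoices")
docs = loader.load()

state = loader.last_state  # persist this somewhere

# Later: pass the saved state back in to only load updated records.
incremental = AirbyteStripeLoader(config=config, stream_name="invoices", state=state)
new_docs = incremental.load()
```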
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
This PR defines an abstract interface for key value stores.
It provides 2 implementations:
1. Local File System
2. In memory -- used to facilitate testing
It also provides an encoder utility to help take care of serialization
from arbitrary data to data that can be stored by the given store.
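A sketch of the interface using the in-memory implementation (the `mget`/`mset`/`mdelete`/`yield_keys` method names follow the batch-style key-value convention and should be treated as assumptions here):
```python
from langchain.storage import InMemoryStore

store = InMemoryStore()
store.mset([("key1", b"value1"), ("key2", b"value2")])

print(store.mget(["key1", "missing"]))  # [b'value1', None]
store.mdelete(["key1"])
print(list(store.yield_keys()))         # ['key2']
```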
Proposal for an internal API to deprecate LangChain code.
This PR is heavily based on:
https://github.com/matplotlib/matplotlib/blob/main/lib/matplotlib/_api/deprecation.py
This PR only includes deprecation functionality (no renaming etc.).
Additional functionality can be added on an as-needed basis (e.g.,
renaming parameters), but it's best to roll this out as an MVP first to
test it out.
DeprecationWarnings are ignored by default. We can change the policy for
the deprecation warnings, but we'll need to make sure we're not creating
noise for users due to internal code invoking deprecated functionality.
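A sketch of what such a decorator typically looks like, modeled on matplotlib's approach; the names and arguments are illustrative, not the final LangChain API:
```python
import functools
import warnings

def deprecated(since: str, alternative: str = ""):
    # Illustrative decorator: wraps a callable and emits a DeprecationWarning.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            msg = f"{func.__name__} is deprecated since {since}."
            if alternative:
                msg += f" Use {alternative} instead."
            # DeprecationWarnings are ignored by default, as noted above.
            warnings.warn(msg, DeprecationWarning, stacklevel=2)
            return func(*args, **kwargs)
        return wrapper
    return decorator

@deprecated(since="0.0.200", alternative="new_helper")
def old_helper():
    pass
```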
- Description: consistent timeout at 60s for all calls to Vectara API
- Tag maintainer: @rlancemartin, @eyurtsev
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description: Improved querying of BGE embeddings after talking with the
BGE embeddings devs
- Tag maintainer: @hwchase17
- Twitter handle: @ManabChetia3
---------
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
- Description: added filter to query methods in VectorStoreIndexWrapper
for filtering by metadata (i.e. search_kwargs)
- Tag maintainer: @rlancemartin, @eyurtsev
Updated the doc snippet on this topic as well. It took me a long while
to figure out how to filter the vectorstore by filename, so this might
help someone else out.
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description: I have added an example showing how to pass a custom
template to `ConversationalRetrievalChain`. Instead of
`CONDENSE_QUESTION_PROMPT`, we can pass any prompt via the
`condense_question_prompt` argument (see the sketch below). Look in Use
cases -> QA over Documents -> How to -> Store and reference chat history.
- Issue: #8864
- Dependencies: NA
- Tag maintainer: @hinthornw
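A sketch of the pattern (`llm` and `retriever` are assumed to be defined elsewhere):
```python
from langchain.chains import ConversationalRetrievalChain
from langchain.prompts import PromptTemplate

template = """Given the following conversation and a follow up question,
rephrase the follow up question to be a standalone question.

Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:"""
CUSTOM_QUESTION_PROMPT = PromptTemplate.from_template(template)

chain = ConversationalRetrievalChain.from_llm(
    llm, retriever, condense_question_prompt=CUSTOM_QUESTION_PROMPT
)
```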
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
This addresses some issues with introducing the Nebula LLM to LangChain
in this PR:
https://github.com/langchain-ai/langchain/pull/8876
This fixes the following:
- Removes `SYMBLAI` from variable names
- Fixes bug with `Bearer` for the API KEY
Thanks again in advance for your help!
cc: @hwchase17, @baskaryan
---------
Co-authored-by: dvonthenen <david.vonthenen@gmail.com>
### Description
Now, we can pass information like a JWT token using user_context:
```python
self.retriever = AmazonKendraRetriever(index_id=kendraIndexId, user_context={"Token": jwt_token})
```
- [x] `make lint`
- [x] `make format`
- [x] `make test`
Also tested by pip installing in my own project, and it allows access
through the token.
### Maintainers
@rlancemartin, @eyurtsev
### My twitter handle
[girlknowstech](https://twitter.com/girlknowstech)
Minor doc fix to awslambda tool notebook.
Add missing import for initialize_agent to awslambda agent example
Co-authored-by: Josh Hart <josharj@amazon.com>
- Description: The API doc passed to the LLM only included the content of
responses but not the content of the requestBody, so the agent was
unable to construct the correct request parameters based on the
requestBody information. Adding two lines of code fixed the bug.
- Tag maintainer: @hinthornw
Description:
Fixed an inaccurate import in the integrations:providers:bedrock documentation.
The current version of the Bedrock documentation page
https://python.langchain.com/docs/integrations/providers/bedrock states
that the import is `from langchain import Bedrock`.
This has been changed to `from langchain.llms.bedrock import Bedrock`, as
stated in https://python.langchain.com/docs/integrations/llms/bedrock.
Issue:
Not applicable
Dependencies
No dependencies required
Tag maintainer
@baskaryan
Twitter handle:
Not applicable
Adds Ollama as an LLM. Ollama can run various open-source models locally,
e.g. Llama 2 and Vicuna, automatically configuring and GPU-optimizing
them.
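A minimal usage sketch (assumes a local Ollama server is running with the model already pulled):
```python
from langchain.llms import Ollama

llm = Ollama(model="llama2")  # talks to the local Ollama server
print(llm("Why is the sky blue?"))
```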
@rlancemartin @hwchase17
---------
Co-authored-by: Lance Martin <lance@langchain.dev>
## Description
I am excited to propose an integration with USearch, a lightweight
vector-search engine available for both Python and JavaScript, among
other languages.
## Dependencies
It introduces a new PyPI dependency, `usearch`. I am unsure if it must
be added to the Poetry file, as this would make the PR too clunky.
Please let me know.
## Profiles
- Maintainers: @ashvardanian @davvard
- Twitter handles: @ashvardanian @unum_cloud
---------
Co-authored-by: Davit Vardanyan <78792753+davvard@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
- fix install command
- change example notebook to use Metaphor autoprompt by default
Update to #8528
Newlines and other special characters within markdown code blocks
returned as `action_input` should be handled correctly (in particular,
unescaped `"` => `\"` and `\n` => `\\n`) so they don't break JSON
parsing.
@baskaryan
When downloading a sitemap with a malformed URL (e.g.
"ttp://example.com/index.html", with the h omitted at the beginning of
the URL), this ensures that the sitemap download does not crash but
just emits a warning. (This could perhaps be made optional with e.g. a
`skip_faulty_urls: bool = True` parameter, but this was the most
straightforward fix.)
@rlancemartin, @eyurtsev
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Added async parsing functions for RetryOutputParser,
RetryWithErrorOutputParser and OutputFixingParser.
The async parse functions call the arun methods of the used LLMChains.
Fix for #7989
---------
Co-authored-by: Benjamin May <benjamin.may94@gmail.com>
- Description: Adds the ChatAnyscale class with llama-2 7b, llama-2 13b,
and llama-2 70b on [Anyscale
Endpoints](https://app.endpoints.anyscale.com/)
- It inherits from ChatOpenAI and requires openai (probably unnecessary
but it made for a quick and easy implementation)
- Inspired by https://github.com/langchain-ai/langchain/pull/8434
(@kylehh and @baskaryan )
## Description
This PR adds Nebula to the available LLMs in LangChain.
Nebula is an LLM focused on conversation understanding and enables users
to extract conversation insights from video, audio, text, and chat-based
conversations. These conversations can occur between any mix of human or
AI participants.
Examples of some questions you could ask Nebula from a given
conversation are:
- What could be the customer’s pain points based on the conversation?
- What sales opportunities can be identified from this conversation?
- What best practices can be derived from this conversation for future
customer interactions?
You can read more about Nebula here:
https://symbl.ai/blog/extract-insights-symbl-ai-generative-ai-recall-ai-meetings/
#### Integration Test
An integration test is added, but it requires network access. Since
Nebula is fully managed like OpenAI, network access is required to
exercise the integration test.
#### Linting
- [x] make lint
- [x] make test (TODO: there seems to be a failure in another
unrelated test. Need to check on this.)
- [x] make format
### Dependencies
No new dependencies were introduced.
### Twitter handle
[@symbldotai](https://twitter.com/symbldotai)
[@dvonthenen](https://twitter.com/dvonthenen)
If you have any questions, please let me know.
cc: @hwchase17, @baskaryan
---------
Co-authored-by: dvonthenen <david.vonthenen@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
# What
- fix evaluation parse test
- Issue: None
- Dependencies: None
- Tag maintainer: @baskaryan
- Twitter handle: @MLOpsJ
- Description: Fix/abstract add message
- Issue: None
- Dependencies: None
- Twitter handle: @MLOpsJ
Long-term, it would be better to use the lower-level `batch()` method(s),
but it may take me a bit longer to clean that up. This unblocks things in
the meantime, though it may fail when the evaluated chain raises a
`NotImplementedError` for a corresponding async method.
This adds support for [Xata](https://xata.io) (data platform based on
Postgres) as a vector store. We have recently added [Xata to
Langchain.js](https://github.com/hwchase17/langchainjs/pull/2125) and
would love to have the equivalent in the Python project as well.
The PR includes integration tests and a Jupyter notebook as docs. Please
let me know if anything else would be needed or helpful.
I have added the xata python SDK as an optional dependency.
## To run the integration tests
You will need to create a DB in xata (see the docs), then run something
like:
```
OPENAI_API_KEY=sk-... XATA_API_KEY=xau_... XATA_DB_URL='https://....xata.sh/db/langchain' poetry run pytest tests/integration_tests/vectorstores/test_xata.py
```
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Philip Krauss <35487337+philkra@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
#7469
Since 1.29.0, the Vertex SDK supports providing chat history to a Codey
chat model.
Co-authored-by: Leonid Kuligin <kuligin@google.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Hello LangChain maintainers,
this PR integrates
[vllm](https://vllm.readthedocs.io/en/latest/#) into LangChain. This PR
closes #8729.
This feature clearly depends on `vllm`, but I've seen other models
supported here depend on packages that are not included in the
pyproject.toml (e.g. `gpt4all`, `text-generation`) so I thought it was
the case for this as well.
@hwchase17, @baskaryan
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
@hwchase17, @baskaryan
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Nuno Campos <nuno@boringbits.io>
- Updated to use the newer, better function interaction
- The previous version had only one callback
- @hinthornw @hwchase17, can you look into this?
- Shout out to @MultiON_AI @DivGarg9 on Twitter
---------
Co-authored-by: Naman Garg <ngarg3@binghamton.edu>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Description: The lines I have changed look incorrectly escaped for
regex. In Python 3.11, I receive a DeprecationWarning for these lines.
You don't see any warnings unless you explicitly run Python with the `-W
always::DeprecationWarning` flag. So, this is my attempt to fix it.
Here are the warnings from log files:
```
/usr/local/lib/python3.11/site-packages/langchain/text_splitter.py:919: DeprecationWarning: invalid escape sequence '\s'
/usr/local/lib/python3.11/site-packages/langchain/text_splitter.py:918: DeprecationWarning: invalid escape sequence '\s'
/usr/local/lib/python3.11/site-packages/langchain/text_splitter.py:917: DeprecationWarning: invalid escape sequence '\s'
/usr/local/lib/python3.11/site-packages/langchain/text_splitter.py:916: DeprecationWarning: invalid escape sequence '\c'
/usr/local/lib/python3.11/site-packages/langchain/text_splitter.py:903: DeprecationWarning: invalid escape sequence '\*'
/usr/local/lib/python3.11/site-packages/langchain/text_splitter.py:804: DeprecationWarning: invalid escape sequence '\*'
/usr/local/lib/python3.11/site-packages/langchain/text_splitter.py:804: DeprecationWarning: invalid escape sequence '\*'
```
cc @baskaryan
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Description: This PR improves recursive_url_loader, adding features such
as limiting crawl depth and customizable extractors (from the raw webpage
to the text of the Document object), so users can plug in other tools to
extract the webpage content. This PR also includes documentation and a
test for the new loader.
The old PR (#7756) was closed due to a project structure change.
Because socket requests are not allowed, the old unit test was removed.
Issue: N/A
Dependencies: asyncio, aiohttp
Tag maintainer: @rlancemartin
Twitter handle: @Zend_Nihility
---------
Co-authored-by: Lance Martin <lance@langchain.dev>
<!-- Thank you for contributing to LangChain!
Replace this comment with:
- Description: docstore had two main method: add and search, however,
dealing with docstore sometimes requires deleting an entry from
docstore. So I have added a simple delete method that deletes items from
docstore. Additionally, I have added the delete method to faiss
vectorstore for the very same reason.
- Issue: NA
- Dependencies: NA
- Tag maintainer: @rlancemartin, @eyurtsev
- Twitter handle: we announce bigger features on Twitter. If your PR
gets announced and you'd like a mention, we'll gladly shout you out!
Please make sure you're PR is passing linting and testing before
submitting. Run `make format`, `make lint` and `make test` to check this
locally.
If you're adding a new integration, please include:
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use.
Maintainer responsibilities:
- General / Misc / if you don't know who to tag: @baskaryan
- DataLoaders / VectorStores / Retrievers: @rlancemartin, @eyurtsev
- Models / Prompts: @hwchase17, @baskaryan
- Memory: @hwchase17
- Agents / Tools / Toolkits: @hinthornw
- Tracing / Callbacks: @agola11
- Async: @agola11
If no one reviews your PR within a few days, feel free to @-mention the
same people again.
See contribution guidelines for more information on how to write/run
tests, lint, etc:
https://github.com/hwchase17/langchain/blob/master/.github/CONTRIBUTING.md
-->
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
- Balancing prioritization between keyword / AI search
- Show snippets of highlighted keywords when searching
- Improved keyword search
- Fixed bugs and issues
Shoutout to @calebpeffer for implementing and gathering feedback on it
cc: @dev2049 @rlancemartin @hwchase17
begining -> beginning
Fixes issue #7616 with a simpler approach to extracting function names
(using the `__name__` attribute).
@hwchase17
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Fixes for #8786 @agola11
- Description: The callback flow was breaking before reaching the last
chain, as callbacks were dropped between chains along nested paths. This
fix makes it possible to get a full trace and to correlate parent-child
relationships in all nested chains.
- Issue: #8786
- Dependencies: NA
- Tag maintainer: @agola11
- Twitter handle: Agarwal_Ankur
Description: When using a ReAct Agent with tools and no tool is found,
InvalidTool gets called. Previously it just asked for a different
action, but I've found that listing the available actions improves the
chances of getting a valid action in the next round. I've also added a
unit test for it.
@hinthornw
# What
- Add missing test for retrievers self_query
- Add missing import validation
- Issue: None
- Dependencies: None
- Tag maintainer: @rlancemartin, @eyurtsev
- Twitter handle: @MlopsJ
- Description: Two links were not working on the Question Answering Use
Cases documentation page, so I changed them to the nearest useful links.
- Issue: NA,
- Dependencies: NA,
- Tag maintainer: @baskaryan,
- Twitter handle: NA
- Description: We expose the Kendra result item ID and document ID as
document metadata.
- Tag maintainer: @3coins @baskaryan
- Twitter handle: wilsonleao
**Why**
The result item id and document id might be used to keep track of the
retrieved resources.
Refactor for the extraction use case documentation
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Lance Martin <lance@langchain.dev>
Added a couple of "integration tests" for these that I ran.
Main design point for feedback: at this point, would it just be better to
have separate arguments for each type? It's a little confusing what is or
isn't supported and what the intended usage is, since I try to wrap the
function as a runnable, or pack or unpack chains/LLMs:
```
run_on_dataset(
    ...
    llm_or_chain_factory=None,
    llm=None,
    chain=None,
    runnable=None,
    function=None,
):
    # raise error if none set
```
The downside with runnable and arbitrary function support is that you get
much less helpful validation and error messages, but I don't think we
should block you from it, at least.
* Documentation to favor creation without declaring input_variables
* Cut out obvious examples, but add more description in a few places
---------
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Update API reference documentation. This PR will pick up a number of missing classes; it also applies selective formatting based on the class / object type.
Resolves an occasional JSON parsing error when some predictions are
passed through a `MultiPromptChain`.
Makes [this
modification](https://github.com/langchain-ai/langchain/issues/5163#issuecomment-1652220401)
to `multi_prompt_prompt.py`, which is much cleaner than appending an
entire example object, which is another community-reported solution.
@hwchase17, @baskaryan
cc: @SimasJan
- Description: Added a missing word and rearranged a sentence in the
documentation of Self Query Retrievers.
- Issue: NA,
- Dependencies: NA,
- Tag maintainer: @baskaryan,
- Twitter handle: NA
Thanks for your time.
llama.cpp params (per their own code) are unstable, so instead of
adding/deleting them constantly, this adds a `model_kwargs` parameter
that allows arbitrary additional kwargs.
cc @jsjolund and @zacps re #8599 and #8704
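A sketch of the escape hatch (the key shown is illustrative; anything llama-cpp-python accepts can be forwarded):
```python
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="/path/to/model.bin",
    # Forward arbitrary, possibly unstable llama-cpp-python kwargs here.
    model_kwargs={"rope_freq_scale": 0.5},
)
```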
There is already a `loads()` function which takes a JSON string and
loads it using the Reviver.
But in the callbacks system, the `serialized` object passed in is
already a deserialized, JSON-compatible object. This change allows you
to call `load(serialized)` and bypass the intermediate JSON encoding.
I found one other place in the code that benefited from this
short-circuiting (string_run_evaluator.py) so I fixed that too.
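A sketch of the difference (`chain` stands in for any serializable LangChain object; import paths per the `langchain.load` module):
```python
from langchain.load.dump import dumpd, dumps
from langchain.load.load import load, loads

serialized = dumpd(chain)           # already a JSON-compatible dict
restored = load(serialized)         # new: no intermediate JSON string
restored_too = loads(dumps(chain))  # the old round-trip through a string
```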
Tagging @baskaryan for general/utility stuff.
---------
Co-authored-by: Nuno Campos <nuno@boringbits.io>
Description: Add ScaNN vectorstore to langchain.
ScaNN is an open-source, high-performance vector similarity library
optimized for AVX2-enabled CPUs.
https://github.com/google-research/google-research/tree/master/scann
- Dependencies: scann
Python notebook to illustrate the usage:
docs/extras/integrations/vectorstores/scann.ipynb
Integration test:
libs/langchain/tests/integration_tests/vectorstores/test_scann.py
@rlancemartin, @eyurtsev for review.
Thanks!
This PR updates _load_reduce_documents_chain to handle
`reduce_documents_chain` and `combine_documents_chain` config
Please review @hwchase17, @baskaryan
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
# What
- This adds a filter option to the sklearn vector store functions
- Description: Add filter to sklearn vector store functions.
- Issue: None
- Dependencies: None
- Tag maintainer: @rlancemartin, @eyurtsev
- Twitter handle: @MlopsJ
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
This is to add save_local and load_local to tfidf_vectorizer and docs in
tfidf_retriever to make the vectorizer reusable.
- Description: add save_local and load_local to tfidf_vectorizer and
docs in tfidf_retriever
- Issue: None
- Dependencies: None
- Tag maintainer: @rlancemartin, @eyurtsev
- Twitter handle: @MlopsJ
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Removing the score threshold parameter of faiss's
_similarity_search_with_relevance_scores, as the thresholding is
implemented in the similarity_search_with_relevance_scores method which
calls this one.
Since this method is supposed to be a private method of faiss.py, it
will never receive the score threshold parameter: it is popped in the
super method similarity_search_with_relevance_scores.
@baskaryan @hwchase17
Just a tiny change to use `list.append(...)` and `list.extend(...)`
instead of `list += [...]` so that no unnecessary temporary lists are
created.
Since it's a tiny miscellaneous thing, I guess @baskaryan is the
maintainer to tag?
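For illustration, the kind of rewrite this covers:
```python
docs = []
new_doc, more_docs = "a", ["b", "c"]

# Before: the right-hand side builds a throwaway one-element list.
docs += [new_doc]

# After: append in place, no temporary list.
docs.append(new_doc)

# For several items, extend in place from any iterable.
docs.extend(more_docs)
```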
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Simple retriever that applies an LLM between the user input and the
query passed to the retriever.
It can be used to pre-process the user input in any way.
The default prompt:
```
DEFAULT_QUERY_PROMPT = PromptTemplate(
input_variables=["question"],
template="""You are an assistant tasked with taking a natural languge query from a user
and converting it into a query for a vectorstore. In this process, you strip out
information that is not relevant for the retrieval task. Here is the user query: {question} """
)
```
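A hedged usage sketch; the class name `RePhraseQueryRetriever` and its `from_llm` constructor are my assumptions for illustration, not confirmed by this message:
```python
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers import RePhraseQueryRetriever  # hypothetical name
from langchain.vectorstores import FAISS

base_retriever = FAISS.from_texts(
    ["LangChain ships many retrievers."], OpenAIEmbeddings()
).as_retriever()

# The LLM rewrites the raw user input into a cleaner vectorstore query.
retriever = RePhraseQueryRetriever.from_llm(
    retriever=base_retriever,
    llm=ChatOpenAI(temperature=0),
)
docs = retriever.get_relevant_documents(
    "Hi, I'm Lance! Could you tell me about LangChain retrievers? Thanks!"
)
```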
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
- Description:
- Provides a new attribute in the AmazonKendraRetriever which processes
a ResultItem and returns a string that will be used as page_content;
- The excerpt metadata should not be changed; it is kept as retrieved,
but it is cleaned when composing the page_content;
- Refactors the AmazonKendraRetriever to improve code reusability;
- Issue: #7787
- Tag maintainer: @3coins @baskaryan
- Twitter handle: wilsonleao
**Why?**
Some use cases need to adjust the page_content by dynamically combining
the ResultItem attributes depending on the context of the item.
#7854
Added the ability to use the `separator` as a regex or a simple
character.
Fixed a bug where `start_index` was incorrectly counting from -1.
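A hedged sketch of the regex usage; the `is_separator_regex` flag name is my assumption from the description:
```python
from langchain.text_splitter import CharacterTextSplitter

splitter = CharacterTextSplitter(
    separator=r"\n\n+",        # interpreted as a regex when the flag is set
    is_separator_regex=True,   # assumed parameter name
    chunk_size=50,
    chunk_overlap=0,
)
chunks = splitter.split_text("one\n\n\ntwo\n\nthree")
```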
Who can review?
@eyurtsev
@hwchase17
@mmz-001
When using AzureChatOpenAI the openai_api_type defaults to "azure". The
utils' get_from_dict_or_env() function triggered by the root validator
does not look for user-provided values in the OPENAI_API_TYPE
environment variable, so other values like "azure_ad" are replaced with
"azure". This prevents the use of token-based auth.
By removing the "default" value, this allows environment variables to be
pulled at runtime for the openai_api_type and thus enables the other
api_types which are expected to work.
This fixes #6650
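A hedged sketch of the intended effect (env/parameter names per the description above; other required Azure settings omitted):
```python
import os

# With the hard-coded default removed, the env var is honored at runtime.
os.environ["OPENAI_API_TYPE"] = "azure_ad"
# OPENAI_API_BASE, OPENAI_API_VERSION, and a token/key are also needed
# for a real call; omitted here.

from langchain.chat_models import AzureChatOpenAI

llm = AzureChatOpenAI(deployment_name="my-deployment")
# llm.openai_api_type now resolves to "azure_ad" instead of being
# overwritten with "azure".
```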
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description: updates to Vectara documentation with more details on how
to get started.
- Issue: NA
- Dependencies: NA
- Tag maintainer: @rlancemartin, @eyurtsev
- Twitter handle: @vectara, @ofermend
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
This lets you pass callbacks when you create the summarize chain:
```
summarize = load_summarize_chain(llm, chain_type="map_reduce", callbacks=[my_callbacks])
summary = summarize(documents)
```
See #5572 for a similar surgical fix.
tagging @hwchase17 for callbacks work
This is another case, similar to #5572 and #7565 where the callbacks are
getting dropped during construction of the chains.
tagging @hwchase17 and @agola11 for callbacks propagation
Description: I have added two methods, serializer and deserializer.
There was a method called save_local, but it saves to the local disk. I
wanted the vectorstore in a format I could push to a SQL database's blob
field. I have used this while I was working on something.
@rlancemartin, @eyurtsev
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
It fails currently because the event loop is already running.
The `retry` decorator already infers an `AsyncRetrying` handler for
coroutines (see [tenacity
line](aa6f8f0a24/tenacity/__init__.py (L535)))
However before_sleep always gets called synchronously (see [tenacity
line](aa6f8f0a24/tenacity/__init__.py (L338))).
Instead, check for a running loop and use it if one exists. Of course,
it's running an async method synchronously, which is not _nice_. Given
how important LLMs are, it may make sense to have a task list or
something but I'd want to chat with @nfcampos on where that would live.
This PR also fixes the unit tests to check the handler is called and to
make sure the async test is run (it looks like it had just been getting
skipped). It would have failed prior to the proposed fixes but passes
now.
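A rough sketch of the idea, not the literal patch: schedule the coroutine on the running loop when one exists, otherwise fall back to running it to completion.
```python
import asyncio

def run_before_sleep(coro_fn, retry_state):
    # tenacity invokes before_sleep synchronously; if an event loop is
    # already running, schedule the coroutine on it instead of blocking.
    try:
        loop = asyncio.get_event_loop()
    except RuntimeError:
        loop = None
    if loop is not None and loop.is_running():
        loop.create_task(coro_fn(retry_state))
    else:
        asyncio.run(coro_fn(retry_state))
```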
- Description: added a document loader for a list of RSS feeds or OPML.
It iterates through the list and uses NewsURLLoader to load each
article.
- Issue: N/A
- Dependencies: feedparser, listparser
- Tag maintainer: @rlancemartin, @eyurtsev
- Twitter handle: @ruze
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Solves #8644
This embedding model outputs identical random embedding vectors when
the input texts are identical.
Useful in unit tests.
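A minimal sketch of the idea (my own illustration, not the merged code): seed the RNG from the text so identical inputs always yield identical vectors.
```python
import hashlib

import numpy as np

def deterministic_fake_embedding(text: str, size: int = 10) -> list:
    # Hash the text into a seed: same text, same "random" vector.
    seed = int(hashlib.sha256(text.encode("utf-8")).hexdigest(), 16) % (2**32)
    rng = np.random.default_rng(seed)
    return rng.random(size).tolist()

assert deterministic_fake_embedding("hello") == deterministic_fake_embedding("hello")
```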
@baskaryan
### Description
Fixes a grammar issue I noticed when reading through the documentation.
### Maintainers
@baskaryan
Co-authored-by: mmillerick <mmillerick@blend.com>
## Description:
1) The map reduce example in the docs is missing an important import
statement. Figured other people would benefit from being able to copy 🍝
the code.
2) The RefineDocumentsChain example is also broken.
## Issue:
None
## Dependencies:
None. One liner.
## Tag maintainer:
@baskaryan
## Twitter handle:
I mean, it's a one line fix lol. But @will_thompson_k is my twitter
handle.
This small PR introduces new parameters into Qdrant (`on_disk`), fixes
some tests, and makes the error message clearer.
Tagging: @baskaryan, @rlancemartin, @eyurtsev
- Description: run the poetry dependencies
- Issue: #7329
- Tag maintainer: @rlancemartin
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
### Description
OpenSearch supports authentication using both Master Credentials
(username and password) and IAM. For Master Credentials, users will not
pass the `service` argument in `http_auth`, and the existing code will
break. To fix this, I have updated the condition to check whether the
service attribute is present in http_auth before accessing it.
### Maintainers
@baskaryan @navneet1v
Signed-off-by: Naveen Tatikonda <navtat@amazon.com>
Description - Integrates Fireworks within Langchain LLMs to allow users
to use Fireworks models with Langchain, mainly for summarization.
Issue - Not applicable
Dependencies - None
Tag maintainer - @rlancemartin
---------
Co-authored-by: Raj Janardhan <rajjanardhan@Rajs-Laptop.attlocal.net>
Existing implementation requires that you install `firebase-admin`
package, and prevents you from using an existing Firestore client
instance if available.
This adds optional `firestore_client` param to
`FirestoreChatMessageHistory`, so users can just use their existing
client/settings. If not passed, existing logic executes to initialize a
`firestore_client`.
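A hedged usage sketch (import path and constructor arguments assumed; `firestore_client` per this PR):
```python
from google.cloud import firestore

# Assumed import path for illustration.
from langchain.memory.chat_message_histories import FirestoreChatMessageHistory

client = firestore.Client()  # reuse your app's existing client/settings
history = FirestoreChatMessageHistory(
    collection_name="chat_history",
    session_id="session-1",
    user_id="user-1",
    firestore_client=client,  # new optional param from this PR
)
```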
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Add a StreamlitChatMessageHistory class that stores chat messages in
[Streamlit's Session
State](https://docs.streamlit.io/library/api-reference/session-state).
Note: The integration test uses a currently-experimental Streamlit
testing framework to simulate the execution of a Streamlit app. Marking
this PR as draft until I confirm with the Streamlit team that we're
comfortable supporting it.
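A hedged usage sketch (import path assumed): messages live in `st.session_state`, so they survive script reruns.
```python
import streamlit as st

# Assumed import path for illustration.
from langchain.memory import StreamlitChatMessageHistory

history = StreamlitChatMessageHistory(key="chat_messages")
if not history.messages:
    history.add_ai_message("How can I help you?")
for msg in history.messages:
    st.write(f"{msg.type}: {msg.content}")
```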
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
### Summary
Updates the `unstructured` install instructions. For
`unstructured>=0.9.0`, dependencies are broken out by document type and
the base `unstructured` package includes fewer dependencies. `pip
install "unstructured[local-inference]"` has been replace by `pip
install "unstructured[all-docs]"`, though the `local-inference` extra is
still supported for the time being.
### Reviewers
- @rlancemartin
- @eyurtsev
- @hwchase17
- Description: added memgraph_graph.py, which defines the MemgraphGraph
class, subclassing the existing Neo4jGraph class. This lets you
query the Memgraph graph database using natural language. It leverages
the Neo4j drivers and the bolt protocol.
- Dependencies: since it is a subclass of Neo4jGraph, it depends on it
and on the GraphCypherQA Chain implementations. It requires the Neo4j
drivers to be present and a running Memgraph instance to connect to.
- Tag maintainer: @baskaryan
- Twitter handle: @villageideate
- example usage can be seen in this repo
https://github.com/brettdbrewer/MemgraphGraph/
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
## Description
This PR implements a callback handler for SageMaker Experiments which is
similar to that of mlflow.
* When creating the callback handler, it takes the experiment's run
object as an argument. All the callback outputs are then logged to the
run object.
* The output of each callback action (e.g., `on_llm_start`) is saved to
S3 bucket as json file.
* Optionally, you can also log additional information such as the LLM
hyper-parameters to the same run object.
* Once the callback object is no longer needed, you will need to call
the `flush_tracker()` method. This makes sure that any intermediate
files are deleted.
* A separate notebook example is provided to show how the callback is
used.
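A hedged usage sketch (class and method names assumed from the description; the included notebook is authoritative):
```python
from sagemaker.experiments.run import Run

from langchain.callbacks import SageMakerCallbackHandler  # assumed import
from langchain.llms import OpenAI

with Run(experiment_name="langchain-demo", run_name="trial-1") as run:
    handler = SageMakerCallbackHandler(run)  # callback outputs logged to `run`
    llm = OpenAI(callbacks=[handler])
    llm("Tell me a joke")
    handler.flush_tracker()  # delete intermediate files once done
```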
@3coins @agola11
---------
Co-authored-by: Tesfagabir Meharizghi <mehariz@amazon.com>
Description: Made Chroma constructor more robust when client_settings is
provided. Otherwise, existing embeddings will not be loaded correctly
from Chroma.
Issue: #7804
Dependencies: None
Tag maintainer: @rlancemartin, @eyurtsev
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Description:
This PR adds support for loading documents from Huawei OBS (Object
Storage Service) in Langchain. OBS is a cloud-based object storage
service provided by Huawei Cloud. With this enhancement, Langchain users
can now easily access and load documents stored in Huawei OBS directly
into the system.
Key Changes:
- Added a new document loader module specifically for Huawei OBS
integration.
- Implemented the necessary logic to authenticate and connect to Huawei
OBS using access credentials.
- Enabled the loading of individual documents from a specified bucket
and object key in Huawei OBS.
- Provided the option to specify custom authentication information or
obtain security tokens from Huawei Cloud ECS for easy access.
How to Test:
1. Ensure the required package "esdk-obs-python" is installed.
2. Configure the endpoint, access key, secret key, and bucket details
for Huawei OBS in the Langchain settings.
3. Load documents from Huawei OBS using the updated document loader
module.
4. Verify that documents are successfully retrieved and loaded into
Langchain for further processing.
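A hedged sketch of steps 2 and 3 (all class and parameter names below are assumptions for illustration):
```python
# Names assumed for illustration; see the loader's documentation.
from langchain.document_loaders import OBSFileLoader

loader = OBSFileLoader(
    bucket="my-bucket",
    key="docs/report.txt",
    endpoint="https://obs.ap-southeast-1.myhuaweicloud.com",
    config={"ak": "<access-key>", "sk": "<secret-key>"},
)
docs = loader.load()
```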
Please review this PR and let us know if any further improvements are
needed. Your feedback is highly appreciated!
@rlancemartin, @eyurtsev
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- allow overriding run_type in on_chain_start
From my understanding, the `check_repeated_memory_variable` validator
will raise an error if any of the variables in the `memories` list are
repeated. However, the `load_memory_variables` method does not check for
repeated variables. This means it is possible for the `CombinedMemory`
instance to return a dictionary of memory variables that contains
duplicate values. This code checks for repeated variables in the `data`
dictionary returned by the `load_memory_variables` method of each
sub-memory; if a repeated variable is found, an error is raised.
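A sketch of the described check (my paraphrase of the change, not the verbatim diff):
```python
from typing import Any, Dict, List

def combined_load_memory_variables(
    memories: List[Any], inputs: Dict[str, Any]
) -> Dict[str, str]:
    memory_data: Dict[str, str] = {}
    for memory in memories:
        data = memory.load_memory_variables(inputs)
        for key, value in data.items():
            if key in memory_data:
                # Two sub-memories returned the same variable: fail loudly.
                raise ValueError(
                    f"The variable {key} is repeated in the CombinedMemory."
                )
            memory_data[key] = value
    return memory_data
```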
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
- Description: Adds an optional buffer arg to the memory's
from_messages() method. If provided the existing memory will be loaded
instead of regenerating a summary from the loaded messages.
Why? If we have past messages to load from, it is likely we also have an
existing summary. This is particularly helpful in cases where the chat
is ephemeral and/or is backed by serverless where the chat history is
not stored but where the updated chat history is passed back and forth
between a backend/frontend.
Eg: Take a stateless QA backend implementation that loads messages on
every request and generates a response. Without this addition, each
time the messages are loaded via from_messages, the summaries are
recomputed even though they may have just been computed during the
previous response. With this, the previously computed summary can be
passed in and avoid:
1) spending extra $$$ on tokens, and
2) increased response time from regenerating a previously generated
summary.
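A hedged usage sketch (the `buffer` argument name is taken from this description; the rest is illustrative):
```python
from langchain.llms import OpenAI
from langchain.memory import ChatMessageHistory, ConversationSummaryMemory

history = ChatMessageHistory()
history.add_user_message("hi")
history.add_ai_message("hello!")

previous_summary = "The human greets the AI and the AI greets back."
memory = ConversationSummaryMemory.from_messages(
    llm=OpenAI(temperature=0),
    chat_memory=history,
    buffer=previous_summary,  # reuse the existing summary, skip re-summarizing
)
```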
Tag maintainer: @hwchase17
Twitter handle: https://twitter.com/ShantanuNair
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
- Description: updated BabyAGI examples to append the iteration to the
result id to fix error storing data to vectorstore.
- Issue: 7445
- Dependencies: no
- Tag maintainer: @eyurtsev
This fix worked for me locally. Happy to take some feedback and iterate
on a better solution. I was considering appending a UUID instead but
didn't want to overcomplicate the example.
…call, it needs retry
Co-authored-by: yangdihang <yangdihang@bytedance.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Works just like the GenericLoader but concurrently for those who choose
to optimize their workflow.
@rlancemartin @eyurtsev
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Description: Using Azure Cognitive Search as a VectorStore. Calling the
`add_texts` method throws an error if there is no metadata property
specified. The `additional_fields` field is set inside an `if` statement
and then used later outside of it. This PR just moves the declaration of
`additional_fields` down and puts its usage in context.
Issue: https://github.com/langchain-ai/langchain/issues/8544
Tagging @rlancemartin, @eyurtsev as this is related to Vector stores.
`make format`, `make lint`, `make spellcheck`, and `make test` have been
run
- Description: Follow up of #8478
- Issue: #8477
- Dependencies: None
- Tag maintainer: @baskaryan
- Twitter handle: [@BharatR123](twitter.com/BharatR123)
The links were still broken after #8478 and sadly the issue was not
caught by either the Vercel app build or `make docs_linkcheck`.
- Description: This pull request (PR) includes two minor changes:
1. Updated the default prompt for the SQL Query Checker: the current
prompt does not clearly specify the final response that the LLM should
provide when checking the query if `use_query_checker` is enabled in
SQLDatabaseChain. As a result, the LLM adds extra words like "Here is
your updated query" to the response, which causes a syntax error when
executing the SQL command in SQLDatabaseChain, as these additional
words are also included in the SQL query.
2. Moved the query's execution part into a separate method for
SQLDatabase: The purpose of this change is to provide users with more
flexibility when obtaining the result of an SQL query in the original
form returned by sqlalchemy. In the previous implementation, the run
method returned the results as a string. By creating a distinct method
for execution, users can now receive the results in original format,
which proves helpful in various scenarios. For example, during the
development of a tool, I found it advantageous to obtain results in
original format rather than a string, as currently done by the run
method.
- Tag maintainer: @hinthornw
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
This PR makes minor improvements to our python notebook, and adds
support for `Rockset` workspaces in our vectorstore client.
@rlancemartin, @eyurtsev
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
**Description:**
In this pull request, GitLoader has been updated to handle multiple load
calls, provided the same repository is being cloned. Previously, calling
`load` multiple times would raise an error if a clone URL was provided.
Additionally, a check has been added to raise a ValueError when
attempting to clone a different repository into an existing path.
New tests have also been introduced to verify the correct behavior of
the GitLoader class when `load` is called multiple times.
Lastly, the GitPython package, a dependency for the GitLoader class, has
been added to the project dependencies (pyproject.toml and poetry.lock).
**Issue:**
None
**Dependencies:**
GitPython
**Tag maintainer:**
- DataLoaders / VectorStores / Retrievers: @rlancemartin, @eyurtsev
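A hedged usage sketch of the behavior described above:
```python
from langchain.document_loaders import GitLoader

loader = GitLoader(
    clone_url="https://github.com/langchain-ai/langchain",
    repo_path="./example_repo",
    branch="master",
)
docs = loader.load()
docs_again = loader.load()  # previously raised; now reuses the existing clone
```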
## Description
This PR handles modifying the Chroma DB integration's documentation.
It modifies the **Docker container** example to fix the instructions
mentioned in the documentation.
In the current documentation, the below `client.reset()` line causes a
runtime error:
```py
...
client = chromadb.HttpClient(settings=Settings(allow_reset=True))
client.reset() # resets the database
collection = client.create_collection("my_collection")
...
```
`Exception: {"error":"ValueError('Resetting is not allowed by this
configuration')"}`
This is due to the Chroma DB server needing to have the `allow_reset`
flag set to `true` there as well.
This is fixed by adding `ALLOW_RESET=TRUE` to the environment variables
in the `docker-compose` file before spinning the container up.
## Issue
This fixes the runtime error that occurs when running the docker
container example code
## Tag Maintainer
@rlancemartin, @eyurtsev
## Description
The imports for `NeptuneOpenCypherQAChain` are failing. This PR adds the
chain class to the `__init__.py` file to fix this issue.
## Maintainers
@dev2049
@krlawrence
Docs for from_documents() were outdated as seen in
https://github.com/langchain-ai/langchain/issues/8457 .
fixes #8457
### Description
In the LangChain documentation and comments, I noticed that `pip
install faiss` was mentioned instead of `pip install faiss-gpu`, since
installing `pip install faiss` results in an error. I've gone ahead and
updated the documentation and `faiss.ipynb`. This change will ensure
ease of use for the end user trying to install `faiss-gpu`.
### Issue:
Documentation / Comments Related.
### Dependencies:
No dependencies were changed; only the files with the wrong reference
were updated.
### Tag maintainer:
@rlancemartin, @eyurtsev (Thank You for your contributions 😄 )
# What
- add test to ensure values in time weighted retriever are updated
- Description: add test to ensure values in time weighted retriever are
updated
- Issue: None
- Dependencies: None
- Tag maintainer: @rlancemartin, @eyurtsev
- Twitter handle: @MlopsJ
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
- Make _arun optional
- Pass run_manager to inner chains in tools that have them
---------
Co-authored-by: Nuno Campos <nuno@boringbits.io>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
- Install langchain
- Set Pinecone API key and environment as env vars
- Create Pinecone index if it doesn't already exist
---
- Description: Fix a couple minor issues I came across when running this
notebook,
- Dependencies: none,
- Tag maintainer: @rlancemartin @eyurtsev,
- Twitter handle: @zackproser (certainly not necessary!)
**Description:**
Add support for Meilisearch vector store.
Resolves #7603
- No external dependencies added
- A notebook has been added
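A hedged usage sketch (import path and constructor arguments are assumptions; see the added notebook):
```python
import meilisearch

from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Meilisearch  # assumed import

client = meilisearch.Client("http://127.0.0.1:7700", "masterKey")
vectorstore = Meilisearch.from_texts(
    texts=["hello world", "goodbye world"],
    embedding=OpenAIEmbeddings(),
    client=client,  # assumed parameter name
)
docs = vectorstore.similarity_search("greeting", k=1)
```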
@rlancemartin
https://twitter.com/meilisearch
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description: The contribution guidelines on using the devcontainer
refer to the main repo and not the forked repo. We should create our
changes in our own forked repo, not on langchain/main.
- Issue: Just documentation
- Dependencies: N/A,
- Tag maintainer: @baskaryan
- Twitter handle: @levalencia
# PromptTemplate
* Update documentation to highlight the classmethod for instantiating a
prompt template.
* Expand kwargs in the classmethod to make parameters easier to discover
This PR got reverted here:
https://github.com/langchain-ai/langchain/pull/8395/files
* Expands support for a variety of message formats in the
`from_messages` classmethod. Ideally, we could deprecate the other
on-ramps to reduce the number of classmethods users need to know about.
* Expand documentation with code examples.
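For instance, a hedged sketch of the tuple shorthand (the exact set of supported formats is per this PR, which is not reproduced verbatim here):
```python
from langchain.prompts import ChatPromptTemplate

# (role, template) tuples as a shorthand message format.
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant named {name}."),
    ("human", "{question}"),
])
messages = prompt.format_messages(name="Bob", question="What is 2 + 2?")
```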
- Description: Minimax is a great AI startup from China; recently they
released their latest model and chat API, and the API is widely used
in China. As a result, I'd like to add the Minimax llm model to
Langchain.
- Tag maintainer: @hwchase17, @baskaryan
---------
Co-authored-by: the <tao.he@hulu.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Micro convenience PR to avoid a warning regarding the missing `client`
parameter. It is always set during initialization.
@baskaryan
Co-authored-by: Bagatur <baskaryan@gmail.com>
- [Xorbits
Inference(Xinference)](https://github.com/xorbitsai/inference) is a
powerful and versatile library designed to serve language, speech
recognition, and multimodal models. Xinference supports a variety of
GGML-compatible models including chatglm, whisper, and vicuna, and
utilizes heterogeneous hardware and a distributed architecture for
seamless cross-device and cross-server model deployment.
- This PR integrates Xinference models and Xinference embeddings into
LangChain.
- Dependencies: To install the dependencies for this integration, run
`pip install "xinference[all]"`
- Example Usage:
To start a local instance of Xinference, run `xinference`.
To deploy Xinference in a distributed cluster, first start an Xinference
supervisor using `xinference-supervisor`:
`xinference-supervisor -H "${supervisor_host}"`
Then, start the Xinference workers using `xinference-worker` on each
server you want to run them on.
`xinference-worker -e "http://${supervisor_host}:9997"`
To use Xinference with LangChain, you also need to launch a model. You
can use the command line interface (CLI) to do so. For example: `xinference
launch -n vicuna-v1.3 -f ggmlv3 -q q4_0`. This launches a model named
vicuna-v1.3 with `model_format="ggmlv3"` and `quantization="q4_0"`. A
model UID is returned for you to use.
Now you can use Xinference with LangChain:
```python
from langchain.llms import Xinference
llm = Xinference(
server_url="http://0.0.0.0:9997", # suppose the supervisor_host is "0.0.0.0"
model_uid = {model_uid} # model UID returned from launching a model
)
llm(
prompt="Q: where can we visit in the capital of France? A:",
generate_config={"max_tokens": 1024},
)
```
You can also use RESTful client to launch a model:
```python
from xinference.client import RESTfulClient
client = RESTfulClient("http://0.0.0.0:9997")
model_uid = client.launch_model(model_name="vicuna-v1.3", model_size_in_billions=7, quantization="q4_0")
```
The following code block demonstrates how to use Xinference embeddings
with LangChain:
```python
from langchain.embeddings import XinferenceEmbeddings
xinference = XinferenceEmbeddings(
server_url="http://0.0.0.0:9997",
model_uid = model_uid
)
```
```python
query_result = xinference.embed_query("This is a test query")
```
```python
doc_result = xinference.embed_documents(["text A", "text B"])
```
Xinference is still under rapid development. Feel free to [join our
Slack
community](https://xorbitsio.slack.com/join/shared_invite/zt-1z3zsm9ep-87yI9YZ_B79HLB2ccTq4WA)
to get the latest updates!
- Request for review: @hwchase17, @baskaryan
- Twitter handle: https://twitter.com/Xorbitsio
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Added a new tool to the Github toolkit called **Create Pull Request.**
Now we can make our own langchain contributor in langchain 😁
In order to have somewhere to pull from, I also added a new env var,
"GITHUB_BASE_BRANCH." This will allow the existing env var,
"GITHUB_BRANCH," to be a working branch for the bot (so that it doesn't
have to always commit on the main/master). For example, if you want the
bot to work in a branch called `bot_dev` and your repo base is `main`,
you would set up the vars like:
```
GITHUB_BASE_BRANCH = "main"
GITHUB_BRANCH = "bot_dev"
```
Maintainer responsibilities:
- Agents / Tools / Toolkits: @hinthornw
# PromptTemplate
* Update documentation to highlight the classmethod for instantiating a
prompt template.
* Expand kwargs in the classmethod to make parameters easier to discover
In this PR:
- Removed restricted model loading logic for Petals-Bloom
- Removed petals imports (DistributedBloomForCausalLM,
BloomTokenizerFast)
- Instead imported more generalized versions of loader
(AutoDistributedModelForCausalLM, AutoTokenizer)
- Updated the Petals example notebook to allow for a successful
installation of Petals in Apple Silicon Macs
- Tag maintainer: @hwchase17, @baskaryan
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Description:
This PR will enable the Open API chain to work with valid Open API
specifications missing `description` and `summary` properties for path
and operation nodes in open api specs.
Since both the `description` and `summary` properties are declared
optional, we cannot be sure they are defined. This PR resolves the
problem by providing an empty (`''`) description as a fallback.
The previous behavior of the Open API chain was that the underlying LLM
(OpenAI) threw an exception since `None` is not of type string:
```
openai.error.InvalidRequestError: None is not of type 'string' - 'functions.0.description'
```
Using this PR the Open API chain will succeed also using Open API specs
lacking `description` and `summary` properties for path and operation
nodes.
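The fallback idea, sketched (attribute names assumed; the real patch lives in the OpenAPI spec conversion):
```python
from typing import Optional

def description_or_empty(description: Optional[str], summary: Optional[str]) -> str:
    # Fall back to the summary, then to an empty string, so the OpenAI
    # functions payload never receives None for `description`.
    return description or summary or ""
```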
Thanks for your amazing work !
Tag maintainer: @baskaryan
---------
Co-authored-by: Lars Gersmann <lars.gersmann@cm4all.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
1. Upgrade the AwaDB from v0.3.7 to v0.3.9
2. Change the default embedding to AwaEmbedding
---------
Co-authored-by: ljeagle <awadb.vincent@gmail.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
- Description: Adds the AwaEmbeddings class for embeddings, which
provides users with a convenient way to do fine-tuning, as well as
covering the potential need for multimodality
- Tag maintainer: @baskaryan
Create `Awa.ipynb`: an example notebook for AwaEmbeddings class
Modify `embeddings/__init__.py`: Import the class
Create `embeddings/awa.py`: The embedding class
Create `embeddings/test_awa.py`: The test file.
---------
Co-authored-by: taozhiwang <taozhiwa@gmail.com>
Full set of params are missing from Vertex* LLMs when `dict()` method is
called.
```
>>> from langchain.chat_models.vertexai import ChatVertexAI
>>> from langchain.llms.vertexai import VertexAI
>>> chat_llm = ChatVertexAI()
>>> llm = VertexAI()
>>> chat_llm.dict()
{'_type': 'vertexai'}
>>> llm.dict()
{'_type': 'vertexai'}
```
This PR just uses the same mechanism used elsewhere to expose the full
params.
Since `_identifying_params()` is on the `_VertexAICommon` class, it
should cover the chat and non-chat cases.
Spelling error fix
## Description
This commit introduces the `DropboxLoader` class, a new document loader
that allows loading files from Dropbox into the application. The loader
relies on a Dropbox app, which requires creating an app on Dropbox,
obtaining the necessary scope permissions, and generating an access
token. Additionally, the dropbox Python package is required.
The `DropboxLoader` class is designed to be used as a document loader
for processing various file types, including text files, PDFs, and
Dropbox Paper files.
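A hedged usage sketch (all parameter names assumed for illustration; check the integration docs):
```python
from langchain.document_loaders import DropboxLoader

loader = DropboxLoader(
    dropbox_access_token="<access-token>",   # assumed parameter name
    dropbox_folder_path="/shared/reports",   # assumed parameter name
)
docs = loader.load()
```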
## Dependencies
`pip install dropbox` and `pip install unstructured` for PDF reading.
## Tag maintainer
@rlancemartin, @eyurtsev (from Data Loaders). I'd appreciate some
feedback here 🙏 .
## Social Networks
https://github.com/rubenbarragan
https://www.linkedin.com/in/rgbarragan/
https://twitter.com/RubenBarraganP
---------
Co-authored-by: Ruben Barragan <rbarragan@Rubens-MacBook-Air.local>
Since the refactoring into the sub-projects `libs/langchain` and
`libs/experimental`, the `make` targets `format_diff` and `lint_diff` no
longer work when running `make` from these subdirectories. The reason is
that
```
PYTHON_FILES=$(shell git diff --name-only --diff-filter=d master | grep -E '\.py$$|\.ipynb$$')
```
generates paths relative to the project's root directory instead of the
corresponding subdirectories. This PR fixes this by adding the
`--relative` command line option.
- Tag maintainer: @baskaryan
# [WIP] Tree of Thought introducing a new ToTChain.
This PR adds a new chain called ToTChain that implements the ["Large
Language Model Guided
Tree-of-Thought"](https://arxiv.org/pdf/2305.08291.pdf) paper.
There's a notebook example `docs/modules/chains/examples/tot.ipynb` that
shows how to use it.
Implements #4975
## Who can review?
Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:
- @hwchase17
- @vowelparrot
---------
Co-authored-by: Vadim Gubergrits <vgubergrits@outbox.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Optimizing important numerical code and making it run faster.
Runtime went down from 138715us to 56020us, roughly a 2.48x speedup
(148% faster).
Optimization explanation:
The `cosine_similarity_top_k` function is where we made the most
significant optimizations.
Instead of sorting the entire score_array, which requires considering
all elements, `np.argpartition` is used to find the indices of the
top_k largest scores; this operation has O(n) time complexity, faster
than a full sort. Remember, `np.argpartition` doesn't guarantee the
order of the values, so we use argsort() to get the indices that would
sort our top-k values after partitioning, which is much more efficient
because it only sorts the top-k elements, not the entire array. Then,
to get the row and column indices of the sorted top_k scores in the
original score array, we use `np.unravel_index`. This operation is
more efficient and cleaner than a list comprehension.
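A sketch of the described approach (not the verbatim patch):
```python
import numpy as np

def top_k_with_argpartition(score_array: np.ndarray, top_k: int):
    flat = score_array.flatten()
    # O(n): move the top_k largest scores to the end, in no particular order.
    top_idx = np.argpartition(flat, -top_k)[-top_k:]
    # Sort only those top_k entries (descending by score).
    top_idx = top_idx[np.argsort(-flat[top_idx])]
    # Map the flat indices back to (row, col) pairs in the original matrix.
    rows, cols = np.unravel_index(top_idx, score_array.shape)
    return list(zip(rows, cols)), flat[top_idx]
```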
The code has been tested for correctness by running the following
snippet on both the original function and the optimized function and
averaged over 5 times.
```
def test_cosine_similarity_top_k_large_matrices():
    X = np.random.rand(1000, 1000)
    Y = np.random.rand(1000, 1000)
    top_k = 100
    score_threshold = 0.5
    gc.disable()
    counter = time.perf_counter_ns()
    return_value = cosine_similarity_top_k(X, Y, top_k, score_threshold)
    duration = time.perf_counter_ns() - counter
    gc.enable()
```
@hwaking @hwchase17 @jerwelborn
Unit tests pass, I also generated more regression tests which all
passed.
Description: Adding support for custom index and scoring profile support
in Azure Cognitive Search
@hwchase17
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
This change compacts the left-side Navbar (ToC) of the [API
Reference](https://api.python.langchain.com/en/latest/api_reference.html).
Currently, almost every namespace item is split across two lines. For
example:
`langchain.chat_models: Chat Models`
We remove the `Chat Models` part and keep only `langchain.chat_models`.
This effectively compacts the navbar and increases the main page's
usability. On my screen, it reduces the number of lines in the ToC from
28 to 18, which is huge.
Removing the namespace "title" (like `Chat Models`) does not remove any
information, because the title is composed directly from the namespace.
API Reference users are developers. Usability for them is very
important. The less text we see, the faster we find things.
This PR introduces async API support for Cohere, both LLM and
embeddings. It requires updating `cohere` package to `^4`.
Tagging @hwchase17, @baskaryan, @agola11
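A hedged usage sketch (async method names assumed to follow the usual `agenerate`/`aembed_documents` convention):
```python
import asyncio

from langchain.embeddings import CohereEmbeddings
from langchain.llms import Cohere

async def main() -> None:
    llm = Cohere()
    result = await llm.agenerate(["Tell me a joke"])
    print(result.generations[0][0].text)

    embeddings = CohereEmbeddings()
    vectors = await embeddings.aembed_documents(["hello", "world"])
    print(len(vectors))

asyncio.run(main())
```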
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
# Description:
**Add the possibility to keep text as Markdown in the ConfluenceLoader**
Add a bool variable that allows keeping the Markdown format of the
Confluence pages.
It is useful because it allows using MarkdownHeaderTextSplitter as a
data splitter.
If this variable is set to True in the load() method, the pages are
extracted using the markdownify library.
# Issue:
[4407](https://github.com/langchain-ai/langchain/issues/4407)
# Dependencies:
Add the markdownify library
# Tag maintainer:
@rlancemartin, @eyurtsev
# Twitter handle:
FloBastinHeyI - https://twitter.com/FloBastinHeyI
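A hedged usage sketch (constructor arguments assumed; the `keep_markdown_format` flag per the description above):
```python
from langchain.document_loaders import ConfluenceLoader
from langchain.text_splitter import MarkdownHeaderTextSplitter

loader = ConfluenceLoader(
    url="https://example.atlassian.net/wiki", token="<token>"
)
docs = loader.load(space_key="DOC", keep_markdown_format=True)

splitter = MarkdownHeaderTextSplitter(
    headers_to_split_on=[("#", "h1"), ("##", "h2")]
)
sections = splitter.split_text(docs[0].page_content)
```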
---------
Co-authored-by: Florian Bastin <florian.bastin@octo.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Objects implementing Runnable: BasePromptTemplate, LLM, ChatModel,
Chain, Retriever, OutputParser
- [x] Implement Runnable in base Retriever
- [x] Raise TypeError in operator methods for unsupported things
- [x] Implement dict which calls values in parallel and outputs dict
with results
- [x] Merge in `+` for prompts
- [x] Confirm precedence order for operators, ideal would be `+` `|`,
https://docs.python.org/3/reference/expressions.html#operator-precedence
- [x] Add support for openai functions, ie. Chat Models must return
messages
- [x] Implement BaseMessageChunk return type for BaseChatModel, a
subclass of BaseMessage which implements `__add__` to return
BaseMessageChunk, concatenating all str args
- [x] Update the implementation of stream/astream for llm and chat
models to use the new `_stream`/`_astream` optional methods, with a
default implementation in the base class that raises
NotImplementedError; use https://stackoverflow.com/a/59762827 to check
whether it is implemented in the base class
- [x] Delete the IteratorCallbackHandler (leave the async one because
people are using it)
- [x] Make BaseLLMOutputParser implement Runnable, accepting either str
or BaseMessage
---------
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
The ElasticsearchVectorStore.as_retriever() method is returning
`RecursionError: maximum recursion depth exceeded`
because of an incorrect field reference in the `embeddings()` method.
- Description: Fix the RecursionError caused by a typo
- Issue: #8310
- Dependencies: None
- Tag maintainer: @eyurtsev
- Twitter handle: bpatel
- Description: I fixed an issue in the code snippet related to the
variable name and the evaluation of its length. The original code used
the variable "docs," but the correct variable name is "docs_svm" after
using the SVMRetriever.
- maintainer: @baskaryan
- Twitter handle: @iamreechi_
Co-authored-by: iamreechi <richieakparuorji>
Description:
I wanted to use the DuckDuckGoSearch tool in an agent to let it fetch
the latest news for a topic. DuckDuckGoSearch already has an
implemented function for retrieving news articles, but there wasn't a
tool to use it. I simply adapted the SearchResult class with an extra
argument, "backend". You can set it to "news" to get only news
articles.
Furthermore, I added an example to the DuckDuckGo notebook on how to
further customize the results by using the DuckDuckGoSearchAPIWrapper
(see the sketch below).
Dependencies: no new dependencies
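A sketch of the customization, assuming the tool is exposed as `DuckDuckGoSearchResults`:

```python
from langchain.tools import DuckDuckGoSearchResults
from langchain.utilities import DuckDuckGoSearchAPIWrapper

# Restrict results to the last day, then switch the backend to news articles.
wrapper = DuckDuckGoSearchAPIWrapper(region="us-en", time="d", max_results=5)
news_tool = DuckDuckGoSearchResults(api_wrapper=wrapper, backend="news")
print(news_tool.run("langchain"))
```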
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Description: in the .devcontainer, docker-compose build is currently
failing due to the src paths in the COPY command. This change adds the
full path to the pyproject.toml and poetry.toml to allow the build to
run.
Issue:
You can see the issue if you try to build the dev docker image with:
```
cd .devcontainer
docker-compose build
```
Dependencies: none
Twitter handle: byronsalty
- Description: During streaming, the first chunk may only contain the
name of an OpenAI function and not any arguments. In this case, the
current code presumes there is a streaming response and tries to append
to it, but gets a KeyError. This fixes that case by checking whether
the arguments key exists and, if not, creating a new entry instead of
appending.
- Issue: Related to #6462
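A minimal sketch of the guard described in the fix above (the chunk dicts are hypothetical stand-ins for the streamed deltas):

```python
# First chunk carried only the function name; a later chunk has arguments.
function_call = {"name": "python"}
_function_call = {"arguments": "print('Hello, World!')"}

# Create the entry if it doesn't exist yet instead of blindly appending.
if "arguments" in function_call:
    function_call["arguments"] += _function_call["arguments"]
else:
    function_call["arguments"] = _function_call.get("arguments", "")
```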
Sample Code:
```python
from langchain.agents import AgentType, initialize_agent
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chat_models import AzureChatOpenAI
from langchain.tools import PythonREPLTool

llm = AzureChatOpenAI(
    deployment_name=deployment_name,  # your Azure deployment name
    model_name=model_name,
    streaming=True,
)
tools = [PythonREPLTool()]
callbacks = [StreamingStdOutCallbackHandler()]
agent = initialize_agent(
    tools=tools,
    llm=llm,
    agent=AgentType.OPENAI_FUNCTIONS,
    callbacks=callbacks,
)
agent('Run some python code to test your interpreter')
```
Previous Result:
```
File ...langchain/chat_models/openai.py:344, in ChatOpenAI._generate(self, messages, stop, run_manager, **kwargs)
342 function_call = _function_call
343 else:
--> 344 function_call["arguments"] += _function_call["arguments"]
345 if run_manager:
346 run_manager.on_llm_new_token(token)
KeyError: 'arguments'
```
New Result:
```python
{'input': 'Run some python code to test your interpreter',
'output': "The Python code `print('Hello, World!')` has been executed successfully, and the output `Hello, World!` has been printed."}
```
Co-authored-by: jswe <jswe@polencapital.com>
- Description: Fix mangling issue affecting a couple of VectorStore
classes including Redis.
- Issue: https://github.com/langchain-ai/langchain/issues/8185
- @rlancemartin
This is a simple issue, but I lack some context on the original
implementation.
My changes are perhaps not the definitive fix, but they should start a
quick discussion.
@hinthornw Tagging you since one of your changes introduced this
[here.](c38965fcba)
I have some Prompt subclasses in my project that I'd like to be able to
deserialize in callbacks. Right now `loads()`/`load()` will bail when it
encounters my object, but I know I can trust the objects because they're
in my own projects.
### Description
This PR includes the following changes:
- Adds AOSS (Amazon OpenSearch Service Serverless) support to
OpenSearch. Please refer to the documentation on how to use it.
- While creating an index, AOSS only supports Approximate Search with
`nmslib` and `faiss` engines. During Search, only Approximate Search and
Script Scoring (on doc values) are supported.
- This PR also adds support to `efficient_filter` which can be used with
`faiss` and `lucene` engines.
- The `lucene_filter` is deprecated. Instead please use the
`efficient_filter` for the lucene engine.
Signed-off-by: Naveen Tatikonda <navtat@amazon.com>
Given a user question, this will -
* Use LLM to generate a set of queries.
* Query for each.
* The URLs from search results are stored in self.urls.
* A check is performed for any new URLs that haven't been processed yet
(not in self.url_database).
* Only these new URLs are loaded, transformed, and added to the
vectorstore.
* The vectorstore is queried for relevant documents based on the
questions generated by the LLM.
* Only unique documents are returned as the final result.
This code will avoid reprocessing of URLs across multiple runs of
similar queries, which should improve the performance of the retriever.
It also keeps track of all URLs that have been processed, which could be
useful for debugging or understanding the retriever's behavior.
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Added a quick check to make integration easier with Databricks; another
option would be to make a new class, but this seemed more
straightforward.
cc: @liangz1 Can this be done in a more straightforward way?
This PR removes operator overloading for base message.
Removing the `+` operator from base message will help make sure that:
1) There's no need to re-define `+` for message chunks
2) There's no unexpected behavior in terms of types changing
(adding two messages yields a ChatPromptTemplate, which is not a message)
- Description: Small change to fix broken Azure streaming. More complete
migration probably still necessary once the new API behavior is
finalized.
- Issue: Implements fix by @rock-you in #6462
- Dependencies: N/A
There don't seem to be any tests specifically for this, and I was having
some trouble adding some. This is just a small temporary fix to allow
for the new API changes that OpenAI are releasing without breaking any
other code.
---------
Co-authored-by: Jacob Swe <jswe@polencapital.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
# What
- This is to add test for faiss vector store with score threshold
- Issue: None
- Dependencies: None
- Tag maintainer: @rlancemartin, @eyurtsev
- Twitter handle: @MlopsJ
# What
- Use `logger` instead of using logging directly.
- Issue: None
- Dependencies: None
- Tag maintainer: @baskaryan
- Twitter handle: @MlopsJ
Refactored `requests.py`. The same as #7961, #8098, #8099.
requests.py is in the root code folder. This creates the
`langchain.requests: Requests` group on the API Reference navigation
ToC, on the same level as Chains and Agents, which is incorrect.
Refactoring:
- copied requests.py content into utils/requests.py
- added a backwards-compatibility ref in the original requests.py
- updated imports to the requests objects
@hwchase17, @baskaryan
Addresses #7578. `run()` can return dictionaries, Pydantic objects or
strings, so the type hints should reflect that. See the chain from
`create_structured_output_chain` for an example of a non-string return
type from `run()`.
I've updated the BaseLLMChain return type hint from `str` to `Any`,
although the differences between `run()` and `__call__()` now seem less
clear.
CC: @baskaryan
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Until now, hybrid search was limited to modules requiring external
services, such as Weaviate/Pinecone hybrid search. However, I have
developed a hybrid retriever that can merge a list of retrievers using
the [Reciprocal Rank
Fusion](https://plg.uwaterloo.ca/~gvcormac/cormacksigir09-rrf.pdf)
algorithm. This new approach, similar to Weaviate hybrid search, does
not require the initialization of any external service. A minimal
sketch of the algorithm follows this list.
- Dependencies: No
- Twitter handle: dayuanjian21687
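A minimal sketch of Reciprocal Rank Fusion itself (not the PR's implementation; `k=60` is the constant used in the linked paper):

```python
from collections import defaultdict

def reciprocal_rank_fusion(ranked_lists, k=60):
    """Merge several ranked lists of doc ids: score = sum of 1 / (k + rank)."""
    scores = defaultdict(float)
    for ranking in ranked_lists:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)

# Example: merge the outputs of two retrievers.
print(reciprocal_rank_fusion([["a", "b", "c"], ["b", "a", "d"]]))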
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description: Changed the "SELECT" and "UPDATE" intent check from "="
to "in".
- Issue: Based on my own testing, most LLMs (StarCoder, NeoGPT3, etc.)
don't return a single-word response ("SELECT" / "UPDATE"). With this
modification, we can accomplish the same output without curated prompt
engineering.
- Dependencies: None
- Tag maintainer: @baskaryan
- Twitter handle: @aditya_0290
Thank you for maintaining this library; keep up the good efforts.
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Stop sequences are useful if you are doing long-running completions and
need to early-out rather than running for the full max_length... not
only does this save inference cost on Replicate, it is also much faster
if you are going to truncate the output later anyway.
Other LLMs support stop sequences natively (e.g. OpenAI), but I didn't
see this for Replicate, so I'm adding it via their prediction cancel
method.
Housekeeping: I ran `make format` and `make lint`, no issues reported in
the files I touched.
I did update the replicate integration test and ran `poetry run pytest
tests/integration_tests/llms/test_replicate.py` successfully.
Finally, I am @tjaffri https://twitter.com/tjaffri for feature
announcement tweets... or if you could please tag @docugami
https://twitter.com/docugami we would really appreciate that :-)
Co-authored-by: Taqi Jaffri <tjaffri@docugami.com>
@rlancemartin
The modification includes:
* EtherscanLoader
* test_etherscan
* documentation ipynb
I have run the test, lint, format, and spell check. I do encounter a
linting error on the ipynb and am not sure how to address it:
```
docs/extras/modules/data_connection/document_loaders/integrations/Etherscan.ipynb:55: error: Name "null" is not defined [name-defined]
docs/extras/modules/data_connection/document_loaders/integrations/Etherscan.ipynb:76: error: Name "null" is not defined [name-defined]
Found 2 errors in 1 file (checked 1 source file)
```
- Description: The Etherscan loader uses the Etherscan API to load
transaction histories under specific accounts on Ethereum Mainnet.
- No dependency is introduced by this PR.
- Twitter handle: glazecl
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
The ChatGLM LLM integration will by default accumulate conversation
history (with_history=True) and send it to the ChatGLM backend API,
which is not expected in most cases. This PR sets with_history=False by
default; users should explicitly set llm.with_history=True to turn this
feature on. Related PRs: #8048, #7774
---------
Co-authored-by: mlot <limpo2000@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
My team recently faced an issue while using MSSQL and passing a schema
name.
We noticed that "SET search_path TO {self.schema}" is being called for
us, which is not a valid MSSQL query; it is specific to the PostgreSQL
dialect.
We were able to run it locally after this fix.
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Refactored `example_generator.py`. The same as #7961.
`example_generator.py` is in the root code folder. This creates the
`langchain.example_generator: Example Generator` group on the API
Reference navigation ToC, on the same level as `Chains` and `Agents`,
which is not correct.
Refactoring:
- moved `example_generator.py` content into
`chains/example_generator.py` (not into `utils/`, because
`example_generator` has dependencies on other LangChain classes; moving
it into `utilities/` also doesn't work)
- added a backwards-compatibility ref in the original
`example_generator.py`
@hwchase17
- **Description:** Simple change of the class that ContentHandler
inherits from. To create an object of type SagemakerEndpointEmbeddings,
the content_handler property must now be of type
EmbeddingsContentHandler, not ContentHandlerBase.
- **Twitter handle:** @Juanjo_Torres11
Co-authored-by: Bagatur <baskaryan@gmail.com>
Refactored `input.py`. The same as #7961, #8098, #8099.
input.py is in the root code folder. This creates the `langchain.input:
Input` group on the API Reference navigation ToC, on the same level as
Chains and Agents, which is incorrect.
Refactoring:
- copied the input.py file into utils/input.py
- added a backwards-compatibility ref in the original input.py
- changed several imports to the new ref
@hwchase17, @baskaryan
Description:
This PR adds embeddings for LocalAI
(https://github.com/go-skynet/LocalAI), a self-hosted OpenAI drop-in
replacement. As LocalAI can reuse OpenAI clients, it mostly follows the
lines of the OpenAI embeddings; however, when embedding documents it
just sends strings instead of tokens, since token support is
best-effort depending on the model being used in LocalAI. Sending
tokens is also tricky, as token ids can mismatch with the model, so
it's safer to just send strings in this case.
Partly related to: https://github.com/hwchase17/langchain/issues/5256
Dependencies: No new dependencies
Twitter: @mudler_it
---------
Signed-off-by: mudler <mudler@localai.io>
Co-authored-by: Bagatur <baskaryan@gmail.com>
**PR Description:**
This pull request introduces several enhancements and new features to
the `CubeSemanticLoader`. The changes include the following:
1. Added imports for the `json` and `time` modules.
2. Added new constructor parameters: `load_dimension_values`,
`dimension_values_limit`, `dimension_values_max_retries`, and
`dimension_values_retry_delay`.
3. Updated the class documentation with descriptions for the new
constructor parameters.
4. Added a new private method `_get_dimension_values()` to retrieve
dimension values from Cube's REST API.
5. Modified the `load()` method to load dimension values for string
dimensions if `load_dimension_values` is set to `True`.
6. Updated the API endpoint in the `load()` method from the base URL to
the metadata endpoint.
7. Refactored the code to retrieve metadata from the response JSON.
8. Added the `column_member_type` field to the metadata dictionary to
indicate if a column is a measure or a dimension.
9. Added the `column_values` field to the metadata dictionary to store
the dimension values retrieved from Cube's API.
10. Modified the `page_content` construction to include the column title
and description instead of the table name, column name, data type,
title, and description.
These changes improve the functionality and flexibility of the
`CubeSemanticLoader` class by allowing the loading of dimension values
and providing more detailed metadata for each document.
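A hedged usage sketch with the new constructor parameters (names taken from the change list above; the URL and token are placeholders):

```python
from langchain.document_loaders import CubeSemanticLoader

loader = CubeSemanticLoader(
    cube_api_url="https://<deployment>.cubecloud.dev/cubejs-api/v1/meta",
    cube_api_token="<token>",
    load_dimension_values=True,       # new: fetch values for string dimensions
    dimension_values_limit=10000,     # new: cap values per dimension
    dimension_values_max_retries=10,  # new: retry budget against Cube's REST API
    dimension_values_retry_delay=3,   # new: seconds between retries
)
docs = loader.load()
```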
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Refactored `formatting.py`. The same as #7961, #8098, #8099.
formatting.py is in the root code folder. This creates the
`langchain.formatting: Formatting` group on the API Reference
navigation ToC, on the same level as Chains and Agents, which is
incorrect.
Refactoring:
- moved formatting.py content into utils/formatting.py
- did not add a backwards-compatibility ref in the original
formatting.py; it seems unnecessary
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description: In llms/__init__.py, the key name for the MLflow AI
Gateway is wrong. It should be mlflow-ai-gateway.
- Issue: NA
- Dependencies: NA
- Tag maintainer: @hwchase17, @baskaryan
- Twitter handle: na
Without this fix, when we run the code for the MLflow AI Gateway, we
get the error below:
ValueError: Loading mlflow-ai-gateway LLM not supported
---------
Co-authored-by: rajib76 <rajib76@yahoo.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Fixes an issue with the GitHub tool where the API returned special
objects but the tool was expecting dictionaries.
Also added proper docstrings to the GitHubAPIWrapper methods and a
(very basic) integration test.
Maintainer responsibilities:
- Agents / Tools / Toolkits: @hinthornw
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
# What
- Add faiss vector search test for score threshold
- Fix failing faiss vector search test; filtering with a list value was
wrong.
- Issue: None
- Dependencies: None
- Tag maintainer: @rlancemartin, @eyurtsev
- Twitter handle: @MlopsJ
Codespaces and the devcontainer were broken by the [repo
restructure](https://github.com/langchain-ai/langchain/discussions/8043).
- Description: Add libs/langchain to the container so it can be built
without error.
- Issue: -
- Dependencies: -
- Tag maintainer: @hwchase17 @baskaryan
- Twitter handle: @finnless
The failed build log says:
```
#10 [langchain-dev-dependencies 2/2] RUN poetry install --no-interaction --no-ansi --with dev,test,docs
#10 sha256:e850ee99fc966158bfd2d85e82b7c57244f47ecbb1462e75bd83b981a56a1929
2023-07-23 23:30:33.692Z: #10 0.827
#10 0.827 Directory libs/langchain does not exist
2023-07-23 23:30:33.738Z: #10 ERROR: executor failed running [/bin/sh -c poetry install --no-interaction --no-ansi --with dev,test,docs]: exit code: 1
```
The new pyproject.toml imports from libs/langchain:
77bf75c236/pyproject.toml (L14-L16)
But libs/langchain is never added to the dev.Dockerfile:
77bf75c236/libs/langchain/dev.Dockerfile (L37-L39)
Hopefully this doesn't come across as nitpicky! That isn't the
intention. I only noticed it because I enjoy reading the documentation,
and when I hit a mental road bump it is usually due to a missing word
or something =)
@baskaryan
This bugfix PR adds kwargs support to Baseten model invocations so that
e.g. the following script works properly:
```python
from langchain import LLMChain, PromptTemplate
from langchain.llms import Baseten
from langchain.memory import ConversationBufferWindowMemory

prompt = PromptTemplate.from_template("{history}\nHuman: {human_input}")  # example prompt
chatgpt_chain = LLMChain(
    llm=Baseten(model="MODEL_ID"),
    prompt=prompt,
    verbose=False,
    memory=ConversationBufferWindowMemory(k=2),
    llm_kwargs={"max_length": 4096},
)
```
Unexpectedly changed at
6792a3557d
I guess `allowed_search_types` was unexpectedly changed in
6792a3557d,
so that we cannot specify `similarity_score_threshold` here.
```python
class VectorStoreRetriever(BaseRetriever):
    ...
    allowed_search_types: ClassVar[Collection[str]] = (
        "similarity",
        "similarityatscore_threshold",
        "mmr",
    )

    @root_validator()
    def validate_search_type(cls, values: Dict) -> Dict:
        """Validate search type."""
        search_type = values["search_type"]
        if search_type not in cls.allowed_search_types:
            raise ValueError(...)
        if search_type == "similarity_score_threshold":
            ...  # UNREACHABLE CODE
```
VectorStores Maintainers: @rlancemartin @eyurtsev
- Description: Get the SQL command generated by the SQLDatabaseChain
directly, without executing it in the DB engine.
- Issue: #4853
- Tag maintainer: @hinthornw, @baskaryan
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
New HTML loader that asynchronously loads a list of URLs.
New transformer using [HTML2Text](https://github.com/Alir3z4/html2text/)
to convert HTML into clean, easy-to-read plain ASCII text (valid
Markdown).
In certain zero-shot scenarios, the existing stateful language model
can unintentionally send/accumulate `.history`.
This commit adds the `with_history` option to ChatGLM, allowing users
to control the behavior of `.history` and prevent unintended
accumulation.
Possible reviewers: @hwchase17 @baskaryan @mlot
Refer to the discussion in this thread:
https://twitter.com/wey_gu/status/1681996149543276545?s=20
The `sql_database.py` is unnecessarily placed in the root code folder.
Similar code is usually placed in `utilities/`.
As a byproduct of this placement, sql_database is [placed at the top
level of classes in the API
Reference](https://api.python.langchain.com/en/latest/api_reference.html#module-langchain.sql_database),
which is confusing and not correct.
- moved `sql_database.py` from the root code folder to `utilities/`
@baskaryan
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Fixed the bug causing: `TypeError: generate() got multiple values for
keyword argument 'stop_sequences'`
```python
res = await self.async_client.generate(
prompt,
**self._default_params,
stop_sequences=stop,
**kwargs,
)
```
The above throws an error because stop_sequences is also in
self._default_params. A sketch of one way to avoid the duplicate
keyword is below.
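A hypothetical helper illustrating the idea (not necessarily the merged fix):

```python
async def generate_without_duplicates(async_client, prompt, stop, default_params, **kwargs):
    # Merge defaults and overrides first, then set stop_sequences exactly once,
    # so `generate` never receives the same keyword argument twice.
    params = {**default_params, **kwargs}
    params["stop_sequences"] = stop
    return await async_client.generate(prompt, **params)
```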
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
I've extended async API support to Qdrant's local mode. It is faked,
but it allows prototyping without spinning up a container. The tests
were improved to cover the in-memory case as well.
@baskaryan @rlancemartin @eyurtsev @agola11
Redis cache currently stores model outputs as strings. Chat generations
have Messages, which contain more information than just a string. Until
the Redis cache supports fully storing messages, the cache should not
interact with chat generations.
Streaming support is useful if you are doing long-running completions or
need interactivity e.g. for chat... adding it to replicate, using a
similar pattern to other LLMs that support streaming.
Housekeeping: I ran `make format` and `make lint`, no issues reported in
the files I touched.
I did update the Replicate integration test but ran into some issues,
specifically:
1. The original test was failing for me due to the model argument not
being specified... perhaps this test is not regularly run? I fixed it
by adding a call to the lightweight hello world model, which should not
be burdensome for Replicate infra.
2. I couldn't get the `make integration_tests` command to pass... a lot
of failures in other integration tests due to missing dependencies...
however, I did make sure the particular test file I updated passes, by
running `poetry run pytest
tests/integration_tests/llms/test_replicate.py`.
Finally, I am @tjaffri https://twitter.com/tjaffri for feature
announcement tweets... or if you could please tag @docugami
https://twitter.com/docugami we would really appreciate that :-)
Tagging model maintainers @hwchase17 @baskaryan
Thank for all the awesome work you folks are doing.
---------
Co-authored-by: Taqi Jaffri <tjaffri@docugami.com>
## Description
This PR adds a graph class and an openCypher QA chain to work with the
Amazon Neptune database.
## Dependencies
`requests` which is included in the LangChain dependencies.
## Maintainers for Review
@krlawrence
@baskaryan
### Twitter handle
pjain7
`math_utils.py` is in the root code folder. This creates the
`langchain.math_utils: Math Utils` group on the API Reference
navigation ToC, on the same level as `Chains` and `Agents`, which is
not correct.
Refactoring:
- created the `utils/` folder
- moved `math_utils.py` to `utils/math.py`
- moved `utils.py` to `utils/utils.py`
- split `utils.py` into `utils.py, env.py, strings.py`
- added module description
@baskaryan
- Description: fix to avoid rdflib warnings when concatenating URIs and
strings to create the text snippet for the knowledge graph's schema.
@marioscrock pointed this out in a comment related to #7165
- Issue: None, but the problem was mentioned as a comment in #7165
- Dependencies: None
- Tag maintainer: Related to memory -> @hwchase17, maybe @baskaryan as
it is a fix
Integrating Portkey, which adds production features like caching,
tracing, tagging, retries, etc. to langchain apps.
- Dependencies: None
- Twitter handle: https://twitter.com/portkeyai
- test_portkey.py added for tests
- example notebook added in new utilities folder in modules
Also fixed a bug with OpenAIEmbeddings where headers weren't being
passed.
cc @baskaryan
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description: this change will add the google place ID of the found
location to the response of the GooglePlacesTool
- Issue: Not applicable
- Dependencies: no dependencies
- Tag maintainer: @hinthornw
- Twitter handle: Not applicable
---------
Co-authored-by: Jiří Moravčík <jiri.moravcik@gmail.com>
Co-authored-by: Jan Čurn <jan.curn@gmail.com>
- Description: Added the ability to define the OpenAI model.
- Issue: Currently the Doctran instance uses gpt-4 by default; this
does not work if the user has no access to gpt-4.
- Tag maintainer: @rlancemartin, @eyurtsev, @baskaryan
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
BedrockEmbeddings does not have endpoint_url, so switching to a custom
endpoint is not possible. I have access to a Bedrock custom endpoint
and cannot use BedrockEmbeddings.
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description: Added a parameter to VectorStoreRetrieverMemory which
filters the input by key when constructing the document to buffer for
the vector store. This feature is helpful if you have certain inputs,
apart from VectorStoreRetrieverMemory's own memory_key, that need to be
ignored, e.g., when using combined memory we might need to filter out
the memory_key of the other memory. Please see the issue.
- Issue: #7695
- Tag maintainer: @rlancemartin, @eyurtsev
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
**Description:** Golden Query is a wrapper on top of the [Golden Query
API](https://docs.golden.com/reference/query-api) which enables
programmatic access to query results on entities across Golden's
Knowledge Base. For more information about Golden API, please see the
[Golden API Getting
Started](https://docs.golden.com/reference/getting-started) page.
**Issue:** None
**Dependencies:** requests (already present in the project)
**Tag maintainer:** @hinthornw
Signed-off-by: Constantin Musca <constantin.musca@gmail.com>
- Description: Adding code to set the pandas DataFrame option to
display all columns. Otherwise, some data gets truncated (pandas puts a
"..." in the middle and shows only the first 4 and last 4 columns), and
the LLM doesn't realize it isn't getting the full data. The default is
8 columns, so this helps DataFrames wider than that (see the sketch
after this list).
- Issue: none
- Dependencies: none
- Tag maintainer: @hinthornw
- Twitter handle: none
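The setting in question is pandas' display option; a minimal sketch:

```python
import pandas as pd

# Show every column instead of truncating wide frames with "...".
pd.set_option("display.max_columns", None)
```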
## Background
With the addition of email and calendar tools, LangChain is continuing
to complete its functionality to automate business processes.
## Challenge
One of the pieces of business functionality that LangChain currently
doesn't have is the ability to search for flights and travel in order to
book business travel.
## Changes
This PR implements an integration with the
[Amadeus](https://developers.amadeus.com/) travel search API for
LangChain, enabling seamless search for flights with a single
authentication process.
## Who can review?
@hinthornw
## Appendix
@tsolakoua and @minjikarin, I utilized your
[amadeus-python](https://github.com/amadeus4dev/amadeus-python) library
extensively. Given the rising popularity of LangChain and similar AI
frameworks, the convergence of libraries like amadeus-python and tools
like this one is likely. So, I wanted to keep you updated on our
progress.
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description: Add verbose support for the extraction_chain
- Issue: Fixes #7982
- Dependencies: NA
- Twitter handle: sheikirfanbasha
@hwchase17 and @agola11
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
## Description
Added a doc about the [Datadog APM integration for
LangChain](https://github.com/DataDog/dd-trace-py/pull/6137).
Note that the integration is on `ddtrace`'s end and so no code is
introduced/required by this integration into the langchain library. For
that reason I've refrained from adding an example notebook (although
I've added setup instructions for enabling the integration in the doc)
as no code is technically required to enable the integration.
Tagging @baskaryan as reviewer on this PR, thank you very much!
## Dependencies
Datadog APM users will need to have `ddtrace` installed, but the
integration is on `ddtrace` end and so does not introduce any external
dependencies to the LangChain project.
Co-authored-by: Bagatur <baskaryan@gmail.com>
Adds Document Loader support for
[Geopandas.GeoDataFrames](https://geopandas.org/)
Example:
- [x] stub out `GeoDataFrameLoader` class
- [x] stub out integration tests
- [ ] Experiment with different geometry text representations
- [ ] Verify CRS is successfully added in metadata
- [ ] Test effectiveness of searches on geometries
- [ ] Test with different geometry types (point, line, polygon with
multi-variants).
- [ ] Add documentation
---------
Co-authored-by: Lance Martin <lance@langchain.dev>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Lance Martin <122662504+rlancemartin@users.noreply.github.com>
Removing the **kwargs argument from the add_texts method in the
DeepLake vectorstore, as it confuses users and doesn't fail when a user
passes incorrect parameters.
Also added a small test to ensure the change is applied correctly.
Could you please take a look, @rlancemartin, @eyurtsev? This is a small
PR.
Thanks so much!
- Adds integration for MLflow AI Gateway (this will be shipped in MLflow
2.5 this week).
Manual testing:
```sh
# Move to mlflow repo
cd /path/to/mlflow
# install langchain
pip install git+https://github.com/harupy/langchain.git@gateway-integration
# launch gateway service
mlflow gateway start --config-path examples/gateway/openai/config.yaml
# Then, run the examples in this PR
```
Fixed the missing "content" field in Azure.
Added a check for "content" in _dict (missing for the Azure
api=2023-07-01-preview).
@baskaryan
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description: VectorStoreRetriever -> similarity_score_threshold with
a search_type of "similarity_score_threshold" is not working, due to
the following two minor issues.
- Issue: 1. In line 237 of `vectorstores/base.py`, "score_threshold" is
passed to `_similarity_search_with_relevance_scores` in the kwargs,
while score_threshold is not a valid argument of this method. As a fix,
score_threshold is popped from kwargs before
`_similarity_search_with_relevance_scores` is called. 2. In lines 596
to 607 of `vectorstores/pgvector.py`, distance_strategy is checked
against the string in the Enum. However, self.distance_strategy
resolves to the distance_strategy property from line 316, which returns
the callable function. To solve this, self.distance_strategy is changed
to self._distance_strategy to avoid calling the property method.
- Dependencies: No
- Tag maintainer: @rlancemartin, @eyurtsev
- Twitter handle: No
---------
Co-authored-by: Bin Wang <bin@arcanum.ai>
- Description: exposes the ResultItem DocumentAttributes as document
metadata with key 'document_attributes' and refactors
AmazonKendraRetriever by providing a ResultItem base class in order to
avoid duplicate code;
- Tag maintainer: @3coins @hupe1980 @dev2049 @baskaryan
- Twitter handle: wilsonleao
### Why?
Some use cases depend on specific document attributes returned by the
retriever in order to improve the quality of the overall completion and
adjust what will be displayed to the user. For the sake of consistency,
we need to expose the DocumentAttributes as document metadata so we are
sure that we are using the values returned by the kendra request issued
by langchain.
I would appreciate your review @3coins @hupe1980 @dev2049. Thank you in
advance!
### References
- [Amazon Kendra
DocumentAttribute](https://docs.aws.amazon.com/kendra/latest/APIReference/API_DocumentAttribute.html)
- [Amazon Kendra
DocumentAttributeValue](https://docs.aws.amazon.com/kendra/latest/APIReference/API_DocumentAttributeValue.html)
---------
Co-authored-by: Piyush Jain <piyushjain@duck.com>
- Description: check title and excerpt separately for page_content so
that if title is empty but excerpt is present, the page_content will
only contain the excerpt
- Issue: #7782
- Tag maintainer: @3coins @baskaryan
- Twitter handle: wilsonleao
** This should land Monday the 17th **
Chroma is upgrading from `0.3.29` to `0.4.0`. `0.4.0` is easier to
build, more durable, faster, smaller, and more extensible. This comes
with a few changes:
1. A simplified and improved client setup. Instead of having to remember
weird settings, users can just do `EphemeralClient`, `PersistentClient`
or `HttpClient` (the underlying direct `Client` implementation is also
still accessible)
2. We migrated data stores away from `duckdb` and `clickhouse`. This
changes the API for the `PersistentClient` that used to reference
`chroma_db_impl="duckdb+parquet"`. Now we simply set
`is_persistent=True`; `is_persistent` is set to `True` for you if you
use `PersistentClient`.
3. Because we migrated away from `duckdb` and `clickhouse`, users need
to migrate their data into the new layout and schema. Chroma is
committed to providing extensive notification and tooling around any
schema and data migrations (for example, this PR!). After upgrading to
`0.4.0`, if users try to access data that was stored under the previous
regime, the system will throw an `Exception` and instruct them how to
use the migration assistant to migrate their data. The migration
assistant is a pip-installable CLI: `pip install chroma_migrate`,
runnable by calling `chroma_migrate`. A sketch of the new client setup
is below.
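A minimal sketch of the simplified client setup, based on the client names listed above (paths and hosts are placeholders):

```python
import chromadb

in_memory = chromadb.EphemeralClient()                     # transient, in-process
on_disk = chromadb.PersistentClient(path="./chroma_db")    # replaces chroma_db_impl="duckdb+parquet"
remote = chromadb.HttpClient(host="localhost", port=8000)  # talk to a Chroma server
```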
-- TODO ADD here is a short video demonstrating how it works.
Please reference the readme at
[chroma-core/chroma-migrate](https://github.com/chroma-core/chroma-migrate)
to see a full write-up of our philosophy on migrations as well as more
details about this particular migration.
Please direct any users facing issues upgrading to our Discord channel
called
[#get-help](https://discord.com/channels/1073293645303795742/1129200523111841883).
We have also created a [email
listserv](https://airtable.com/shrHaErIs1j9F97BE) to notify developers
directly in the future about breaking changes.
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description: version check to make sure chromadb >= 0.4.0 does not
throw an error, and to use the default sqlite persistence engine when
the directory is set
- Issue: #7887
For attention of
- DataLoaders / VectorStores / Retrievers: @rlancemartin, @eyurtsev
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
This PR
- fixes the `similarity_search_by_vector` example: makes the code run
and adds the example to mirror `similarity_search`
- reverts back to Chroma from FAISS to remove sharp edges and create a
happy path for new developers: (1) real metadata filtering, (2)
expected functionality like `update`, `delete`, etc., to serve beyond
the most trivial use cases
@hwchase17
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description: Modified the code to return the document id from the
Redis document search as metadata.
- Issue: fixes retrieval of the id as metadata (as a string)
- Tag maintainer: @rlancemartin, @eyurtsev
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description: This is an update to a previously published notebook.
The Sales Agent now has access to tools, and this notebook shows how to
use a product knowledge base to reduce hallucinations and act as a
better salesperson!
- Issue: N/A
- Dependencies: `chromadb openai tiktoken`
- Tag maintainer: @baskaryan @hinthornw
- Twitter handle: @FilipMichalsky
Moving to the latest non-preview Azure OpenAI API version, 2023-05-15.
The previous version, 2023-03-15-preview, doesn't have support, an SLA,
etc. For instance, the OpenAI SDK has moved to this version:
https://github.com/openai/openai-python/releases/tag/v0.27.7
@baskaryan
Description:
Currently, Zilliz only supports dedicated clusters using a username and
password pair for connection. Serverless clusters can be connected to
using API keys ([see the official note for
details](https://docs.zilliz.com/docs/manage-cluster-credentials)), so
I added an API key (token) description to the Zilliz docs to make this
more obvious and convenient for this group of users. No changes were
made to code.
---------
Co-authored-by: Robin.Wang <3Jg$94sbQ@q1>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Azure GPT-4 models can't be accessed via the LLM model. It's easy to
miss that, and a lot of discussions about it are on the Internet.
Therefore I added a comment to the Azure LLM docs that mentions this
and points to the Azure Chat OpenAI docs.
@baskaryan
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Description: This PR adds the option to retrieve scores and explanations
in the WeaviateHybridSearchRetriever. This feature improves the
usability of the retriever by allowing users to understand the scoring
logic behind the search results and further refine their search queries.
Issue: This PR is a solution to the issue #7855
Dependencies: This PR does not introduce any new dependencies.
Tag maintainer: @rlancemartin, @eyurtsev
I have included a unit test for the added feature, ensuring that it
retrieves scores and explanations correctly. I have also included an
example notebook demonstrating its use.
Here I am adding documentation for the `PromptLayerCallbackHandler`.
When we created the initial PR for the callback handler the docs were
causing issues, so we merged without the docs.
1. Added the metadata filter for documents.
2. Added the text page_content filter for documents.
3. Fixed a bug in similarity_search_with_score.
Improvements and a bug fix for AwaDB.
Fixes the conflict with https://github.com/hwchase17/langchain/pull/7840
@rlancemartin @eyurtsev Thanks!
---------
Co-authored-by: vincent <awadb.vincent@gmail.com>
Motivation: it seems that when dealing with a long context and a "big"
number of relevant documents, we must avoid using the out-of-the-box
score ordering from vector stores.
See: https://arxiv.org/pdf/2306.01150.pdf
So I added an additional parameter that allows you to reorder the
retrieved documents, so we can work around this performance
degradation. The reordering respects the original search scores but
accommodates the least relevant documents in the middle of the context.
This seems to be common to all different architectures, so I think we
need a good generic way to implement this reordering and run some tests
on our already-running retrievers.
It could be that my approach is not the best one from the architecture
point of view; happy to have a discussion about that. For me, this was
the best place to introduce the change and start retesting different
implementations. A sketch of the intended usage follows.
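A minimal sketch, assuming the reordering is exposed as a `LongContextReorder` document transformer (the documents here are placeholders, already ranked by relevance):

```python
from langchain.document_transformers import LongContextReorder
from langchain.schema import Document

docs = [Document(page_content=f"doc {i}") for i in range(6)]
# The most relevant documents end up at the beginning and end of the list;
# the least relevant land in the middle of the context, per the paper above.
reordered = LongContextReorder().transform_documents(docs)
```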
@rlancemartin, @eyurtsev
---------
Co-authored-by: Lance Martin <lance@langchain.dev>
Still don't have good "how to's", and the guides / examples section
could be further pruned and improved, but this PR adds a couple examples
for each of the common evaluator interfaces.
- [x] Example docs for each implemented evaluator
- [x] "how to make a custom evalutor" notebook for each low level APIs
(comparison, string, agent)
- [x] Move docs to modules area
- [x] Link to reference docs for more information
- [X] Still need to finish the evaluation index page
- ~[ ] Don't have good data generation section~
- ~[ ] Don't have good how to section for other common scenarios / FAQs
like regression testing, testing over similar inputs to measure
sensitivity, etc.~
This new version fixes the "Verified Sources" display that got broken.
Instead of displaying the full URL, it shows the title of the page the
source is from.
- Description: Add a BM25 retriever that does not need Elasticsearch (a
usage sketch follows this list)
- Dependencies: rank_bm25 (if it is not installed, it will be installed
using pip, just like TFIDFRetriever does)
- Tag maintainer: @rlancemartin, @eyurtsev
- Twitter handle: DayuanJian21687
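A minimal usage sketch, assuming the retriever is exposed as `BM25Retriever`:

```python
from langchain.retrievers import BM25Retriever

# Build the index purely in-process; no Elasticsearch required.
retriever = BM25Retriever.from_texts(["foo", "bar", "world hello foo bar"])
docs = retriever.get_relevant_documents("foo")
```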
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Description:
Add an LLM for the ChatGLM-6B & ChatGLM2-6B API.
Related issues:
Will langchain support ChatGLM? #4766
Add support for self-hosted models like ChatGLM or transformer models #1780
Dependencies:
No extra library install required. It wraps API calls to a
ChatGLM(2)-6B server (started with api.py), so an API endpoint is
required to run.
Tag maintainer: @mlot
Any comments on this PR would be appreciated.
---------
Co-authored-by: mlot <limpo2000@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
# Support Redis Sentinel database connections
This PR adds support for connecting not only to Redis standalone
servers but to High Availability replication sets too
(https://redis.io/docs/management/sentinel/).
Redis replica sets have one master allowing writes and 2+ replicas with
read-only access to the data. The additional Redis Sentinel instances
monitor all servers and reconfigure the read-write master on the fly if
it becomes unavailable.
Therefore all connections must be made through the Sentinels, which are
queried for the current master to obtain a read-write connection. This
PR adds basic support to also allow a Redis connection URL specifying a
Sentinel as the Redis connection; a sketch is below.
The Redis documentation and the Jupyter notebook with Redis examples
are updated to mention how to connect to a Redis replica set with
Sentinels.
Remark: I did not find existing test cases for Redis server connections
to add new cases to, so I tested the new utility class locally with
different kinds of setups to make sure different connection URLs work
as expected. But no test case is included as part of this PR.
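A sketch of what a Sentinel connection could look like, assuming the new URL scheme is `redis+sentinel://` and the master set is named "mymaster" (both assumptions; the host and port are placeholders):

```python
from langchain.embeddings import FakeEmbeddings
from langchain.vectorstores.redis import Redis

rds = Redis.from_texts(
    texts=["hello"],
    embedding=FakeEmbeddings(size=8),
    # host:port points at a Sentinel, not the master itself
    redis_url="redis+sentinel://sentinel-host:26379/mymaster/0",
)
```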
- [Xorbits](https://doc.xorbits.io/en/latest/) is an open-source
computing framework that makes it easy to scale data science and
machine learning workloads in parallel. Xorbits can leverage multiple
cores or GPUs to accelerate computation on a single machine, or scale
out to thousands of machines to support processing terabytes of data.
- This PR added support for the Xorbits agent, which allows langchain
to interact with Xorbits Pandas dataframes and Xorbits Numpy arrays.
- Dependencies: This change requires the Xorbits library to be
installed in order to be used:
`pip install xorbits`
- Request for review: @hinthornw
- Twitter handle: https://twitter.com/Xorbitsio
- Update the negative criterion descriptions to prevent bad predictions
- Add support for normalizing the string distance
- Fix potential json deserializing into float issues in the example
mapper
Starting over from #5654 because I utterly borked the poetry.lock file.
Adds new parameters to the MWDumpLoader class:
* skip_redirects (bool): tells the loader to skip articles that
redirect to other articles. False by default.
* stop_on_error (bool): tells the parser to skip any page that causes a
parse error. True by default.
* namespaces (List[int]): tells the parser which namespaces to parse.
Contains namespaces from -2 to 15 by default.
Default values are chosen to preserve backwards compatibility.
Sample dump XML and full unit test coverage (with extended tests that
pass!) are also included!
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
**Issue**
When I use conda to install langchain, a dependency error is thrown:
"ModuleNotFoundError: No module named 'langsmith'"
**Updated**
Run `pip install langsmith` when installing langchain with conda
Co-authored-by: xaver.xu <xavier.xu@batechworks.com>
- New pin-to-side button. This functionality allows you to search the
docs while asking the AI questions
- Fixed the search bar in Firefox that wouldn't detect a mouse click
- Fixes and improvements overall in the model's performance
Description: Added debugging output in DirectoryLoader to identify the
file being processed.
Issue: [Need a trace or debug feature in Lanchain DirectoryLoader
#7725](https://github.com/hwchase17/langchain/issues/7725)
Dependencies: No additional dependencies are required.
Tag maintainer: @rlancemartin, @eyurtsev
This PR enhances the DirectoryLoader with debugging output to help
diagnose issues when loading documents. This new feature does not add
any dependencies and has been tested on a local machine.
Inspired by #5550, I implemented full async API support in Qdrant. The
docs were extended to mention the existence of asynchronous operations
in LangChain. I also used that chance to restructure the tests of
Qdrant and provide a suite of tests for the async version. The async
API requires the gRPC protocol to be enabled; thus, it doesn't work in
local mode yet, but we're considering including support to be
consistent.
Integrate [Rockset](https://rockset.com/docs/) as a document loader.
Issue: None
Dependencies: Nothing new (rockset's dependency was already added
[here](https://github.com/hwchase17/langchain/pull/6216))
Tag maintainer: @rlancemartin
I have added a test for the integration and an example notebook showing
its use. I ran `make lint` and everything looks good.
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
This pull request adds an ElasticsearchDatabaseChain for interacting
with an analytics database, in the manner of the SQLDatabaseChain.
Maintainer: @samber
Twitter handle: samuelberthe
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description: This allows passing auth objects in request wrappers.
Currently, we can handle auth by editing headers in the
RequestsWrappers, but more complex auth methods, such as Kerberos, could
be handled better by using existing functionality within the requests
library. There are many authentication options supported both natively
and by extensions, such as requests-kerberos or requests-ntlm.
- Issue: Fixes #7542
- Dependencies: none
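A hedged sketch of what this enables (the `auth` constructor parameter is taken from this PR's description; any auth object accepted by `requests` should work the same way):
```python
from requests.auth import HTTPBasicAuth

from langchain.requests import TextRequestsWrapper

# pass a requests-native auth object instead of hand-crafting headers;
# for Kerberos you would use requests_kerberos.HTTPKerberosAuth instead
requests_wrapper = TextRequestsWrapper(auth=HTTPBasicAuth("user", "passwd"))
text = requests_wrapper.get("https://httpbin.org/basic-auth/user/passwd")
```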
Co-authored-by: eric.speidel@de.bosch.com <eric.speidel@de.bosch.com>
- Add `langchain.llms.Tongyi` for text completion, with examples of using
the Tongyi Text API,
- Add system tests.
Note async completion for the Text API is not yet supported and will be
included in a future PR.
Dependencies: dashscope. It must be installed manually since it is not
needed by everyone.
Happy for feedback on any aspect of this PR @hwchase17 @baskaryan.
Multiple people have asked in #5081 for a way to limit the documents
returned from an AzureCognitiveSearchRetriever. This PR adds the `top_n`
parameter to allow that.
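A minimal sketch of the new parameter (service, index and key values are placeholders; the parameter name follows this PR's description and may differ in the merged code):
```python
from langchain.retrievers import AzureCognitiveSearchRetriever

retriever = AzureCognitiveSearchRetriever(
    service_name="my-search-service",  # placeholder
    index_name="my-index",             # placeholder
    api_key="...",                     # placeholder
    top_n=3,  # limit the number of returned documents
)
docs = retriever.get_relevant_documents("what is langchain?")
```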
Twitter handle:
[@UmerHAdil](twitter.com/umerHAdil)
Fix for the Serializable class to include name, used in FileCallbackHandler,
same as issue #7524.
Description: Fixes the Serializable class to include 'name' attribute
(class_name) in the dict created,
This is used in Callbacks, specifically the StdOutCallbackHandler,
FileCallbackHandler.
Issue: As described in issue #7524
Dependencies: None
Tag maintainer: Since this is related to the callback module, tagging
@agola11 @idoru
Comments:
Glad to see issue #7524 fixed in pull #6124, but the same place in
FileCallbackHandler was missed.
When a custom Embeddings object is set, embed all given texts in a batch
instead of passing them through individually. Any code calling add_texts
can then appropriately size the chunks of texts that are passed through
to take full advantage of the hardware it's running on.
Fixes #6198
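Sketch of the before/after (not the exact diff; `OpenAIEmbeddings` stands in for any custom `Embeddings` implementation):
```python
from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
texts = ["doc one", "doc two", "doc three"]

# before: one model invocation per text
vectors = [embeddings.embed_query(t) for t in texts]

# after: a single batched invocation through the Embeddings interface
vectors = embeddings.embed_documents(texts)
```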
ElasticKnnSearch.from_texts is actually ElasticVectorSearch.from_texts
and throws because it calls ElasticKnnSearch constructor with the wrong
arguments.
Now ElasticKnnSearch has its own from_texts, which constructs a proper
ElasticKnnSearch.
---------
Co-authored-by: Charles Parker <charlesparker@FiltaMacbook.local>
Co-authored-by: Bagatur <baskaryan@gmail.com>
## Description
This PR addresses a bug in the RecursiveUrlLoader class where absolute
URLs were being treated as relative URLs, causing malformed URLs to be
produced. The fix involves using the urljoin function from the
urllib.parse module to correctly handle both absolute and relative URLs.
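The behaviour the fix relies on is standard-library `urljoin`, which joins relative references but passes absolute URLs through untouched:
```python
from urllib.parse import urljoin

base = "https://example.com/docs/page.html"
print(urljoin(base, "child.html"))           # https://example.com/docs/child.html
print(urljoin(base, "/root.html"))           # https://example.com/root.html
print(urljoin(base, "https://other.com/x"))  # absolute URL is left unchanged
```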
@rlancemartin @eyurtsev
---------
Co-authored-by: Lance Martin <lance@langchain.dev>
The existing PlaywrightURLLoader load() function uses a synchronous
browser, which is not compatible with Jupyter.
This PR adds a sister function aload() which can be run inside a
notebook.
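Usage inside a notebook then looks roughly like this (the URL is illustrative; a notebook already runs an event loop, so the async variant can be awaited directly):
```python
from langchain.document_loaders import PlaywrightURLLoader

loader = PlaywrightURLLoader(urls=["https://python.langchain.com"])

docs = await loader.aload()
```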
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Mainline the tracer to avoid calling feedback before run is posted.
Chose a bool over `max_workers` arg for configuring since we don't want
to support > 1 for now anyway. At some point may want to manage the pool
ourselves (ordering only really matters within a run and with parent
runs)
# Browserless
Added support for Browserless' `/content` endpoint as a document loader.
### About Browserless
Browserless is a cloud service that provides access to headless Chrome
browsers via a REST API. It allows developers to automate Chromium in a
serverless fashion without having to configure and maintain their own
Chrome infrastructure.
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Lance Martin <lance@langchain.dev>
With AzureOpenAI openai_api_type defaulted to "azure", the logic in
utils' get_from_dict_or_env() function triggered by the root validator
never looks to environment for the user's runtime openai_api_type
values. This inhibits folks using token-based auth, or really any auth
model other than "azure."
By removing the "default" value, this allows environment variables to be
pulled at runtime for the openai_api_type and thus enables the other
api_types which are expected to work.
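With the default removed, the environment value wins at runtime; for example (`azure_ad` is one of OpenAI's documented api_type values for token-based auth):
```python
import os

# no hard-coded "azure" default means this is actually honored now
os.environ["OPENAI_API_TYPE"] = "azure_ad"  # token-based (Azure AD) auth
```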
---------
Co-authored-by: Ebo <mebstyne@microsoft.com>
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
This PR is aimed at enhancing the clarity of the documentation in the
langchain project.
**Description**:
In the graphql.ipynb file, I have removed the unnecessary 'llm' argument
from the initialization process of the GraphQL tool (of type
_EXTRA_OPTIONAL_TOOLS). The 'llm' argument is not required for this
process. Its presence could potentially confuse users. This modification
simplifies the understanding of tool initialization and minimizes
potential confusion.
**Issue**: Not applicable, as this is a documentation improvement.
**Dependencies**: None.
**I kindly request a review from the following maintainer**: @hinthornw,
who is responsible for Agents / Tools / Toolkits.
No new integration is being added in this PR, hence no need for a test
or an example notebook.
Please see the changes for more detail and let me know if any further
modification is necessary.
- Migrate from deprecated langchainplus_sdk to `langsmith` package
- Update the `run_on_dataset()` API to use an eval config
- Update a number of evaluators, as well as the loading logic
- Update docstrings / reference docs
- Update tracer to share single HTTP session
Sometimes the score returned by chatgpt looks like 'Response
example\nScore: 90 (fully answers the question, but could provide more
detail on the specific error message)'.
Because the score contains more than just numbers, parsing raises a
ValueError. Updating the RegexParser pattern from `.*` to `\d*` lets us
ignore the text after the number.
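A quick illustration of the difference:
```python
import re

text = "Score: 90 (fully answers the question, but could provide more detail)"

# ".*" greedily captures the trailing commentary along with the number...
print(re.search(r"Score: (.*)", text).group(1))
# -> "90 (fully answers the question, but could provide more detail)"

# ...while "\d*" keeps only the number
print(re.search(r"Score: (\d*)", text).group(1))
# -> "90"
```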
Co-authored-by: Bagatur <baskaryan@gmail.com>
Fixed #6768.
This is a workaround only. I think a better longer-term solution is for
chains to declare how many input variables they *actually* need (as
opposed to ones that are in the prompt, where some may be satisfied by
the memory). Then, a wrapping chain can check the input match against
the actual input variables.
@hwchase17
Added a fix to avoid irrelevant attributes being returned, plus an example
of extracting unrelated entities and an example of using an 'extra_info'
attribute to extract unstructured data for an entity.
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Added an option to trim intermediate steps to last N steps. This is
especially useful for long-running agents. Users can explicitly specify
N or provide a function that does custom trimming/manipulation on
intermediate steps. I've mimicked the API of the `handle_parsing_errors`
parameter.
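A hedged sketch (the parameter name mirrors this PR's description; `agent` and `tools` stand for an already-constructed agent and its tool list):
```python
from langchain.agents import AgentExecutor

def keep_last_five(intermediate_steps):
    """Custom trimmer: keep only the five most recent steps."""
    return intermediate_steps[-5:]

executor = AgentExecutor.from_agent_and_tools(
    agent=agent,   # placeholder
    tools=tools,   # placeholder
    trim_intermediate_steps=keep_last_five,  # or simply an int, e.g. 5
)
```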
Convert the similarity score obtained in the
similarity_search_with_score_by_vector method before comparing it to the
passed threshold. This is because the passed threshold is a number between
0 and 1 and is already in the relevance_score_fn format.
As of now, the function compares two different scoring scales, and
that wouldn't work.
Dependencies
None
Issue:
Different scores being compared in
similarity_search_with_score_by_vector method in FAISS.
Tag maintainer
@hwchase17
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Adds a new document transformer that automatically extracts metadata for
a document based on an input schema. I also moved
`document_transformers.py` to `document_transformers/__init__.py` to
group it with this new transformer - it didn't seem to cause issues in
the notebook, but let me know if I've done something wrong there.
Also had a linter issue I couldn't figure out:
```
MacBook-Pro:langchain jacoblee$ make lint
poetry run mypy .
docs/dist/conf.py: error: Duplicate module named "conf" (also at "./docs/api_reference/conf.py")
docs/dist/conf.py: note: See https://mypy.readthedocs.io/en/stable/running_mypy.html#mapping-file-paths-to-modules for more info
docs/dist/conf.py: note: Common resolutions include: a) using `--exclude` to avoid checking one of them, b) adding `__init__.py` somewhere, c) using `--explicit-package-bases` or adjusting MYPYPATH
Found 1 error in 1 file (errors prevented further checking)
make: *** [lint] Error 2
```
@rlancemartin @baskaryan
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description: Add two new document transformers that translates
documents into different languages and converts documents into q&a
format to improve vector search results. Uses OpenAI function calling
via the [doctran](https://github.com/psychic-api/doctran/tree/main)
library.
- Issue: N/A
- Dependencies: `doctran = "^0.0.5"`
- Tag maintainer: @rlancemartin @eyurtsev @hwchase17
- Twitter handle: @psychicapi or @jfan001
Notes
- Adheres to the `DocumentTransformer` abstraction set by @dev2049 in
#3182
- refactored `EmbeddingsRedundantFilter` to put it in a file under a new
`document_transformers` module
- Added basic docs for `DocumentInterrogator`, `DocumentTransformer` as
well as the existing `EmbeddingsRedundantFilter`
---------
Co-authored-by: Lance Martin <lance@langchain.dev>
Co-authored-by: Bagatur <baskaryan@gmail.com>
## Summary
This PR corrects the checks for the credentials_profile_name and
region_name attributes. These were causing validation exceptions when
either of these values was missing during creation of the retriever
class.
Fixes #7571
#### Requested reviewers:
@baskaryan
Updates to the WhyLabsCallbackHandler and example notebook
- Update dependency to langkit 0.0.6 which defines new helper methods
for callback integrations
- Update WhyLabsCallbackHandler to use the new `get_callback_instance`
so that the callback is mostly defined in langkit
- Remove much of the implementation of the WhyLabsCallbackHandler here
in favor of the callback instance
This does not change the behavior of the whylabs callback handler
implementation but is a reorganization that moves some of the
implementation externally to our optional dependency package, and should
make future updates easier.
@agola11
- Description: Adds a new chain that acts as a wrapper around Sympy to
give LLMs the ability to do some symbolic math.
- Dependencies: SymPy
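For context, the kind of computation the chain delegates looks like this in plain Sympy (called directly here rather than through the chain):
```python
import sympy

x = sympy.symbols("x")
expr = sympy.diff(sympy.sin(x) * sympy.exp(x), x)
print(expr)  # exp(x)*sin(x) + exp(x)*cos(x)
```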
---------
Co-authored-by: sreiswig <sreiswig@github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
# Description
This PR adds model architecture to the `WandbTracer` from the Serialized
Run kwargs. This allows visualization of the calling parameters of an
Agent, LLM and Tool in Weights & Biases.
1. Safely serializes the run objects to the WBTraceTree model_dict
2. Refactors the run processing logic to be more organized.
- Twitter handle: @parambharat
---------
Co-authored-by: Bharat Ramanathan <ramanathan.parameshwaran@gohuddl.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Description: I wanted to be able to redirect debug output to a function,
but it wasn't very easy. I figured it would make sense to implement a
`FunctionCallbackHandler`, and reimplement `ConsoleCallbackHandler` as a
subclass that calls the `print` function. Now I can create a simple
subclass in my project that calls `logging.info` or whatever I need.
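A hedged sketch of the intended use (`chain` is a placeholder; the import path and `function` parameter follow this PR's design and may differ in the released code):
```python
import logging

from langchain.callbacks.tracers.stdout import FunctionCallbackHandler

# route debug output to the logging module instead of print
handler = FunctionCallbackHandler(function=logging.info)
chain.run("What is 2 + 2?", callbacks=[handler])  # chain is a placeholder
```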
Tag maintainer: @agola11
Twitter handle: `@andandaraalex`
Added an _endpoint_url_ attribute to the Bedrock (LLM) class. I have
access to Bedrock only via the us-west-2 endpoint and needed to change
the endpoint url; this could be useful to other users.
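Roughly (the model id and URL are illustrative, not prescriptive):
```python
from langchain.llms import Bedrock

llm = Bedrock(
    model_id="amazon.titan-tg1-large",                       # illustrative
    endpoint_url="https://bedrock.us-west-2.amazonaws.com",  # illustrative
)
```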
This change makes the cnosdb ecosystem integrations documentation more
realistic and easier to understand.
- changed examples of questions and tables
- fixed typos and formatting
When using callbacks, there are times when callbacks can be added
redundantly: for instance sometimes you might need to create an llm with
specific callbacks, but then also create an agent that uses a chain
that has those callbacks already set. This means that "callbacks" might
get passed down again to the llm at predict() time, resulting in
duplicate calls to the `on_llm_start` callback.
For the sake of simplicity, I made it so that langchain never adds an
exact handler/callbacks object in `add_handler`, thus avoiding the
duplicate handler issue.
Tagging @hwchase17 for callback review
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description: add wrapper that lets you use KoboldAI api in langchain
- Issue: n/a
- Dependencies: none extra, just what exists in langchain
- Tag maintainer: @baskaryan
- Twitter handle: @zanzibased
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- **Description**: The current implementation assumes that the lengths of
`texts` and `ids` are the same, but if the passed `ids` length is not
equal to the length of `texts`, the current code
`dict(zip(index_to_id.values(), documents))` does not fail or give
any warning and silently creates docstores only for the passed `ids`,
i.e. if `ids = ['A']` and `texts=["I love Open Source","I love
langchain"]` then only one `docstore` will be created. Either two
docstores should be created, assuming the same id value for all the elements
of `texts`, or an error should be raised.
- **Issue**: My change fixes this by using a dictionary comprehension
instead of `zip`. This way, if the lengths of `ids` and `texts` mismatch, an
explicit `IndexError` will be raised.
@rlancemartin, @eyurtsev
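A minimal illustration of the difference in plain Python:
```python
ids = ["A"]
texts = ["I love Open Source", "I love langchain"]

# zip silently truncates to the shorter sequence
print(dict(zip(ids, texts)))  # {'A': 'I love Open Source'} - no error

# a dictionary comprehension surfaces the mismatch
docstore = {ids[i]: texts[i] for i in range(len(texts))}  # raises IndexError
```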
fix #7569
add following properties for Notion DB document loader's metadata
- `unique_id`
- `status`
- `people`
@rlancemartin, @eyurtsev (Since this is a change related to
`DataLoaders`)
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Currently `ChatOutputParser` extracts actions by splitting the text on
"```", and then load the second part as a json string.
But sometimes the LLM will wrap the action in markdown code block like:
````markdown
```json
{
"action": "foo",
"action_input": "bar"
}
```
````
Splitting text on "```" will cause `OutputParserException` in such case.
This PR changes the behaviour to extract the `$JSON_BLOB` by regex, so
that it can handle both ` ``` ``` ` and ` ```json ``` `
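One illustrative pattern (not necessarily the PR's exact regex):
```python
import json
import re

llm_output = "Action:\n```json\n{\n  \"action\": \"foo\",\n  \"action_input\": \"bar\"\n}\n```"

# tolerate an optional language hint after the opening fence
match = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", llm_output, re.DOTALL)
print(json.loads(match.group(1)))  # {'action': 'foo', 'action_input': 'bar'}
```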
@hinthornw
---------
Co-authored-by: Junlin Zhou <jlzhou@zjuici.com>
**Description: a description of the change**
Fixed `make docs_build` and related scripts which caused errors. There
are several changes.
First, I made the build of the documentation and the API Reference into
two separate commands. This is because it takes less time to build. The
commands for documents are `make docs_build`, `make docs_clean`, and
`make docs_linkcheck`. The commands for API Reference are `make
api_docs_build`, `api_docs_clean`, and `api_docs_linkcheck`.
It looked like `docs/.local_build.sh` could be used to build the
documentation, so I used that. Since `.local_build.sh` was also building
the API Reference internally, I removed that process. `.local_build.sh` also
gained some Bash options to stop on errors. Furthermore, I added `cd
"${SCRIPT_DIR}"` at the beginning so that the script will work no matter
which directory it is executed in.
`docs/api_reference/api_reference.rst` is removed because it is
generated by `docs/api_reference/create_api_rst.py`, and it was added to
.gitignore.
Finally, the description of CONTRIBUTING.md was modified.
**Issue: the issue # it fixes (if applicable)**
https://github.com/hwchase17/langchain/issues/6413
**Dependencies: any dependencies required for this change**
`nbdoc` was missing in group docs so it was added. I installed it with
the `poetry add --group docs nbdoc` command. I am concerned if any
modifications are needed to poetry.lock. I would greatly appreciate it
if you could pay close attention to this file during the review.
**Tag maintainer**
- General / Misc / if you don't know who to tag: @baskaryan
If this PR needs any additional changes, I'll be happy to make them!
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Description: Refactor the upsert method in the Pinecone class to allow
for additional keyword arguments. This change adds flexibility and
extensibility to the method, allowing for future modifications or
enhancements. The upsert method now accepts the `**kwargs` parameter,
which can be used to pass any additional arguments to the Pinecone
index. This change has been made in both the `upsert` method in the
`Pinecone` class and the `upsert` method in the
`similarity_search_with_score` class method. Falls in line with the
usage of the upsert method in
[Pinecone-Python-Client](4640c4cf27/pinecone/index.py (L73))
Issue: [This feature request in Pinecone
Repo](https://github.com/pinecone-io/pinecone-python-client/issues/184)
Maintainer responsibilities:
- General / Misc / if you don't know who to tag: @baskaryan
- Memory: @hwchase17
---------
Co-authored-by: kwesi <22204443+yankskwesi@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Lance Martin <122662504+rlancemartin@users.noreply.github.com>
### Description:
This PR introduces a new option format_diff to the existing Makefile.
This option allows us to apply the formatting tools (Black and isort)
only to the changed Python and ipynb files since the last commit. This
will make our development process more efficient as we only format the
codes that we modify. Along with this change, comments were added to
make the Makefile more understandable and maintainable.
### Issue:
N/A
### Dependencies:
Add dependency to black.
### Tag maintainer:
@baskaryan
### Twitter handle:
[kzk_maeda](https://twitter.com/kzk_maeda)
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Description: I added an example of how to reference the OpenAI API
Organization ID, because I couldn't find it before. In the example, it
is mentioned how to achieve this using environment variables as well as
parameters for the OpenAI()-class
Issue: -
Dependencies: -
Twitter @schop-rob
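The two options look like this ("org-..." is a placeholder for your organization ID):
```python
import os

from langchain.llms import OpenAI

# via environment variable
os.environ["OPENAI_ORGANIZATION"] = "org-..."

# or via the constructor parameter
llm = OpenAI(openai_organization="org-...")
```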
This simply awaits `AsyncRunManager`'s method call in `MultiRouteChain`.
Noticed this while playing around with Langchain's implementation of
`MultiPromptChain`. @baskaryan
cheers
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Description: ChatOpenAI model does not return finish_reason in
generation_info.
Issue: #2702
Dependencies: None
Tag maintainer: @baskaryan
Thank you
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Currently there are 4 tools in the SQL agent toolkits, and 2 of them
reference the other 2.
This PR changes the reference from a hard-coded string to `{tool.name}`.
Co-authored-by: Junlin Zhou <jlzhou@zjuici.com>
Small fix to a link from the Marqo page in the ecosystem.
The link was not updated correctly when the documentation structure
changed to html pages instead of links to notebooks.
I found it unclear where to get the API keys for JinaChat. Mentioning
this in the docstring should be helpful.
#7490
Twitter handle: benji1a
@delgermurun
@svlandeg gave me a tip for how to improve a bit on
https://github.com/hwchase17/langchain/pull/7442 for some extra speed
and memory gains. The tagger isn't needed for sentencization, so it can be
disabled too.
This PR changes the behavior of `Qdrant.from_texts` so the collection is
reused if not requested to recreate it. Previously, calling
`Qdrant.from_texts` or `Qdrant.from_documents` resulted in removing the
old data which was confusing for many.
- Description: Added notebook to LangChain docs that explains how to use
Lemon AI NLP Workflow Automation tool with Langchain
- Issue: not applicable
- Dependencies: not applicable
- Tag maintainer: @agola11
- Twitter handle: felixbrockm
# Causal program-aided language (CPAL) chain
## Motivation
This builds on the recent [PAL](https://arxiv.org/abs/2211.10435) to
stop LLM hallucination. The problem with the
[PAL](https://arxiv.org/abs/2211.10435) approach is that it hallucinates
on a math problem with a nested chain of dependence. The innovation here
is that this new CPAL approach includes causal structure to fix
hallucination.
For example, using the below word problem, PAL answers with 5, and CPAL
answers with 13.
"Tim buys the same number of pets as Cindy and Boris."
"Cindy buys the same number of pets as Bill plus Bob."
"Boris buys the same number of pets as Ben plus Beth."
"Bill buys the same number of pets as Obama."
"Bob buys the same number of pets as Obama."
"Ben buys the same number of pets as Obama."
"Beth buys the same number of pets as Obama."
"If Obama buys one pet, how many pets total does everyone buy?"
The CPAL chain represents the causal structure of the above narrative as
a causal graph or DAG, which it can also plot, as shown below.

.
The two major sections below are:
1. Technical overview
2. Future application
Also see [this jupyter
notebook](https://github.com/borisdev/langchain/blob/master/docs/extras/modules/chains/additional/cpal.ipynb)
doc.
## 1. Technical overview
### CPAL versus PAL
Like [PAL](https://arxiv.org/abs/2211.10435), CPAL intends to reduce
large language model (LLM) hallucination.
The CPAL chain is different from the PAL chain for a couple of reasons.
* CPAL adds a causal structure (or DAG) to link entity actions (or math
expressions).
* The CPAL math expressions are modeling a chain of cause and effect
relations, which can be intervened upon, whereas for the PAL chain math
expressions are projected math identities.
PAL's generated python code is wrong. It hallucinates when complexity
increases.
```python
def solution():
    """Tim buys the same number of pets as Cindy and Boris.Cindy buys the same number of pets as Bill plus Bob.Boris buys the same number of pets as Ben plus Beth.Bill buys the same number of pets as Obama.Bob buys the same number of pets as Obama.Ben buys the same number of pets as Obama.Beth buys the same number of pets as Obama.If Obama buys one pet, how many pets total does everyone buy?"""
    obama_pets = 1
    tim_pets = obama_pets
    cindy_pets = obama_pets + obama_pets
    boris_pets = obama_pets + obama_pets
    total_pets = tim_pets + cindy_pets + boris_pets
    result = total_pets
    return result  # math result is 5
```
CPAL's generated python code is correct.
```
story outcome data
name code value depends_on
0 obama pass 1.0 []
1 bill bill.value = obama.value 1.0 [obama]
2 bob bob.value = obama.value 1.0 [obama]
3 ben ben.value = obama.value 1.0 [obama]
4 beth beth.value = obama.value 1.0 [obama]
5 cindy cindy.value = bill.value + bob.value 2.0 [bill, bob]
6 boris boris.value = ben.value + beth.value 2.0 [ben, beth]
7 tim tim.value = cindy.value + boris.value 4.0 [cindy, boris]
query data
{
"question": "how many pets total does everyone buy?",
"expression": "SELECT SUM(value) FROM df",
"llm_error_msg": ""
}
# query result is 13
```
Based on the comments below, CPAL's intended location in the library is
`experimental/chains/cpal` and PAL's location is `chains/pal`.
### CPAL vs Graph QA
Both the CPAL chain and the Graph QA chain extract entity-action-entity
relations into a DAG.
The CPAL chain is different from the Graph QA chain for a few reasons.
* Graph QA does not connect entities to math expressions
* Graph QA does not associate actions in a sequence of dependence.
* Graph QA does not decompose the narrative into these three parts:
1. Story plot or causal model
2. Hypothetical question
3. Hypothetical condition
### Evaluation
Preliminary evaluation on simple math word problems shows that this CPAL
chain generates less hallucination than the PAL chain on answering
questions about a causal narrative. Two examples are in [this jupyter
notebook](https://github.com/borisdev/langchain/blob/master/docs/extras/modules/chains/additional/cpal.ipynb)
doc.
## 2. Future application
### "Describe as Narrative, Test as Code"
The thesis here is that the Describe as Narrative, Test as Code approach
allows you to represent a causal mental model both as code and as a
narrative, giving you the best of both worlds.
#### Why describe a causal mental model as a narrative?
The narrative form is quick. At a consensus building meeting, people use
narratives to persuade others of their causal mental model, aka. plan.
You can share, version control and index a narrative.
#### Why test a causal mental model as code?
Code is testable, complex narratives are not. Though fast, narratives
are problematic as their complexity increases. The problem is LLMs and
humans are prone to hallucination when predicting the outcomes of a
narrative. The cost of building a consensus around the validity of a
narrative outcome grows as its narrative complexity increases. Code does
not require tribal knowledge or social power to validate.
Code is composable, complex narratives are not. The answer of one CPAL
chain can be the hypothetical conditions of another CPAL Chain. For
stochastic simulations, a composable plan can be integrated with the
[DoWhy library](https://github.com/py-why/dowhy). Lastly, for the
futuristic folk, a composable plan as code allows ordinary community
folk to design a plan that can be integrated with a blockchain for
funding.
An explanation of a dependency planning application is
[here.](https://github.com/borisdev/cpal-llm-chain-demo)
---
Twitter handle: @boris_dev
---------
Co-authored-by: Boris Dev <borisdev@Boriss-MacBook-Air.local>
This PR proposes an implementation to support `generate` as an
`early_stopping_method` for the new `OpenAIFunctionsAgent` class.
The motivation behind this is to let the user set a maximum number
of actions the agent can take with `max_iterations` and force a final
response with this new agent (as with the `Agent` class).
The following changes were made:
- The `OpenAIFunctionsAgent.return_stopped_response` method was
overridden to support `generate` as an `early_stopping_method`
- A boolean `with_functions` parameter was added to the
`OpenAIFunctionsAgent.plan` method
This way the `OpenAIFunctionsAgent.return_stopped_response` method can
call the `OpenAIFunctionsAgent.plan` method with `with_functions=False`
when the `early_stopping_method` is set to `generate`, making a call to
the LLM with no functions and forcing a final response from the
`"assistant"`.
- Relevant maintainer: @hinthornw
- Twitter handle: @aledelunap
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Description: Current `_call` function in the
`langchain.llms.HuggingFaceEndpoint` class truncates response when
`task=text-generation`. Same error discussed a few days ago on Hugging
Face: https://huggingface.co/tiiuae/falcon-40b-instruct/discussions/51
Issue: Fixes #7353
Tag maintainer: @hwchase17 @baskaryan @hinthornw
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Description: This pull request aims to support generating the correct
generic relevancy scores for different vector stores by refactoring the
relevance score functions and their selection in the base class and
subclasses of VectorStore. This is especially relevant with VectorStores
that require a distance metric upon initialization. Note that many of the
current implementations of `_similarity_search_with_relevance_scores` are
not technically correct, as they just return
`self.similarity_search_with_score(query, k, **kwargs)` without applying
the relevant score function.
Also includes changes associated with:
https://github.com/hwchase17/langchain/pull/6564 and
https://github.com/hwchase17/langchain/pull/6494
See the more in-depth discussion in the thread in #6494
Issue:
https://github.com/hwchase17/langchain/issues/6526
https://github.com/hwchase17/langchain/issues/6481
https://github.com/hwchase17/langchain/issues/6346
Dependencies: None
The changes include:
- Properly handling score thresholding in FAISS
`similarity_search_with_score_by_vector` for the corresponding distance
metric.
- Refactoring the `_similarity_search_with_relevance_scores` method in
the base class and removing it from the subclasses for incorrectly
implemented subclasses.
- Adding a `_select_relevance_score_fn` method in the base class and
implementing it in the subclasses to select the appropriate relevance
score function based on the distance strategy (sketched after this list).
- Updating the `__init__` methods of the subclasses to set the
`relevance_score_fn` attribute.
- Removing the `_default_relevance_score_fn` function from the FAISS
class and using the base class's `_euclidean_relevance_score_fn`
instead.
- Adding the `DistanceStrategy` enum to the `utils.py` file and updating
the imports in the vector store classes.
- Updating the tests to import the `DistanceStrategy` enum from the
`utils.py` file.
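A minimal sketch of the dispatch described above (it assumes the `DistanceStrategy` enum and the base-class score functions exist as named; the real method may differ):
```python
from typing import Callable

def _select_relevance_score_fn(self) -> Callable[[float], float]:
    # pick the normalization matching the store's distance metric
    if self.distance_strategy == DistanceStrategy.COSINE:
        return self._cosine_relevance_score_fn
    elif self.distance_strategy == DistanceStrategy.EUCLIDEAN_DISTANCE:
        return self._euclidean_relevance_score_fn
    elif self.distance_strategy == DistanceStrategy.MAX_INNER_PRODUCT:
        return self._max_inner_product_relevance_score_fn
    raise ValueError(f"Unsupported distance strategy: {self.distance_strategy}")
```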
---------
Co-authored-by: Hanit <37485638+hanit-com@users.noreply.github.com>
Improve documentation for a central use-case, qa / chat over documents.
This will be merged as an update to `index.mdx`
[here](https://python.langchain.com/docs/use_cases/question_answering/).
Testing w/ local Docusaurus server:
```
# From the `docs` directory:
mkdir _dist
cp -r {docs_skeleton,snippets} _dist
cp -r extras/* _dist/docs_skeleton/docs
cd _dist/docs_skeleton
yarn install
yarn start
```
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
1. Added use cases of the new features
2. Done some code refactoring
---------
Co-authored-by: Ivo Stranic <istranic@gmail.com>
### Description
Created a Loader to get a list of specific logs from Datadog Logs.
### Dependencies
`datadog_api_client` is required.
### Twitter handle
[kzk_maeda](https://twitter.com/kzk_maeda)
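A hypothetical usage sketch (the loader name and parameters are assumptions based on this description):
```python
from langchain.document_loaders import DatadogLogsLoader  # name is an assumption

loader = DatadogLogsLoader(
    query="service:agent status:error",  # placeholder query
    api_key="DD_API_KEY",                # placeholder
    app_key="DD_APP_KEY",                # placeholder
)
documents = loader.load()
```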
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- [Xorbits](https://doc.xorbits.io/en/latest/) is an open-source
computing framework that makes it easy to scale data science and machine
learning workloads in parallel. Xorbits can leverage multi cores or GPUs
to accelerate computation on a single machine, or scale out up to
thousands of machines to support processing terabytes of data.
- This PR added support for the Xorbits document loader, which allows
langchain to leverage Xorbits to parallelize and distribute the loading
of data.
- Dependencies: This change requires the Xorbits library to be installed
in order to be used.
`pip install xorbits`
- Request for review: @rlancemartin, @eyurtsev
- Twitter handle: https://twitter.com/Xorbitsio
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description: Adding an async method for CTransformers
- Issue: I've found it impossible without this code to run WebSockets
inside a FastAPI microservice together with a CTransformers model.
- Tag maintainer: Not necessary yet, I don't like to mention directly
- Twitter handle: @_semoal
Adding a maximal_marginal_relevance method to the
MongoDBAtlasVectorSearch vectorstore enhances the user experience by
providing more diverse search results
Issue: #7304
### Summary
Adds an `UnstructuredTSVLoader` for TSV files. Also updates the doc
strings for `UnstructuredCSV` and `UnstructuredExcel` loaders.
### Testing
```python
from langchain.document_loaders.tsv import UnstructuredTSVLoader
loader = UnstructuredTSVLoader(
file_path="example_data/mlb_teams_2012.csv", mode="elements"
)
docs = loader.load()
```
### Description
The argument variable client is marked as required in commit
81e5b1ad36, which breaks the default way of
initialization providing only index_id. This commit avoids a KeyError
exception when it is initialized without a client variable
### Dependencies
no dependency required
`SpacyTextSplitter` currently uses spacy's statistics-based
`en_core_web_sm` model for sentence splitting. This is a good splitter,
but it's also pretty slow, and in this case it's doing a lot of work
that's not needed given that the spacy parse is then just thrown away.
However, there is also a simple rules-based spacy sentencizer. Using
this is at least an order of magnitude faster than using
`en_core_web_sm` according to my local tests.
Also, spacy sentence tokenization based on `en_core_web_sm` can be sped
up in this case by not doing the NER stage. This shaves some cycles too,
both when loading the model and when parsing the text.
Consequently, this PR adds the option to use the basic spacy
sentencizer, and it disables the NER stage for the current approach,
*which is kept as the default*.
Lastly, when extracting the tokenized sentences, the `text` attribute is
called directly instead of doing the string conversion, which is IMO a
bit more idiomatic.
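Roughly, the two configurations discussed above (this calls spacy directly, not through `SpacyTextSplitter`):
```python
import spacy

# option 1: the fast rules-based sentencizer, no statistical model needed
nlp = spacy.blank("en")
nlp.add_pipe("sentencizer")

# option 2 (the kept default): en_core_web_sm with the NER stage disabled
# nlp = spacy.load("en_core_web_sm", exclude=["ner"])

doc = nlp("This is one sentence. Here is another one.")
print([sent.text for sent in doc.sents])
```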
Hey @hwchase17 -
This PR adds a `ZepMemory` class, improves handling of Zep's message
metadata, and makes it easier for folks building custom chains to
persist metadata alongside their chat history.
We've had plenty of confused users unfamiliar with ChatMessageHistory
classes and how to wrap the `ZepChatMessageHistory` in a
`ConversationBufferMemory`. So we've created the `ZepMemory` class as a
light wrapper for `ZepChatMessageHistory`.
Details:
- add ZepMemory, modify notebook to demo use of ZepMemory
- Modify summary to be SystemMessage
- add metadata argument to add_message; add Zep metadata to
Message.additional_kwargs
- support passing in metadata
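A hedged construction sketch (the constructor arguments follow this description; exact signatures may differ):
```python
from langchain.memory import ZepMemory  # light wrapper over ZepChatMessageHistory

memory = ZepMemory(
    session_id="user-123",        # placeholder
    url="http://localhost:8000",  # placeholder Zep server
    memory_key="chat_history",
)
```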
- Description: Tiny documentation fix. In Python, when defining function
parameters or providing arguments to a function or class constructor, we
do not use the `:` character.
- Issue: N/A
- Dependencies: N/A,
- Tag maintainer: @rlancemartin, @eyurtsev
- Twitter handle: @mogaal
I just added a parameter to the get_format_instructions method to
return the JSON instructions directly, without the leading instruction
sentence. I'm planning to use it to define the structure of a JSON
object passed as input, via get_format_instructions().
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Have noticed transient ref example misalignment. I believe this is
caused by the logic of assigning an example within the thread executor
rather than before.
Current problems:
1. Evaluating LLMs or Chat models isn't smooth. Even specifying
'generations' as the output inserts a redundant list into the eval
template
2. Configuring input / prediction / reference keys in the
`get_qa_evaluator` function is confusing. Unless you are using a chain
with the default keys, you have to specify all the variables and need to
reason about whether the key corresponds to the traced run's inputs,
outputs or the examples inputs or outputs.
Proposal:
- Configure the run evaluator according to a model. Use the model type
and input/output keys to assert compatibility where possible. Only need
to specify a reference_key for certain evaluators (which is less
confusing than specifying input keys)
When does this work:
- If you have your langchain model available (assumed always for
run_on_dataset flow)
- If you are evaluating an LLM, Chat model, or chain
- If the LLM or chat models are traced by langchain (wouldn't work if
you add an incompatible schema via the REST API)
When would this fail:
- Currently if you directly create an example from an LLM run, the
outputs are generations with all the extra metadata present. A simple
`example_key` and dumping all to the template could make the evaluations
unreliable
- Doesn't help if you're not using the low level API
- If you want to instantiate the evaluator without instantiating your
chain or LLM (maybe common for monitoring, for instance) -> could also
load from run or run type though
What's ugly:
- Personally think it's better to load evaluators one by one since
passing a config down is pretty confusing.
- Lots of testing needs to be added
- Inconsistent in that it makes a separate run and example input mapper
instead of the original `RunEvaluatorInputMapper`, which maps a run and
example to a single input.
Example usage running the evaluators for an LLM, Chat Model, and Agent.
```
# Test running for the string evaluators
evaluator_names = ["qa", "criteria"]
model = ChatOpenAI()
configured_evaluators = load_run_evaluators_for_model(evaluator_names, model=model, reference_key="answer")
run_on_dataset(ds_name, model, run_evaluators=configured_evaluators)
```
<details>
<summary>Full code with dataset upload</summary>
```
## Create dataset
from langchain.evaluation.run_evaluators.loading import load_run_evaluators_for_model
from langchain.evaluation import load_dataset
import pandas as pd

lcds = load_dataset("llm-math")
df = pd.DataFrame(lcds)

from uuid import uuid4
from langsmith import Client

client = Client()
ds_name = "llm-math - " + str(uuid4())[0:8]
ds = client.upload_dataframe(df, name=ds_name, input_keys=["question"], output_keys=["answer"])

## Define the models we'll test over
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.agents import initialize_agent, AgentType
from langchain.tools import tool

llm = OpenAI(temperature=0)
chat_model = ChatOpenAI(temperature=0)

@tool
def sum(a: float, b: float) -> float:
    """Add two numbers"""
    return a + b

def construct_agent():
    return initialize_agent(
        llm=chat_model,
        tools=[sum],
        agent=AgentType.OPENAI_MULTI_FUNCTIONS,
    )

agent = construct_agent()

# Test running for the string evaluators
evaluator_names = ["qa", "criteria"]
models = [llm, chat_model, agent]
run_evaluators = []
for model in models:
    run_evaluators.append(load_run_evaluators_for_model(evaluator_names, model=model, reference_key="answer"))

# Run on LLM, Chat Model, and Agent
from langchain.client.runner_utils import run_on_dataset

to_test = [llm, chat_model, construct_agent]
for model, configured_evaluators in zip(to_test, run_evaluators):
    run_on_dataset(ds_name, model, run_evaluators=configured_evaluators, verbose=True)
```
</details>
---------
Co-authored-by: Nuno Campos <nuno@boringbits.io>
fixes https://github.com/hwchase17/langchain/issues/7289
A simple fix of the buggy output of `graph_qa`. If we have several
entities with triplets, the last entry of `triplets` for a given
entity merges with the first entry of the `triplets` of the next entity.
### Description
Adding a callback handler for Context. Context is a product analytics
platform for AI chat experiences to help you understand how users are
interacting with your product.
I've added the callback library + an example notebook showing its use.
### Dependencies
Requires the user to install the `context-python` library. The library
is lazily-loaded when the callback is instantiated.
### Announcing the feature
We spoke with Harrison a few weeks ago about also doing a blog post
announcing our integration, so will coordinate this with him. Our
Twitter handle for the company is @getcontextai, and the founders are
@_agamble and @HenrySG.
Thanks in advance!
**Title:** Add verbose parameter for llamacpp
**Description:**
This pull request adds a 'verbose' parameter to the llamacpp module. The
'verbose' parameter, when set to True, will enable the output of
detailed logs during the execution of the Llama model. This added
parameter can aid in debugging and understanding the internal processes
of the module.
The verbose parameter is a boolean that prints verbose output to stderr
when set to True. By default, the verbose parameter is set to True but
can be toggled off if less output is desired. This new parameter has
been added to the `validate_environment` method of the `LlamaCpp` class
which initializes the `llama_cpp.Llama` API:
```python
class LlamaCpp(LLM):
    ...
    @root_validator()
    def validate_environment(cls, values: Dict) -> Dict:
        ...
        model_param_names = [
            ...
            "verbose",  # New verbose parameter added
        ]
        ...
        values["client"] = Llama(model_path, **model_params)
        ...
```
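Usage then looks roughly like this (the model path is a placeholder):
```python
from langchain.llms import LlamaCpp

llm = LlamaCpp(model_path="path/to/model.bin", verbose=False)
```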
---------
Signed-off-by: teleprint-me <77757836+teleprint-me@users.noreply.github.com>
At the moment, pinecone vectorStore does not support filters and
namespaces when using similarity_score_threshold search type.
In this PR, I've implemented that. It passes all the kwargs except
"score_threshold" as that is not a supported argument for method
"similarity_search_with_score".
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
## Changes
- [X] Fill the `llm_output` param when there is an output parsing error
in a Pydantic schema so that we can get the original text that failed to
parse when handling the exception
## Background
With this change, we could do something like this:
```
output_parser = PydanticOutputParser(pydantic_object=pydantic_obj)
chain = ConversationChain(..., output_parser=output_parser)
try:
    response: PydanticSchema = chain.predict(input=input)
except OutputParserException as exc:
    logger.error(
        'OutputParserException while parsing chatbot response: %s', exc.llm_output,
    )
```
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
hi @rlancemartin ,
We had a new deployment and the `pg_extension` creation command was
updated from `CREATE EXTENSION pg_embedding` to `CREATE EXTENSION
embedding`.
https://github.com/neondatabase/neon/pull/4646
The extension has not been made public yet, so no users will be
affected by this. It will be public next week.
Please let me know if you have any questions.
Thank you in advance 🙏
Continuing with Tolkien inspired series of langchain tools. I bring to
you:
**The Fellowship of the Vectors**, AKA EmbeddingsClusteringFilter.
This document filter uses embeddings to group vectors together into
clusters, then allows you to pick an arbitrary number of document
vectors based on proximity to the cluster centers. That gives you a
representative sample of each cluster.
The original idea is from [Greg Kamradt](https://github.com/gkamradt)
from this video (Level4):
https://www.youtube.com/watch?v=qaPMdcCqtWk&t=365s
I added a few tricks to make it a bit more versatile, so you can
parametrize what to do with duplicate documents in case of cluster
overlap: replace the duplicates with the next closest document or remove
them. This allows you to use it as a special kind of redundant filter too.
Additionally, you can choose two different orderings: grouped by cluster or
respecting the original retriever scores.
In my use case I was using the docs grouped by cluster to run refine
chains per cluster to generate summarization over a large corpus of
documents.
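A hedged construction sketch (the parameter names are assumptions based on this description):
```python
from langchain.document_transformers import EmbeddingsClusteringFilter
from langchain.embeddings import OpenAIEmbeddings

fellowship_filter = EmbeddingsClusteringFilter(
    embeddings=OpenAIEmbeddings(),
    num_clusters=5,  # how many clusters to form
    num_closest=2,   # documents to keep per cluster center
)
```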
Let me know if you want to change anything!
@rlancemartin, @eyurtsev, @hwchase17,
---------
Co-authored-by: rlm <pexpresss31@gmail.com>
Change details:
- Description: When calling db.persist(), a check prevents it from
proceeding, as the constructor only sets the member `_persist_directory`
from its parameters. But the ChromaDB client settings also have this
parameter, and if the client_settings parameter is used without passing
the persist_directory (which is optional), the `persist` method raises
`ValueError` for `_persist_directory` not being set. This change fixes it
by setting the member `_persist_directory` variable from client_settings
if it is set, else it uses the constructor parameter.
- Issue: I didn't find any github issue of this, but I discovered it
after calling the persist method
- Dependencies: None
- Tag maintainer: vectorstore related change - @rlancemartin, @eyurtsev
- Twitter handle: Don't have one :(
*Additional discussion*: We may need to discuss the way I implemented
the fallback using `or`.
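The fallback in question, roughly (a fragment of the constructor, not the exact diff):
```python
# prefer the client settings' directory, else the constructor parameter
self._persist_directory = (
    client_settings.persist_directory if client_settings else None
) or persist_directory
```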
---------
Co-authored-by: rlm <pexpresss31@gmail.com>
This PR improves the example notebook for the Marqo vectorstore
implementation by adding a new RetrievalQAWithSourcesChain example. The
`embedding` parameter in `from_documents` has its type updated to
`Union[Embeddings, None]` and a default parameter of None because this
is ignored in Marqo.
This PR also upgrades the Marqo version to 0.11.0 to remove the device
parameter after a breaking change to the API.
Related to #7068 @tomhamer @hwchase17
---------
Co-authored-by: Tom Hamer <tom@marqo.ai>
Description: A pack of small fixes and refactorings that don't affect
functionality, just making the code prettier and fixing some misspellings
(hand-filtered improvements proposed by SeniorAi.online, a prototype of a
code-improving tool based on gpt4); the agents and callbacks folders were
covered.
Dependencies: Nothing changed
Twitter: https://twitter.com/nayjest
Co-authored-by: Bagatur <baskaryan@gmail.com>
This PR improves upon the Clarifai LangChain integration with improved docs, errors, and args; adds embedding model support in LangChain for Clarifai's embedding models; and adds an overview of the various ways you can integrate with Clarifai to the docs.
---------
Co-authored-by: Matthew Zeiler <zeiler@clarifai.com>
Description: Added number_of_head_rows as a parameter to pandas agent.
number_of_head_rows allows the user to select the number of rows to pass
with the prompt when include_df_in_prompt is True. This gives the
ability to control the token length and can be helpful in dealing with
large dataframes.
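Illustrative usage (the dataframe is a stand-in):
```python
import pandas as pd

from langchain.agents import create_pandas_dataframe_agent
from langchain.llms import OpenAI

df = pd.DataFrame({"a": range(100), "b": range(100)})
agent = create_pandas_dataframe_agent(
    OpenAI(temperature=0),
    df,
    include_df_in_prompt=True,
    number_of_head_rows=3,  # only three rows go into the prompt
)
```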
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description: Solves the anthropic packaging version issue by clearing
up the mix-up between `packaging.version` and
`importlib.metadata.version`.
- Issue: it fixes issue #7283
- Maintainer: @hwchase17
The following change has been explained in the comment -
https://github.com/hwchase17/langchain/issues/7283#issuecomment-1624328978
- Description: pydantic's `ModelField.type_` only exposes the native
data type but not complex type hints like `List`. Thus, generating a
Tool with `from_function` through function signature produces incorrect
argument schemas (e.g., `str` instead of `List[str]`)
- Issue: N/A
- Dependencies: N/A
- Tag maintainer: @hinthornw
- Twitter handle: `mapped`
All the unit tests (with an additional one in this PR) passed, though I
didn't try integration tests...
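A minimal illustration of the pydantic v1 behaviour described above (the model name is hypothetical):
```python
from typing import List

from pydantic import BaseModel

class SearchArgs(BaseModel):
    queries: List[str]

field = SearchArgs.__fields__["queries"]
print(field.type_)        # <class 'str'> - only the element type
print(field.outer_type_)  # typing.List[str] - the full hint
```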
- Description: Sometimes there are csv attachments with the media type
"application/vnd.ms-excel". These files failed to be loaded via the xlrd
library. It throws a corrupted file error. I fixed it by separately
processing excel files using pandas. Excel files will be processed just
like before.
- Dependencies: pandas, os, io
---------
Co-authored-by: Chathura <chathurar@yaalalabs.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
In some cases, the OpenAI response is missing the `finish_reason`
attribute. It seems to happen when using Ada or Babbage and
`stream=true`, but I can't always reproduce it. This change just
gracefully handles the missing key.
The introduction of the newest function calling feature doesn't work
properly with the PromptLayerChatOpenAI model, since in the `_generate`
method the functions argument is not even passed to the `ChatOpenAI`
base class, which results in an empty `ai_message.additional_kwargs`.
Fixes #6365
updated `tutorials.mdx`:
- added a link to new `Deeplearning AI` course on LangChain
- added links to other tutorial videos
- fixed format
@baskaryan, @hwchase17
#### Description
refactor BedrockEmbeddings class to clean code as below:
1. inline content type and accept
2. rewrite input_body as a dictionary literal
3. no need to declare embeddings variable, so remove it
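Rough shape of the refactored call (this assumes the boto3 bedrock-runtime `invoke_model` API; `client`, `model_id` and `text` are placeholders):
```python
import json

input_body = {"inputText": text}  # a dictionary literal
response = client.invoke_model(
    body=json.dumps(input_body),
    modelId=model_id,
    accept="application/json",        # inlined
    contentType="application/json",   # inlined
)
```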
- Description: Adding to Chroma integration the option to run a
similarity search by a vector with relevance scores. Fixing two minor
typos.
- Issue: The "lambda_mult" typo is related to #4861
- Maintainer: @rlancemartin, @eyurtsev
Based on user feedback, we have improved the Alibaba Cloud OpenSearch
vector store documentation.
Co-authored-by: zhaoshengbo <shengbo.zsb@alibaba-inc.com>
Several updates for the PowerBI tools:
- Handle 0 records returned by requesting redo with different filtering
- Handle too large results by optionally tokenizing the result and
comparing against a max (change in signature, non-breaking)
- Implemented LLMChain with Chat for chat models for the tools.
- Updates to the main prompt including tables
- Update to Tool prompt with TOPN function
- Split the tool prompt to allow the LLMChain with ChatPromptTemplate
Smaller fixes for stability.
For visibility: @hinthornw
- Description: added documentation for a template repo that helps with
dockerizing and deploying a LangChain application using a Cloud Build
CI/CD pipeline to serverless Google Cloud
- Issue: None,
- Dependencies: None,
- Tag maintainer: @baskaryan,
- Twitter handle: EdenEmarco177
- Description:
- When `keep_separator` is `True` the `_split_text_with_regex()` method
in `text_splitter` uses regex to split, but when `keep_separator` is
`False` it uses `str.split()`. This causes problems when the separator
is a special regex character like `.` or `*`. This PR fixes that by
using `re.split()` in both cases (illustrated below).
- Issue: #7262
- Tag maintainer: @baskaryan
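A quick illustration of why the separator must be escaped for the regex path:
```python
import re

text = "foo.bar.baz"

print(text.split("."))                 # ['foo', 'bar', 'baz']
print(re.split(".", text))             # unescaped "." matches every character
print(re.split(re.escape("."), text))  # ['foo', 'bar', 'baz']
```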
**Description**
In the following page, "Wikipedia" tool is explained.
https://python.langchain.com/docs/modules/agents/tools/integrations/wikipedia
However, the WikipediaAPIWrapper being used is not a tool. This PR
updated the documentation to use a tool WikipediaQueryRun.
**Issue**
None
**Tag maintainer**
Agents / Tools / Toolkits: @hinthornw
- Description: This is a chat model equivalent of HumanInputLLM. An
example notebook is also added.
- Tag maintainer: @hwchase17, @baskaryan
- Twitter handle: N/A
- Description: I have added a `show_progress_bar` parameter (defaults to
`False`) to the `OpenAIEmbeddings`. If the user sets `show_progress_bar`
to `True`, a progress bar will be displayed.
- Issue: #7246
- Dependencies: N/A
- Tag maintainer: @hwchase17, @baskaryan
- Twitter handle: N/A
Description: `flan-t5-xl` hangs, updated to `flan-t5-xxl`. Tested all
stabilityai LLMs- all hang so removed from tutorial. Temperature > 0 to
prevent unintended determinism.
Issue: #3275
Tag maintainer: @baskaryan
Fix for bug in SitemapLoader
`aiohttp` `get` does not accept a `verify` argument and currently throws
an error, so SitemapLoader is not working.
This PR fixes it by removing the `verify` param from the `get` function call.
Fixes #6107
#### Who can review?
Tag maintainers/contributors who might be interested:
@eyurtsev
---------
Co-authored-by: techcenary <127699216+techcenary@users.noreply.github.com>
### Description
This pull request introduces the "Cube Semantic Layer" document loader,
which demonstrates the retrieval of Cube's data model metadata in a
format suitable for passing to LLMs as embeddings. This enhancement aims
to provide contextual information and improve the understanding of data.
Twitter handle:
@the_cube_dev
---------
Co-authored-by: rlm <pexpresss31@gmail.com>
This PR brings in a vectorstore interface for
[Marqo](https://www.marqo.ai/).
The Marqo vectorstore exposes some of Marqo's functionality in addition
to the VectorStore base class. The Marqo vectorstore also makes the
embedding parameter optional because inference for embeddings is an
inherent part of Marqo.
Docs, notebook examples and integration tests included.
Related PR:
https://github.com/hwchase17/langchain/pull/2807
---------
Co-authored-by: Tom Hamer <tom@marqo.ai>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
### Summary
Updates the docstrings for the unstructured base loaders so more useful
information appears on the integrations page. If these look good, will
add similar docstrings to the other loaders.
### Reviewers
- @rlancemartin
- @eyurtsev
- @hwchase17
- Description: Allow `InMemoryDocstore` to be created without passing a
dict to the constructor; the constructor can create a dict at runtime if
one isn't provided.
- Tag maintainer: @dev2049
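A minimal sketch of the new behavior:
```python
from langchain.docstore import InMemoryDocstore
from langchain.docstore.document import Document

store = InMemoryDocstore()  # previously a dict had to be passed explicitly
store.add({"1": Document(page_content="hello")})
```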
- Description: At the moment, inserting new embeddings to pgvector is
querying all embeddings every time as the defined `embeddings`
relationship is using the default params, which sets `lazy="select"`.
This change drastically improves the performance and adds a few
additional cleanups:
* remove `collection.embeddings.append` as it was querying all
embeddings on insert, replace with `collection_id` param
* centralize storing logic in add_embeddings function to reduce
duplication
* remove boilerplate
- Issue: No issue was opened.
- Dependencies: None.
- Tag maintainer: this is a vectorstore update, so I think
@rlancemartin, @eyurtsev
- Twitter handle: @falmannaa
Hi there,
This pull request contains two commits:
**1. Implement delete interface with optional ids parameter on
AnalyticDB.**
**2. Allow customization of database connection behavior by exposing
engine_args parameter in interfaces.**
- This commit adds the `engine_args` parameter to the interfaces,
allowing users to customize the behavior of the database connection. The
`engine_args` parameter accepts a dictionary of additional arguments
that will be passed to the create_engine function. Users can now modify
various aspects of the database connection, such as connection pool size
and recycle time. This enhancement provides more flexibility and control
to users when interacting with the database through the exposed
interfaces.
This commit is related to VectorStores @rlancemartin @eyurtsev
Thank you for your attention and consideration.
- Description: This allows parameters such as `relevance_score_fn` to be
passed to the `FAISS` constructor via the `load_local()` class method.
- Tag maintainer: @rlancemartin @eyurtsev
This fixes #4833 and the critical vulnerability
https://nvd.nist.gov/vuln/detail/CVE-2023-34540
Previously, the JIRA API Wrapper had a mode that simply pipelined user
input into an `exec()` function.
[The intended use of the 'other' mode is to cover any of Atlassian's API
that don't have an existing
interface](cc33bde74f/langchain/tools/jira/prompt.py (L24))
Fortunately all of the [Atlassian JIRA API methods are subfunctions of
their `Jira`
class](https://atlassian-python-api.readthedocs.io/jira.html), so this
implementation calls these subfunctions directly.
As well as passing a string representation of the function to call, the
implementation flexibly allows for optionally passing args and/or
keyword-args. These are given as part of the dictionary input. Example:
```
{
"function": "update_issue_field", #function to execute
"args": [ #list of ordered args similar to other examples in this JiraAPIWrapper
"key",
{"summary": "New summary"}
],
"kwargs": {} #dict of key value keyword-args pairs
}
```
the above is equivalent to `self.jira.update_issue_field("key",
{"summary": "New summary"})`
Alternate query schema designs are welcome to make querying easier
without passing and evaluating arbitrary Python code. I considered
parsing (without evaluating) input Python code, extracting the
function, args, and kwargs from there, and then pipelining them into the
callable function via `f(*args, **kwargs)` - but this seemed more
direct.
@vowelparrot @dev2049
---------
Co-authored-by: Jamal Rahman <jamal.rahman@builder.ai>
Added tutorials.mdx; updated youtube.mdx.
Rationale: the Tutorials section in the documentation is top-priority
(see, for example, https://pytorch.org/docs/stable/index.html). Not every
project has the resources to make tutorials. We have such a privilege:
community experts have created several tutorials on YouTube. But the
tutorial links are now hidden on the YouTube page and not easily
discovered by first-time visitors.
- Added new videos and tutorials that were created since the last
update.
- Reprioritized some videos on the basis of view counts.
#### Who can review?
- @hwchase17
- @dev2049
## Description
Added Office365 tool modules to `__init__.py` files
## Issue
As described in Issue
https://github.com/hwchase17/langchain/issues/6936, the Office365
toolkit can't be loaded easily because it is not included in the
`__init__.py` files.
## Reviewer
@dev2049
Description:
The OpenAI "embeddings" API intermittently falls into a failure state
where an embedding is returned as [ Nan ], rather than the expected 1536
floats. This patch checks for that state (specifically, for an embedding
of length 1) and if it occurs, throws an ApiError, which will cause the
chunk to be retried.
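A minimal sketch of the check described above (the exception type here is a stand-in; the patch raises an ApiError so the retry logic re-requests the chunk):
```python
def check_embedding(embedding: list) -> list:
    # intermittent OpenAI failure state: [NaN] instead of e.g. 1536 floats
    if len(embedding) == 1:
        raise ValueError("OpenAI returned an embedding of length 1; retry the chunk")
    return embedding
```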
Issue:
I have been unable to find an official langchain issue for this problem,
but it is discussed (by another user) at
https://stackoverflow.com/questions/76469415/getting-embeddings-of-length-1-from-langchain-openaiembeddings
Maintainer: @dev2049
Testing:
Since this is an intermittent OpenAI issue, I have not provided a unit
or integration test. The provided code has, though, been run
successfully over several million tokens.
---------
Co-authored-by: William Webber <william@williamwebber.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
- [x] wire up tools
- [x] wire up retrievers
- [x] add integration test
Description: Fix Steamship import error.
When running multi_modal_output_agent, the following error occurs: field
"steamship" not yet prepared, so its type is still a ForwardRef; you
might need to call `SteamshipImageGenerationTool.update_forward_refs()`.
Tag maintainer: @hinthornw
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
- Description: If there are missing or extra variables when validating a
Jinja2 template, a warning is issued rather than raising an exception.
This allows for better flexibility for the developer, as described in
#7044. Also changed the relevant test so pytest checks for raised
warnings rather than exceptions (see the sketch below).
- Issue: #7044
- Tag maintainer: @hwchase17, @baskaryan
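A sketch of the warn-instead-of-raise idea, using jinja2's standard variable introspection (not the exact patch):
```python
import warnings
from jinja2 import Environment, meta

def validate_jinja2(template: str, input_variables: list) -> None:
    env = Environment()
    template_vars = meta.find_undeclared_variables(env.parse(template))
    unused = set(input_variables) - template_vars      # provided but not in template
    undeclared = template_vars - set(input_variables)  # in template but not provided
    if unused or undeclared:
        warnings.warn(f"Unused input variables: {unused}; "
                      f"missing input variables: {undeclared}")
```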
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
This PR makes the `textstat` library optional in the Flyte callback
handler.
@hinthornw, would you mind reviewing this PR since you merged the flyte
callback handler code previously?
---------
Signed-off-by: Samhita Alla <aallasamhita@gmail.com>
- Description: added some documentation to the Pinecone vector store
docs page.
- Issue: #7126
- Dependencies: None
- Tag maintainer: @baskaryan
I can add more documentation on the Pinecone integration functions, as I
am going to go into great depth in this area. Just wanted to check with
the maintainers that this is all good.
- Description: Replace `if var is not None:` with `if var:`, a concise
and pythonic alternative
- Issue: N/A
- Dependencies: None
- Tag maintainer: Unsure
- Twitter handle: N/A
Signed-off-by: serhatgktp <efkan@ibm.com>
- Description: Modify the code for
AsyncIteratorCallbackHandler.on_llm_new_token to ensure that it does not
add an empty string to the result queue.
- Tag maintainer: @agola11
When using AsyncIteratorCallbackHandler with OpenAIFunctionsAgent, if
the LLM responds with a function_call instead of a direct answer,
`AsyncIteratorCallbackHandler.on_llm_new_token` is called with an empty
string (see the sketch below).
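A minimal sketch of the fix, assuming tokens are pushed onto an asyncio queue inside the handler:
```python
async def on_llm_new_token(self, token: str, **kwargs) -> None:
    if token:  # skip the empty strings emitted during a function_call response
        self.queue.put_nowait(token)
```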
see also: langchain.chat_models.openai.ChatOpenAI._generate
An alternative solution is to modify the
langchain.chat_models.openai.ChatOpenAI._generate and do not call the
run_manager.on_llm_new_token when the token is empty string.
I am not sure which solution is better.
@hwchase17
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
# [SPARQL](https://www.w3.org/TR/rdf-sparql-query/) for
[LangChain](https://github.com/hwchase17/langchain)
## Description
LangChain support for knowledge graphs relying on W3C standards, using
RDFlib: SPARQL / RDF(S) / OWL, with a special focus on RDF.
* Works with local files, files from the web, and SPARQL endpoints
* Supports both SELECT and UPDATE queries
* Includes both a Jupyter notebook with an example and integration tests
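For flavor, a minimal standalone RDFlib SELECT query of the kind this integration builds on (not the chain's own code):
```python
from rdflib import Graph

g = Graph()
g.parse("http://www.w3.org/People/Berners-Lee/card")  # RDF file from the web
for row in g.query("SELECT ?s WHERE { ?s ?p ?o } LIMIT 5"):
    print(row.s)
```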
## Contribution compared to related PRs and discussions
* [Wikibase agent](https://github.com/hwchase17/langchain/pull/2690) -
uses SPARQL, but specifically for wikibase querying
* [Cypher qa](https://github.com/hwchase17/langchain/pull/5078) - graph
DB question answering for Neo4J via Cypher
* [PR 6050](https://github.com/hwchase17/langchain/pull/6050) - tries
something similar, but does not cover UPDATE queries and supports only
RDF
* Discussions on [w3c mailing list](mailto:semantic-web@w3.org) related
to the combination of LLMs (specifically ChatGPT) and knowledge graphs
## Dependencies
* [RDFlib](https://github.com/RDFLib/rdflib)
## Tag maintainer
Graph database related to memory -> @hwchase17
Update in_memory.py to fix "TypeError: keywords must be strings" on
certain dictionaries
Simple fix to prevent a "TypeError: keywords must be strings" error I
encountered in my use case.
@baskaryan
Thanks! Hope it's useful!
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Fix for typos in MongoDB Atlas Vector Search documentation
- Description: rename the invalid function name of the GoogleSerperResults
tool for OpenAI function calling
- Tag maintainer: @hinthornw
When I use the GoogleSerperResults in OpenAIFunctionCall agent, the
following error occurs:
```shell
openai.error.InvalidRequestError: 'Google Serrper Results JSON' does not match '^[a-zA-Z0-9_-]{1,64}$' - 'functions.0.name'
```
So I renamed the GoogleSerperResults property "name" from "Google
Serrper Results JSON" to "google_serrper_results_json", just like
GoogleSerperRun's name "google_serper", and it works.
I guess this should be reasonable.
@hinthornw
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Hi @rlancemartin, @eyurtsev!
- Description: Adding HNSW extension support for Postgres. Similar to the
pgvector vectorstore, with 3 differences:
  1. it uses the HNSW extension for exact and ANN searches,
  2. vectors are of type array of real,
  3. only L2 is supported.
- Dependencies: [HNSW](https://github.com/knizhnik/hnsw) extension for
Postgres
- Example:
```python
db = HNSWVectoreStore.from_documents(
    embedding=embeddings,
    documents=docs,
    collection_name=collection_name,
    connection_string=connection_string,
)

query = "What did the president say about Ketanji Brown Jackson"
docs_with_score: List[Tuple[Document, float]] = db.similarity_search_with_score(query)
```
The example notebook is in the PR too.
- correct `endpoint_name` to `api_url`
- add `headers`
Minor change to the SingleStoreVectorStore:
Updated connection attributes names according to the SingleStoreDB
recommendations
@rlancemartin, @eyurtsev
---------
Co-authored-by: Volodymyr Tkachuk <vtkachuk-ua@singlestore.com>
Description: the docstring suggests `from langchain.llms import
LlamaCppEmbeddings` under the `LlamaCpp()` class example, but
`LlamaCppEmbeddings` is not in `langchain.llms`.
Issue: None open
Tag maintainer: @baskaryan
Documentation update for the [Jina
ecosystem](https://python.langchain.com/docs/ecosystem/integrations/jina)
and `langchain-serve` in the deployments section, to reflect the latest features.
@hwchase17
[Apache HugeGraph](https://github.com/apache/incubator-hugegraph) is a
convenient, efficient, and adaptable graph database, compatible with the
Apache TinkerPop3 framework and the Gremlin query language.
In this PR, HugeGraph and HugeGraphQAChain provide the same
functionality as the existing integration with Neo4j and enable query
generation and question answering over a HugeGraph database. The
difference is that the graph query language supported by HugeGraph is
not Cypher but another very popular graph query language,
[Gremlin](https://tinkerpop.apache.org/gremlin.html).
A notebook example and a simple test case have also been added.
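A hedged usage sketch mirroring the shape of the Neo4j chain (the `HugeGraph` constructor arguments here are assumptions, not confirmed by this changelog):
```python
from langchain.chat_models import ChatOpenAI
from langchain.chains import HugeGraphQAChain
from langchain.graphs import HugeGraph

# connection parameters below are illustrative assumptions
graph = HugeGraph(username="admin", password="xxx",
                  address="localhost", port=8080, graph="hugegraph")
chain = HugeGraphQAChain.from_llm(ChatOpenAI(temperature=0), graph=graph, verbose=True)
chain.run("Who played in The Godfather?")
```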
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
This PR introduces a new Mendable UI tailored to a better search
experience.
We're more closely integrating our traditional search with our AI
generation.
With this change, you won't have to tab back and forth between the
mendable bot and the keyword search. Both types of search are handled in
the same bar. This should make the docs easier to navigate while still
letting users get code generations or AI-summarized answers if they so
wish. Also, it should reduce the cost.
Would love to hear your feedback :)
Cc: @dev2049 @hwchase17
## Description
The type hint for `FAISS.__init__()`'s `relevance_score_fn` parameter
allowed the parameter to be set to `None`. However, a default function
is provided by the constructor. This led to an unnecessary check in the
code, as well as a test to verify this check.
**ASSUMPTION**: There's no reason to ever set `relevance_score_fn` to
`None`.
This PR changes the type hint and removes the unnecessary code.
- Description: Added a new SpacyEmbeddings class for generating
embeddings using the Spacy library.
- Issue: Sentencebert/Bert/Spacy/Doc2vec embedding support #6952
- Dependencies: This change requires the Spacy library and the
'en_core_web_sm' Spacy model.
- Tag maintainer: @dev2049
- Twitter handle: N/A
This change includes a new SpacyEmbeddings class, but does not include a
test or an example notebook.
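A hedged usage sketch (import path assumed; requires `python -m spacy download en_core_web_sm` beforehand):
```python
from langchain.embeddings import SpacyEmbeddings  # assumed export location

embedder = SpacyEmbeddings()
doc_vectors = embedder.embed_documents(["The quick brown fox"])
query_vector = embedder.embed_query("fast fox")
```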
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
**Description**:
The JSON Lines format is used by some services such as OpenAI and
HuggingFace. It's also a convenient alternative to CSV.
This PR adds JSON Lines support to `JSONLoader` and also updates related
tests.
**Tag maintainer**: @rlancemartin, @eyurtsev.
PS: I was not able to build the docs locally, so I didn't update the
related section.
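A hedged usage sketch of the new mode (the field name `.content` is illustrative):
```python
from langchain.document_loaders import JSONLoader

loader = JSONLoader(
    file_path="chat.jsonl",
    jq_schema=".content",   # extract this field from each JSON line
    json_lines=True,        # the new JSON Lines mode
)
docs = loader.load()
```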
Update to Vectara integration
- By user request, added `add_files` to take advantage of Vectara's
capabilities to process files on the backend, without the need for
separate loading of documents and chunking in the chain.
- Updated vectara.ipynb example notebook to be broader and added testing
of add_file()
@hwchase17 - project lead
---------
Co-authored-by: rlm <pexpresss31@gmail.com>
### Description:
Updated the delete function in the Pinecone integration to allow for
deletion of vectors by specifying a filter condition, and to delete all
vectors in a namespace.
Made the ids parameter optional in the delete function in the base
VectorStore class and allowed for additional keyword arguments.
Updated the delete function in several classes (Redis, Chroma, Supabase,
Deeplake, Elastic, Weaviate, and Cassandra) to match the changes made in
the base VectorStore class. This involved making the ids parameter
optional and allowing for additional keyword arguments.
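Hypothetical call shapes after this change (`vectorstore` is an existing Pinecone instance; the argument names are assumptions based on the description):
```python
vectorstore.delete(ids=["id-1", "id-2"])                # delete specific vectors
vectorstore.delete(delete_all=True, namespace="my-ns")  # clear a whole namespace
vectorstore.delete(filter={"source": "old.pdf"})        # delete by metadata filter
```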
Retrying with the same improvements as in #6772, this time trying not to
mess up with branches.
@rlancemartin doing a fresh new PR from a branch with a new name. This
should do. Thank you for your help!
---------
Co-authored-by: Jonathan Ellis <jbellis@datastax.com>
Co-authored-by: rlm <pexpresss31@gmail.com>
Should be no functional changes.
Also keeps `__init__` exposing a lot for backwards compat.
---------
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
### Summary
Updates `UnstructuredEmailLoader` so that it can process attachments in
addition to the e-mail content. The loader will process attachments if
the `process_attachments` kwarg is passed when the loader is
instantiated.
### Testing
```python
file_path = "fake-email-attachment.eml"
loader = UnstructuredEmailLoader(
file_path, mode="elements", process_attachments=True
)
docs = loader.load()
docs[-1]
```
### Reviewers
- @rlancemartin
- @eyurtsev
- @hwchase17
Handle the new retriever events in a way that (I think) is entirely
backwards compatible? Needs more testing for some of the chain changes
and all.
This creates an entire new run type, however. We could also just treat
this as an event within a chain run presumably (same with memory)
Adds a subclass initializer that upgrades old retriever implementations
to the new schema, along with tests to ensure they work.
First commit doesn't upgrade any of our retriever implementations (to
show that we can pass the tests along with additional ones testing the
upgrade logic).
Second commit upgrades the known universe of retrievers in langchain.
- [X] Add callback handling methods for retriever start/end/error (open
to renaming to 'retrieval' if you want that)
- [X] Update BaseRetriever schema to support callbacks
- [X] Tests for upgrading old "v1" retrievers for backwards
compatibility
- [X] Update existing retriever implementations to implement the new
interface
- [X] Update calls within chains to `.{a}get_relevant_documents` to pass
the child callback manager
- [X] Update the notebooks/docs to reflect the new interface
- [X] Test notebooks thoroughly
Not handled:
- Memory pass-throughs: retrieval memory doesn't have a parent callback
manager passed through the method
---------
Co-authored-by: Nuno Campos <nuno@boringbits.io>
Co-authored-by: William Fu-Hinthorn <13333726+hinthornw@users.noreply.github.com>
- Description: In manager.py's `BaseRunManager`, default values were
provided for the `__init__` args `tags` and `inheritable_tags`. They
default to empty lists (`[]`).
- Issue: When running `AsyncCallbackManagerForChainRun` (from
langchain.callbacks.manager import AsyncCallbackManagerForChainRun)
with Nvidia NeMo Guardrails, an exception was raised because these
arguments had no defaults.
If you create a dataset from runs and run the same chain or llm on it
later, it usually works great.
If you have an agent dataset and want to run a different agent on it, or
have more complex schema, it's hard for us to automatically map these
values every time. This PR lets you pass in an input_mapper function
that converts the example inputs to whatever format your model expects.
Support a `max_chunk_bytes` kwarg to pass down to the `bulk` helper, in
order to support the request limits in OpenSearch locally and in AWS
(see the sketch below).
@rlancemartin, @eyurtsev
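A hedged usage sketch (`docs` is a list of Documents prepared elsewhere):
```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import OpenSearchVectorSearch

docsearch = OpenSearchVectorSearch.from_documents(
    docs,
    OpenAIEmbeddings(),
    opensearch_url="http://localhost:9200",
    max_chunk_bytes=10 * 1024 * 1024,  # forwarded to the bulk helper
)
```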
Description: `all_metadatas` was not defined and `OpenAIEmbeddings` was
not imported.
Issue: #6723
Dependencies: lark
Tag maintainer: @vowelparrot, @dev2049
---------
Co-authored-by: rlm <pexpresss31@gmail.com>
# Description
This PR makes it possible to use named vectors from Qdrant in Langchain.
That was requested multiple times, as people want to reuse externally
created collections in Langchain. It doesn't change anything for the
existing applications. The changes were covered with some integration
tests and included in the docs.
## Example
```python
Qdrant.from_documents(
    docs,
    embeddings,
    location=":memory:",
    collection_name="my_documents",
    vector_name="custom_vector",
)
```
### Issue: #2594
Tagging @rlancemartin & @eyurtsev. I'd appreciate your review.
Support for SQLAlchemy 1.3 was removed in version 0.0.203 by change
#6086. Re-adding support.
- Description: Imports SQLAlchemy `Row` at class creation time instead of
at init to support SQLAlchemy <1.4. This is the only breaking change,
and it was introduced in version 0.0.203 (#6086).
A similar change was merged before:
https://github.com/hwchase17/langchain/pull/4647
- Dependencies: Reduces SQLAlchemy dependency to > 1.3
- Tag maintainer: @rlancemartin, @eyurtsev, @hwchase17, @wangxuqi
---------
Co-authored-by: rlm <pexpresss31@gmail.com>
### Scientific Article PDF Parsing via Grobid
`Description:`
This change adds the GrobidParser class, which uses the Grobid library
to parse scientific articles into a universal XML format containing the
article title, references, sections, section text, etc. The GrobidParser
uses a local Grobid server to return PDF documents as XML and parses the
XML to optionally produce documents of individual sentences or of whole
paragraphs. Metadata includes the text, paragraph number, pdf relative
bboxes, pages (text may overlap two pages), section title
(Introduction, Methodology, etc.), section_number (e.g. 1.1, 2.3), the
title of the paper, and finally the file path.
Grobid parsing is useful beyond standard pdf parsing as it accurately
outputs sections and the paragraphs within them. This allows for
post-filtering of results for specific sections, i.e. limiting results to
the methodology section or results. While sections are split via
headings, ideally they could be classified specifically into
introduction, methodology, results, discussion, conclusion. I'm
currently experimenting with chatgpt-3.5 for this function, which could
later be implemented as a textsplitter.
`Dependencies:`
For use, the grobid repo must be cloned and Java must be installed, for
colab this is:
```
!apt-get install -y openjdk-11-jdk -q
!update-alternatives --set java /usr/lib/jvm/java-11-openjdk-amd64/bin/java
!git clone https://github.com/kermitt2/grobid.git
import os
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-11-openjdk-amd64"
os.chdir('grobid')
!./gradlew clean install
```
Once installed, the server is run on localhost:8070 via
```
get_ipython().system_raw('nohup ./gradlew run > grobid.log 2>&1 &')
```
@rlancemartin, @eyurtsev
Twitter Handle: @Corranmac
Grobid Demo Notebook is
[here](https://colab.research.google.com/drive/1X-St_mQRmmm8YWtct_tcJNtoktbdGBmd?usp=sharing).
---------
Co-authored-by: rlm <pexpresss31@gmail.com>
Add API Headers support for Amazon API Gateway to enable Authentication
using DynamoDB.
While preparing a demo project with NebulaGraphQAChain, I found the
prompt needed to be optimized a little bit.
Please @hwchase17 kindly help review.
Thanks!
### Overview
This PR aims at building on #4378, expanding the capabilities and
building on top of the `cassIO` library to interface with the database
(as opposed to using the core drivers directly).
Usage of `cassIO` (a library abstracting Cassandra access for
ML/GenAI-specific purposes) is already established since #6426 was
merged, so no new dependencies are introduced.
In the same spirit, we try to make the interface for using Cassandra
instances uniform throughout LangChain: with all our appreciation of the
work by @jj701, who paved the way for this incremental work (thank
you!), we identified a few reasons for changing the way a
`CassandraChatMessageHistory` is instantiated. Advocating a syntax
change is not something we take lightly, so we add some explanations
about this below.
Additionally, this PR expands on integration testing, enables use of
Cassandra's native Time-to-Live (TTL) features and improves the phrasing
around the notebook example and the short "integrations" documentation
paragraph.
We would kindly request @hwchase17 to review (since this is an elaboration
and proposed improvement of #4378, which had the same reviewer).
### About the __init__ breaking changes
There are
[many](https://docs.datastax.com/en/developer/python-driver/3.28/api/cassandra/cluster/)
options when creating the `Cluster` object, and new ones might be added
at any time. Choosing some of them and exposing them as `__init__`
parameters of `CassandraChatMessageHistory` will prove to be insufficient
for at least some users.
On the other hand, working through `kwargs` or adding a long, long list
of arguments to `__init__` is not a desirable option either. For this
reason, (as done in #6426), we propose that whoever instantiates the
Chat Message History class provide a Cassandra `Session` object, ready
to use. This also enables easier injection of mocks and usage of
Cassandra-compatible connections (such as those to the cloud database
DataStax Astra DB, obtained with a different set of init parameters than
`contact_points` and `port`).
We feel that a breaking change might still be acceptable since LangChain
is at `0.*`. However, while maintaining that the approach we propose
will be more flexible in the future, room could be made for a
"compatibility layer" that respects the current init method. Honestly,
we would to that only if there are strong reasons for it, as that would
entail an additional maintenance burden.
### Other changes
We propose to remove the keyspace creation from the class code for two
reasons: first, production Cassandra instances often employ RBAC so that
the database user reading/writing from tables does not necessarily (and
generally shouldn't) have permission to create keyspaces, and second
that programmatic keyspace creation is not a best practice (it should be
done more or less manually, with extra care about schema mismatches
among nodes, etc.). Removing this (usually unnecessary) operation from
the `__init__` path would also improve initialization performance
(shorter time).
We suggest, likewise, removing the `__del__` method (which would close
the database connection), for the following reason: it is the
recommended best practice to create a single Cassandra `Session` object
throughout an application (it is a resource-heavy object capable of
handling concurrency internally), so in case Cassandra is used in other
ways by the app there is the risk of truncating the connection for all
usages when the history instance is destroyed. Moreover, the `Session`
object, in typical applications, is best left to garbage-collect itself
automatically.
As mentioned above, we defer the actual database I/O to the `cassIO`
library, which is designed to encode practices optimized for LLM
applications (among other) without the need to expose LangChain
developers to the internals of CQL (Cassandra Query Language). CassIO is
already employed by the LangChain's Vector Store support for Cassandra.
We added a few more connection options in the companion notebook example
(most notably, Astra DB) to encourage usage by anyone who cannot run
their own Cassandra cluster.
We surface the `ttl_seconds` option for automatic handling of an
expiration time to chat history messages, a likely useful feature given
that very old messages generally may lose their importance.
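Putting the proposed `__init__` shape and `ttl_seconds` together, a hedged sketch (the parameter names are assumptions based on the description above):
```python
from cassandra.cluster import Cluster
from langchain.memory import CassandraChatMessageHistory  # assumed import path

session = Cluster(["127.0.0.1"]).connect()  # caller supplies a ready Session
history = CassandraChatMessageHistory(
    session_id="conversation-42",
    session=session,
    keyspace="chat_ks",
    ttl_seconds=3600,  # messages expire after one hour
)
```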
We elaborated a bit more on the integration testing (Time-to-live,
separation of "session ids", ...).
### Remarks from linter & co.
We reinstated `cassio` as a dependency both in the "optional" group and
in the "integration testing" group of `pyproject.toml`. This might not
be the right thing to do, in which case the author of this PR offers his
apologies (lack of confidence with Poetry - happy to be pointed in the
right direction, though!).
During linter tests, we were hit by some errors which appear unrelated
to the code in this PR. We left them as-is and report them here for
awareness:
```
langchain/vectorstores/mongodb_atlas.py:137: error: Argument 1 to "insert_many" of "Collection" has incompatible type "List[Dict[str, Sequence[object]]]"; expected "Iterable[Union[MongoDBDocumentType, RawBSONDocument]]" [arg-type]
langchain/vectorstores/mongodb_atlas.py:186: error: Argument 1 to "aggregate" of "Collection" has incompatible type "List[object]"; expected "Sequence[Mapping[str, Any]]" [arg-type]
langchain/vectorstores/qdrant.py:16: error: Name "grpc" is not defined [name-defined]
langchain/vectorstores/qdrant.py:19: error: Name "grpc" is not defined [name-defined]
langchain/vectorstores/qdrant.py:20: error: Name "grpc" is not defined [name-defined]
langchain/vectorstores/qdrant.py:22: error: Name "grpc" is not defined [name-defined]
langchain/vectorstores/qdrant.py:23: error: Name "grpc" is not defined [name-defined]
```
In the same spirit, we observe that to even get `import langchain` to
run, it seems that a `pip install bs4` is missing from the minimal
package installation path.
Thank you!
If I upload a dataset with a single input and output column, we should
be able to let the chain prepare the input without having to maintain a
strict dataset format.
# Adding support for async (_acall) for VertexAICommon LLM
This PR implements the `_acall` method under `_VertexAICommon`. Because
VertexAI itself does not provide an async interface, I implemented it
via a ThreadPoolExecutor that can delegate execution of VertexAI calls
to other threads.
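A minimal sketch of the delegation pattern described above (assuming a synchronous `self._call` exists; not the exact PR code):
```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

_executor = ThreadPoolExecutor(max_workers=5)

async def _acall(self, prompt: str, stop=None) -> str:
    loop = asyncio.get_running_loop()
    # run the blocking VertexAI call on a worker thread
    return await loop.run_in_executor(_executor, lambda: self._call(prompt, stop=stop))
```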
Twitter handle: @polecitoem : )
## Who can review?
Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:
fyi - @agola11 for async functionality
fyi - @Ark-kun from VertexAI
## Description
Tag maintainer: @rlancemartin, @eyurtsev
### log_and_data_dir
`AwaDB.__init__()` accepts a parameter named `log_and_data_dir`. But
`AwaDB.from_texts()` and `AwaDB.from_documents()` accept a parameter
named `logging_and_data_dir`. This inconsistency in this parameter name
can lead to confusion on the part of the caller.
This PR renames `logging_and_data_dir` to `log_and_data_dir` to make all
functions consistent with the constructor.
### embedding
`AwaDB.__init__()` accepts a parameter named `embedding_model`. But
`AwaDB.from_texts()` and `AwaDB.from_documents()` accept a parameter
named `embeddings`. This inconsistency in this parameter name can lead
to confusion on the part of the caller.
This PR renames `embedding_model` to `embeddings` to make AwaDB's
constructor consistent with the classmethod "constructors" as specified
by `VectorStore` abstract base class.
A user has been testing the Apify integration inside langchain and he
was not able to run saved Actor tasks.
This PR adds support for calling saved Actor tasks on the Apify platform
to the existing integration. The structure is very similar to that of
calling Actors.
### Adding the functionality to return the scores with retrieved
documents when using max marginal relevance
- Description: Add the method
`max_marginal_relevance_search_with_score_by_vector` to the FAISS
wrapper. Functionality operates the same as
`similarity_search_with_score_by_vector`, except it uses the max
marginal relevance retrieval framework, as in the
`max_marginal_relevance_search_by_vector` method.
- Dependencies: None
- Tag maintainer: @rlancemartin @eyurtsev
- Twitter handle: @RianDolphin
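A hedged usage sketch (`db` is an existing FAISS store and `query_vector` an embedding computed elsewhere):
```python
docs_and_scores = db.max_marginal_relevance_search_with_score_by_vector(
    query_vector, k=4, fetch_k=20
)
for doc, score in docs_and_scores:
    print(score, doc.page_content[:80])
```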
---------
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
- Description:
- The current code uses `PydanticSchema.schema()` and
`_get_extraction_function` at the same time. As a result, a response
from OpenAI has two nested `info`, and
`PydanticAttrOutputFunctionsParser` fails to parse it. This PR will use
the pydantic class given as an arg instead.
- Issue: no related issue yet
- Dependencies: no dependency change
- Tag maintainer: @dev2049
- Twitter handle: @shotarok28
Description: Adds a brief example of using an OAuth access token with
the Zapier wrapper. Also links to the Zapier documentation to learn more
about OAuth flows.
---------
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
### Summary
This PR adds a LarkSuite (FeiShu) document loader.
> [LarkSuite](https://www.larksuite.com/) is an enterprise collaboration
platform developed by ByteDance.
### Tests
- an integration test case is added
- an example notebook showing usage is added. [Notebook
preview](https://github.com/yaohui-wyh/langchain/blob/master/docs/extras/modules/data_connection/document_loaders/integrations/larksuite.ipynb)
### Who can review?
- PTAL @eyurtsev @hwchase17
---------
Co-authored-by: Yaohui Wang <wangyaohui.01@bytedance.com>
- Add Tencent COS directory and file support for the document loaders
#### Who can review?
@eyurtsev
#### Add an async iterator callback that streams only the final output of an agent
This callback returns an async iterator and only streams the final
output of an agent.
#### Who can review?
Tag maintainers/contributors who might be interested: @agola11
Distance-based vector database retrieval embeds (represents) queries in
high-dimensional space and finds similar embedded documents based on
"distance". But retrieval may produce different results with subtle
changes in query wording, or if the embeddings do not capture the
semantics of the data well. Prompt engineering / tuning is sometimes
done to manually address these problems, but it can be tedious.
The `MultiQueryRetriever` automates the process of prompt tuning by
using an LLM to generate multiple queries from different perspectives
for a given user input query. For each query, it retrieves a set of
relevant documents and takes the unique union across all queries to get
a larger set of potentially relevant documents. By generating multiple
perspectives on the same question, the `MultiQueryRetriever` might be
able to overcome some of the limitations of the distance-based retrieval
and get a richer set of results.
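A hedged usage sketch (`vectordb` is an existing vector store):
```python
from langchain.chat_models import ChatOpenAI
from langchain.retrievers.multi_query import MultiQueryRetriever

retriever = MultiQueryRetriever.from_llm(
    retriever=vectordb.as_retriever(), llm=ChatOpenAI(temperature=0)
)
docs = retriever.get_relevant_documents("How do agents decide which tool to use?")
```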
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Proxies are helpful, especially when you start querying against more
anti-bot websites.
[Proxy
services](https://developers.oxylabs.io/advanced-proxy-solutions/web-unblocker/making-requests)
(of which there are many) and `requests` make it easy to rotate IPs to
prevent banning by just passing along a simple dict to `requests`.
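For example, with plain `requests` (host and credentials are placeholders):
```python
import requests

proxies = {
    "http": "http://user:pass@proxy.example.com:7777",
    "https": "http://user:pass@proxy.example.com:7777",
}
resp = requests.get("https://example.com", proxies=proxies)
```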
CC @rlancemartin, @eyurtsev
### Summary
The Unstructured API will soon begin requiring API keys. This PR updates
the Unstructured integrations docs with instructions on how to generate
Unstructured API keys.
### Reviewers
@rlancemartin
@eyurtsev
@hwchase17
- Description: Add Async functionality to Zapier NLA Tools
- Issue: n/a
- Dependencies: n/a
- Tag maintainer:
Added parentheses to ensure the division operation is performed before
multiplication. This now correctly calculates the cost by dividing the
number of tokens by 1000 first (to get the cost per token), and then
multiplies it with the model's cost per 1k tokens @agola11
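A quick sanity check of the corrected order of operations (names and numbers here are illustrative, not the PR's code):
```python
# e.g. 1,500 tokens at $0.002 per 1k tokens
num_tokens = 1500
cost_per_1k_tokens = 0.002
cost = (num_tokens / 1000) * cost_per_1k_tokens  # 0.003
```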
- **Description**: this PR adds the possibility to raise an exception in
case the HTTP request does not return a 2xx status code. This is
particularly useful when the URL points to a non-existent web page: the
server returns an HTTP status of 404 NOT FOUND, but `WebBaseLoader`
parses and returns the HTTP body of the error message anyway. See the
sketch below.
- **Dependencies**: none,
- **Tag maintainer**: @rlancemartin, @eyurtsev,
- **Twitter handle**: jtolgyesi
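A minimal sketch of the behavior described above; the `raise_for_status` flag name is an assumption, not confirmed API:
```python
from langchain.document_loaders import WebBaseLoader

loader = WebBaseLoader("https://example.com/this-page-does-not-exist")
loader.raise_for_status = True  # assumed flag name for the new behavior
docs = loader.load()  # now raises on a 404 instead of parsing the error page
```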
Adds a way to create the guardrails output parser from a pydantic model.
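A rough sketch of what this could look like; the `from_pydantic` classmethod name is inferred from the PR title and should be treated as an assumption (requires the `guardrails-ai` package):
```python
from pydantic import BaseModel
from langchain.output_parsers import GuardrailsOutputParser

class Person(BaseModel):
    name: str
    age: int

# Assumed interface: build the parser directly from the pydantic model
# instead of from a rail spec string or file.
parser = GuardrailsOutputParser.from_pydantic(Person)
```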
Description: When a 401 response is given back by Zapier, hint to the
end user why that may have occurred
- If an API key was initialized with the wrapper, ask them to check
their API key value
- If an access token was initialized with the wrapper, ask them to check
their access token or verify that it doesn't need to be refreshed.
Tag maintainer: @dev2049
#### Summary
A new approach to loading source code is implemented:
Each top-level function and class in the code is loaded into separate
documents. Then, an additional document is created with the top-level
code, but without the already loaded functions and classes.
This could improve the accuracy of QA chains over source code.
For instance, having this script:
```
class MyClass:
    def __init__(self, name):
        self.name = name

    def greet(self):
        print(f"Hello, {self.name}!")

def main():
    name = input("Enter your name: ")
    obj = MyClass(name)
    obj.greet()

if __name__ == '__main__':
    main()
```
The loader will create three documents with this content:
First document:
```
class MyClass:
    def __init__(self, name):
        self.name = name

    def greet(self):
        print(f"Hello, {self.name}!")
```
Second document:
```
def main():
    name = input("Enter your name: ")
    obj = MyClass(name)
    obj.greet()
```
Third document:
```
# Code for: class MyClass:

# Code for: def main():

if __name__ == '__main__':
    main()
```
A threshold parameter is added to control whether small scripts are
split in this way or not.
At this moment, only Python and JavaScript are supported. The
appropriate parser is determined by examining the file extension.
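A sketch of how the loader might be wired up, following the accompanying notebook; the module paths and parameter names here are assumptions:
```python
from langchain.document_loaders.generic import GenericLoader
from langchain.document_loaders.parsers import LanguageParser

# Each top-level function/class becomes its own Document; files shorter
# than the threshold are loaded whole.
loader = GenericLoader.from_filesystem(
    "./my_project",
    glob="**/*",
    suffixes=[".py"],
    parser=LanguageParser(language="python", parser_threshold=500),
)
docs = loader.load()
```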
#### Tests
This PR adds:
- Unit tests
- Integration tests
#### Dependencies
Only one dependency was added as optional (needed for the JavaScript
parser).
#### Documentation
A notebook is added showing how the loader can be used.
#### Who can review?
@eyurtsev @hwchase17
---------
Co-authored-by: rlm <pexpresss31@gmail.com>
Description: Update documentation to
1) point to updated documentation links at Zapier.com (we've revamped
our help docs and paths), and
2) provide clarity on how to use the wrapper with an access token for
OAuth support
Demo:
Initializing the Zapier Wrapper with an OAuth Access Token
`ZapierNLAWrapper(zapier_nla_oauth_access_token="<redacted>")`
Using LangChain to resolve the current weather in Vancouver BC
leveraging Zapier NLA to lookup weather by coords.
```
> Entering new chain...
I need to use a tool to get the current weather.
Action: The Weather: Get Current Weather
Action Input: Get the current weather for Vancouver BC
Observation: {"coord__lon": -123.1207, "coord__lat": 49.2827, "weather": [{"id": 802, "main": "Clouds", "description": "scattered clouds", "icon": "03d", "icon_url": "http://openweathermap.org/img/wn/03d@2x.png"}], "weather[]icon_url": ["http://openweathermap.org/img/wn/03d@2x.png"], "weather[]icon": ["03d"], "weather[]id": [802], "weather[]description": ["scattered clouds"], "weather[]main": ["Clouds"], "base": "stations", "main__temp": 71.69, "main__feels_like": 71.56, "main__temp_min": 67.64, "main__temp_max": 76.39, "main__pressure": 1015, "main__humidity": 64, "visibility": 10000, "wind__speed": 3, "wind__deg": 155, "wind__gust": 11.01, "clouds__all": 41, "dt": 1687806607, "sys__type": 2, "sys__id": 2011597, "sys__country": "CA", "sys__sunrise": 1687781297, "sys__sunset": 1687839730, "timezone": -25200, "id": 6173331, "name": "Vancouver", "cod": 200, "summary": "scattered clouds", "_zap_search_was_found_status": true}
Thought: I now know the current weather in Vancouver BC.
Final Answer: The current weather in Vancouver BC is scattered clouds with a temperature of 71.69 and wind speed of 3
```
**Description:** Add a documentation page for the Streamlit Callback
Handler integration (#6315)
Notes:
- Implemented as a markdown file instead of a notebook since example
code runs in a Streamlit app (happy to discuss / consider alternatives
now or later)
- Contains an embedded Streamlit app ->
https://mrkl-minimal.streamlit.app/ Currently this app is hosted out of
a Streamlit repo but we're working to migrate the code to a LangChain
owned repo

cc @dev2049 @tconkling
The notebook shows preference scoring between two chains and reports the
Wilson score interval + p-value.
I think I'll add the option to insert ground-truth labels, but that
doesn't have to be in this PR.
- Description: Bug Fix - Added a step variable to keep track of prompts
- Issue: Bug from internal Arize testing - The prompts and responses
that are ingested were not mapped correctly
- Dependencies: N/A
Fix an issue where Chinese characters in the solution content were
converted to ASCII encoding, resulting in an abnormally high token count.
Co-authored-by: qixin <qixin@fintec.ai>
allows for `where` filtering on a collection via `get`
- Description: aligns the langchain chroma vectorstore `get` with the
underlying [chromadb collection
get](https://github.com/chroma-core/chroma/blob/main/chromadb/api/models/Collection.py#L103),
allowing for `where` filtering, etc.; see the sketch below
- Issue: NA
- Dependencies: none
- Tag maintainer: @rlancemartin, @eyurtsev
- Twitter handle: @pappanaka
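A minimal sketch of the aligned `get`; the exact kwargs mirror chromadb's `Collection.get` and should be treated as assumptions:
```python
from langchain.vectorstores import Chroma

db = Chroma(collection_name="my_docs")  # embedding function not needed for `get`
# Kwargs are forwarded to the underlying chromadb collection:
subset = db.get(where={"source": "notion"}, limit=10)
print(subset["ids"], subset["metadatas"])
```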
#### Background
With the development of [structured
tools](https://blog.langchain.dev/structured-tools/), the LangChain team
expanded the platform's functionality to meet the needs of new
applications. The GMail tool, empowered by structured tools, now
supports multiple arguments and powerful search capabilities,
demonstrating LangChain's ability to interact with dynamic data sources
like email servers.
#### Challenge
The current GMail tool only supports GMail, while users often utilize
other email services like Outlook in Office365. Additionally, the
proposed calendar tool in PR
https://github.com/hwchase17/langchain/pull/652 only works with Google
Calendar, not Outlook.
#### Changes
This PR implements an Office365 integration for LangChain, enabling
seamless email and calendar functionality with a single authentication
process.
#### Future Work
With the core Office365 integration complete, future work could include
integrating other Office365 tools such as Tasks and Address Book.
#### Who can review?
@hwchase17 or @vowelparrot can review this PR
#### Appendix
@janscas, I utilized your [O365](https://github.com/O365/python-o365)
library extensively. Given the rising popularity of LangChain and
similar AI frameworks, the convergence of libraries like O365 and tools
like this one is likely. So, I wanted to keep you updated on our
progress.
---------
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
When the tool requires no input, the LLM often gives something like
this:
```json
{
"action": "just_do_it"
}
```
I have attempted to enhance the prompt, but it doesn't appear to be
functioning effectively. Therefore, I believe we should consider easing
the check a little bit.
Signed-off-by: Xiaochao Dong (@damnever) <the.xcdong@gmail.com>
Adding Confluence to Jira tool. Can create a page in Confluence with
this PR. If accepted, will extend functionality to Bitbucket and
additional Confluence features.
---------
Co-authored-by: Ethan Bowen <ethan.bowen@slalom.com>
Since this model name is not in the MODEL_COST_PER_1K_TOKENS list, when
we use get_openai_callback() for the GPT-3.5 model in Azure AI we do not
get the cost of the tokens. This fixes that issue.
#### Who can review?
@hwchase17
@agola11
Co-authored-by: rajib76 <rajib76@yahoo.com>
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
- Fixed an issue where some caching types checked the wrong types,
preventing caching from working
MHTML is a very interesting format since it's used both for emails and
for archived webpages. Some scraping projects want to store pages on
disk to process them later; MHTML is perfect for that use case.
This is heavily inspired by the BeautifulSoup HTML loader, but it
extracts the HTML part from the MHTML file.
---------
Co-authored-by: rlm <pexpresss31@gmail.com>
# beautifulsoup get_text kwargs in WebBaseLoader
- Description: this PR introduces an optional `bs_get_text_kwargs`
parameter to the `WebBaseLoader` constructor. It can be used to pass
kwargs to the downstream BeautifulSoup.get_text call. The most common
usage might be to pass a custom text separator, as seen also in
`BSHTMLLoader`; see the sketch below.
- Tag maintainer: @rlancemartin, @eyurtsev
- Twitter handle: jtolgyesi
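A minimal sketch of passing `bs_get_text_kwargs` through (the URL is illustrative):
```python
from langchain.document_loaders import WebBaseLoader

# Forward kwargs to BeautifulSoup.get_text, e.g. a custom separator
# and whitespace stripping:
loader = WebBaseLoader(
    "https://example.com",
    bs_get_text_kwargs={"separator": " ", "strip": True},
)
docs = loader.load()
```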
- Description: Adds a simple progress bar with tqdm when using
UnstructuredURLLoader. Exposes a new parameter `show_progress_bar`. Very
simple PR.
- Issue: N/A
- Dependencies: N/A
- Tag maintainer: @rlancemartin @eyurtsev
---------
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
- Description: Updated regex to support a new format that was observed
when whatsapp chat was exported.
- Issue: #6654
- Dependencies: No new dependencies
- Tag maintainer: @rlancemartin, @eyurtsev
<!-- Thank you for contributing to LangChain!
Replace this comment with:
- Description: a description of the change,
- Issue: the issue # it fixes (if applicable),
- Dependencies: any dependencies required for this change,
- Tag maintainer: for a quicker response, tag the relevant maintainer
(see below),
- Twitter handle: we announce bigger features on Twitter. If your PR
gets announced and you'd like a mention, we'll gladly shout you out!
If you're adding a new integration, please include:
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use.
Maintainer responsibilities:
- General / Misc / if you don't know who to tag: @dev2049
- DataLoaders / VectorStores / Retrievers: @rlancemartin, @eyurtsev
- Models / Prompts: @hwchase17, @dev2049
- Memory: @hwchase17
- Agents / Tools / Toolkits: @vowelparrot
- Tracing / Callbacks: @agola11
- Async: @agola11
If no one reviews your PR within a few days, feel free to @-mention the
same people again.
See contribution guidelines for more information on how to write/run
tests, lint, etc:
https://github.com/hwchase17/langchain/blob/master/.github/CONTRIBUTING.md
-->
- Description: Fix Typo in LangChain MyScale Integration Doc
@hwchase17
# Add caching to BaseChatModel
Fixes #1644
(Sidenote: While testing, I noticed we have multiple implementations of
Fake LLMs, used for testing. I consolidated them.)
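A minimal usage sketch, assuming the chat-model cache reuses the existing global `llm_cache`:
```python
import langchain
from langchain.cache import InMemoryCache
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

langchain.llm_cache = InMemoryCache()  # the same global cache LLMs already use

chat = ChatOpenAI()
messages = [HumanMessage(content="Tell me a joke")]
chat(messages)  # first call hits the API
chat(messages)  # repeated call is served from the cache
```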
## Who can review?
Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:
Models
- @hwchase17
- @agola11
Twitter: [@UmerHAdil](https://twitter.com/@UmerHAdil) | Discord:
RicChilligerDude#7589
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Motorhead Memory module didn't support deletion of a session. Added a
method to enable deletion.
---------
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
This PR adds a new LLM class for the Amazon API Gateway hosted LLM. The
PR also includes example notebooks for using the LLM class in an Agent
chain.
---------
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
### Just corrected a small inconsistency on a doc page (not exactly a
typo, per se)
- Description: There was an inconsistency due to the use of single
quotes in one place on the [Sequential
Chains](https://python.langchain.com/docs/modules/chains/foundational/sequential_chains)
page of the docs,
- Issue: NA,
- Dependencies: NA,
- Tag maintainer: @dev2049,
- Twitter handle: kambleakash0
This PR targets the `API Reference` documentation.
- Several classes and functions were missing `docstrings`. These
docstrings were created.
- In several places this
```
except ImportError:
raise ValueError(
```
was replaced with
```
except ImportError:
raise ImportError(
```
# Description
It adds a new initialization param in `WikipediaLoader` so we can
override the `doc_content_chars_max` param used in `WikipediaAPIWrapper`
under the hood, e.g:
```python
from langchain.document_loaders import WikipediaLoader
# doc_content_chars_max is the new init param
loader = WikipediaLoader(query="python", doc_content_chars_max=90000)
```
## Decisions
The default value of `doc_content_chars_max` remains 4000, since that is
the current value.
I have added code comments.
# Issue
#6639
# Dependencies
None
# Twitter handle
[@elafo](https://twitter.com/elafo)
- Description: The Aviary integration has changed its URL. This PR
provides a fix for those changes and also makes providing the input URL
to the API optional (since it can be set via env variables).
- Issue: N/A
- Dependencies: N/A
- Twitter handle: N/A
---------
Signed-off-by: Kourosh Hakhamaneshi <kourosh@anyscale.com>
Fix a typo in
`langchain/experimental/plan_and_execute/planners/base.py`, by changing
"Given input, decided what to do." to "Given input, decide what to do."
This is in the docstring for functions running LLM chains which shall
create a plan, "decided" does not make any sense in this context.
The link for the OpenLLM notebook was not migrated to the new format.
Signed-off-by: Aaron <29749331+aarnphm@users.noreply.github.com>
Vertex AI chat is broken right now. That is because `context` is in
`params`, and `chat.send_message` doesn't accept that as a param.
- Closes issue [ChatVertexAI Error: _ChatSessionBase.send_message() got
an unexpected keyword argument 'context'
#6610](https://github.com/hwchase17/langchain/issues/6610)
We may want to load and process all URLs under a root directory.
For example, let's look at the [LangChain JS
documentation](https://js.langchain.com/docs/).
This has many interesting child pages that we may want to read in bulk.
Of course, the `WebBaseLoader` can load a list of pages.
But, the challenge is traversing the tree of child pages and actually
assembling that list!
We do this using the `RecusiveUrlLoader`.
This also gives us the flexibility to exclude some children (e.g., the
`api` directory with > 800 child pages).
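A minimal sketch; the exact class name and `exclude_dirs` parameter are assumptions based on this description:
```python
from langchain.document_loaders.recursive_url_loader import RecursiveUrlLoader

# Crawl the docs tree, skipping the huge generated `api` subtree.
loader = RecursiveUrlLoader(
    url="https://js.langchain.com/docs/",
    exclude_dirs=["https://js.langchain.com/docs/api/"],
)
docs = loader.load()
```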
## Goal
We want to ensure consistency across vectordbs (see the sketch after the
testing checklist):
1/ add a `delete`-by-ID method to the base vectorstore class
2/ ensure `add_texts` performs an `upsert` when IDs are optionally passed
## Testing
- [x] Pinecone: notebook test w/ `langchain_test` vectorstore.
- [x] Chroma: Review by @jeffchuber, notebook test w/ in memory
vectorstore.
- [x] Supabase: Review by @copple, notebook test w/ `langchain_test`
table.
- [x] Weaviate: Notebook test w/ `langchain_test` index.
- [x] Elastic: Reviewed by @vestal. Notebook test w/ `langchain_test`
table.
- [ ] Redis: Asked for review from owner of recent `delete` method
https://github.com/hwchase17/langchain/pull/6222
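The sketch referenced above, illustrating the target interface (`vectorstore` stands for any concrete implementation):
```python
# `vectorstore` stands for any concrete implementation (Pinecone, Chroma, ...).
vectorstore.add_texts(["hello world"], ids=["doc-1"])  # insert with a caller-supplied ID
vectorstore.add_texts(["hello again"], ids=["doc-1"])  # same ID -> upsert, not a duplicate
vectorstore.delete(ids=["doc-1"])                      # new base-class delete-by-ID
```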
Fixes #5456
This PR removes the `callbacks` argument from a tool's schema when
creating a `Tool` or `StructuredTool` with the `from_function` method
and `infer_schema` is set to `True`. The `callbacks` argument is now
removed in the `create_schema_from_function` and `_get_filtered_args`
methods. As suggested by @vowelparrot, this fix provides a
straightforward solution that minimally affects the existing
implementation.
A test was added to verify that this change enables the expected use of
`Tool` and `StructuredTool` when using a `CallbackManager` and inferring
the tool's schema.
- @hwchase17
Many cities have open data portals for events like crime, traffic, etc.
Socrata provides an API for many, including SF (e.g., see
[here](https://dev.socrata.com/foundry/data.sfgov.org/tmnf-yvry)).
This is a new data loader for city data that uses Socrata API.
A new implementation of `StreamlitCallbackHandler`. It formats Agent
thoughts into Streamlit expanders.
You can see the handler in action here:
https://langchain-mrkl.streamlit.app/
Per a discussion with Harrison, we'll be adding a
`StreamlitCallbackHandler` implementation to an upcoming
[Streamlit](https://github.com/streamlit/streamlit) release as well, and
will be updating it as we add new LLM- and LangChain-specific features
to Streamlit.
The idea with this PR is that the LangChain `StreamlitCallbackHandler`
will "auto-update" in a way that keeps it forward- (and backward-)
compatible with Streamlit. If the user has an older Streamlit version
installed, the LangChain `StreamlitCallbackHandler` will be used; if
they have a newer Streamlit version that has an updated
`StreamlitCallbackHandler`, that implementation will be used instead.
(I'm opening this as a draft to get the conversation going and make sure
we're on the same page. We're really excited to land this into
LangChain!)
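A minimal usage sketch inside a Streamlit app (`agent` is assumed to be an already-initialized LangChain agent):
```python
import streamlit as st
from langchain.callbacks import StreamlitCallbackHandler

prompt = st.text_input("Ask a question")
if prompt:
    # Render the agent's thoughts into expanders in the current container.
    st_callback = StreamlitCallbackHandler(st.container())
    response = agent.run(prompt, callbacks=[st_callback])
    st.write(response)
```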
#### Who can review?
@agola11, @hwchase17
# Changes
This PR adds [Clarifai](https://www.clarifai.com/) integration to
Langchain. Clarifai is an end-to-end AI platform. Clarifai offers users
the ability to use many types of LLMs (OpenAI, Cohere, etc. and other
open source models). As well, a Clarifai app can be treated as a vector
database to upload and retrieve data. The integration includes:
- Clarifai LLM integration: Clarifai supports many types of language
model that users can utilize for their application
- Clarifai VectorDB: A Clarifai application can hold data and
embeddings. You can run semantic search with the embeddings
#### Before submitting
- [x] Added integration test for LLM
- [x] Added integration test for VectorDB
- [x] Added notebook for LLM
- [x] Added notebook for VectorDB
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
### Description
We have added a new LLM integration `azureml_endpoint` that allows users
to leverage models from the AzureML platform. Microsoft recently
announced the release of [Azure Foundation
Models](https://learn.microsoft.com/en-us/azure/machine-learning/concept-foundation-models?view=azureml-api-2)
which users can find in the AzureML Model Catalog. The Model Catalog
contains a variety of open source and Hugging Face models that users can
deploy on AzureML. The `azureml_endpoint` allows LangChain users to use
the deployed Azure Foundation Models.
### Dependencies
No added dependencies were required for the change.
### Tests
Integration tests were added in
`tests/integration_tests/llms/test_azureml_endpoint.py`.
### Notebook
A Jupyter notebook demonstrating how to use `azureml_endpoint` was added
to `docs/modules/llms/integrations/azureml_endpoint_example.ipynb`.
### Twitters
[Prakhar Gupta](https://twitter.com/prakhar_in)
[Matthew DeGuzman](https://twitter.com/matthew_d13)
---------
Co-authored-by: Matthew DeGuzman <91019033+matthewdeguzman@users.noreply.github.com>
Co-authored-by: prakharg-msft <75808410+prakharg-msft@users.noreply.github.com>
Since it seems like #6111 will be blocked for a bit, I've forked
@tyree731's fork and implemented the requested changes.
This change adds support to the base Embeddings class for two methods,
`aembed_query` and `aembed_documents`, which are async equivalents of
`embed_query` and `embed_documents` respectively. This ever so slightly
rounds out async support within langchain, with an initial
implementation of this functionality for OpenAI.
Implements https://github.com/hwchase17/langchain/issues/6109
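A minimal sketch of the new async methods:
```python
import asyncio
from langchain.embeddings import OpenAIEmbeddings

async def main() -> None:
    embeddings = OpenAIEmbeddings()
    query_vec = await embeddings.aembed_query("hello world")
    doc_vecs = await embeddings.aembed_documents(["doc one", "doc two"])
    print(len(query_vec), len(doc_vecs))

asyncio.run(main())
```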
---------
Co-authored-by: Stephen Tyree <tyree731@gmail.com>
1. Upgrade the version of AwaDB
2. Add some new interfaces
3. Fix a bug where page content was packed incorrectly
@dev2049 please review, thanks!
---------
Co-authored-by: vincent <awadb.vincent@gmail.com>
Everything needed to support sending messages over WhatsApp Business
Platform (GA), Facebook Messenger (Public Beta) and Google Business
Messages (Private Beta) was present. Just added some details on
leveraging it.
Description:
Update the artifact name of the xml file and the namespaces. Co-authored
with @tjaffri
Co-authored-by: Kenzie Mihardja <kenzie@docugami.com>
### Feature
Using FAISS on a retrievalQA task, I found myself wanting to allow in
multiple sources. From what I understood, the filter feature takes in a
dict of form {key: value} which then will check in the metadata for the
exact value linked to that key.
I added some logic to be able to pass a list which will be checked
against instead of an exact value. Passing an exact value will also
work.
Here's an example of how I could then use it in my own project:
```
pdfs_to_filter_in = ["file_A", "file_B"]
filter_dict = {
    "source": [f"source_pdfs/{pdf_name}.pdf" for pdf_name in pdfs_to_filter_in]
}
retriever = db.as_retriever()
retriever.search_kwargs = {"filter": filter_dict}
```
I added an integration test based on the other ones I found in
`tests/integration_tests/vectorstores/test_faiss.py` under
`test_faiss_with_metadatas_and_list_filter()`.
It doesn't feel like this is worthy of its own notebook or doc, but I'm
open to suggestions if needed.
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
Just some grammar fixes: I found "retriver" instead of "retriever" in
several places across the documentation and in code comments. I fixed
it.
Co-authored-by: andrey.vedishchev <andrey.vedishchev@rgigroup.com>
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
Here are some examples of using StarRocks as a vectordb
```
from langchain.vectorstores import StarRocks
from langchain.vectorstores.starrocks import StarRocksSettings
embeddings = OpenAIEmbeddings()
# configure starrocks settings
settings = StarRocksSettings()
settings.port = 41003
settings.host = '127.0.0.1'
settings.username = 'root'
settings.password = ''
settings.database = 'zya'
# to fill new embeddings
docsearch = StarRocks.from_documents(split_docs, embeddings, config = settings)
# or to use already-built embeddings in database.
docsearch = StarRocks(embeddings, settings)
```
#### Who can review?
Tag maintainers/contributors who might be interested:
@dev2049
---------
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
### Integration of Infino with LangChain for Enhanced Observability
This PR aims to integrate [Infino](https://github.com/infinohq/infino),
an open source observability platform written in Rust for storing
metrics and logs at scale, with LangChain, providing users with a
streamlined and efficient method of tracking and recording LangChain
experiments. By incorporating Infino into LangChain, users will be able
to gain valuable insights and easily analyze the behavior of their
language models.
#### Please refer to the following files related to integration:
- `InfinoCallbackHandler`: A [callback
handler](https://github.com/naman-modi/langchain/blob/feature/infino-integration/langchain/callbacks/infino_callback.py)
specifically designed for storing chain responses within Infino.
- Example `infino.ipynb` file: A comprehensive notebook named
[infino.ipynb](https://github.com/naman-modi/langchain/blob/feature/infino-integration/docs/extras/modules/callbacks/integrations/infino.ipynb)
has been included to guide users on effectively leveraging Infino for
tracking LangChain requests.
- [Integration
Doc](https://github.com/naman-modi/langchain/blob/feature/infino-integration/docs/extras/ecosystem/integrations/infino.mdx)
for Infino integration.
By integrating Infino, LangChain users will gain access to powerful
visualization and debugging capabilities. Infino enables easy tracking
of inputs, outputs, token usage, and execution time of LLMs. This
comprehensive observability ensures a deeper understanding of individual
executions and facilitates effective debugging.
Co-authors: @vinaykakade @savannahar68
---------
Co-authored-by: Vinay Kakade <vinaykakade@gmail.com>
This PR adds Rockset as a vectorstore for langchain.
[Rockset](https://rockset.com/blog/introducing-vector-search-on-rockset/)
is a real-time OLAP database which provides fast and efficient vector
search functionality. Further, since it is entirely schemaless, it can
store metadata in separate columns thereby allowing fast metadata
filters during vector similarity search (as opposed to storing the
entire metadata in a single JSON column). It currently supports three
distance functions: `COSINE_SIMILARITY`, `EUCLIDEAN_DISTANCE`, and
`DOT_PRODUCT`.
This PR adds `rockset` client as an optional dependency.
We would love a twitter shoutout, our handle is
https://twitter.com/RocksetCloud
---------
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
This pull request introduces a new feature to the LangChain QA Retrieval
Chains with Structures. The change involves adding a prompt template as
an optional parameter for the RetrievalQA chains that utilize the
recently implemented OpenAI Functions.
The main purpose of this enhancement is to provide users with the
ability to input a more customizable prompt to the chain. By introducing
a prompt template as an optional parameter, users can tailor the prompt
to their specific needs and context, thereby improving the flexibility
and effectiveness of the RetrievalQA chains.
## Changes Made
- Created a new optional parameter, "prompt", for the RetrievalQA with
structure chains.
- Added an example to the RetrievalQA with sources notebook.
My twitter handle is @El_Rey_Zero
---------
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
Added the functionality to leverage 3 new Codey models from Vertex AI:
- code-bison - Code generation using the existing LLM integration
- code-gecko - Code completion using the existing LLM integration
- codechat-bison - Code chat using the existing chat_model integration
---------
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
This PR adds `KuzuGraph` and `KuzuQAChain` for interacting with [Kùzu
database](https://github.com/kuzudb/kuzu). Kùzu is an in-process
property graph database management system (GDBMS) built for query speed
and scalability. The `KuzuGraph` and `KuzuQAChain` provide the same
functionality as the existing integration with NebulaGraph and Neo4j and
enables query generation and question answering over Kùzu database.
A notebook example and a simple test case have also been added.
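A minimal sketch following the added notebook (assumes a local Kùzu database and the `kuzu` package):
```python
import kuzu
from langchain.chat_models import ChatOpenAI
from langchain.chains import KuzuQAChain
from langchain.graphs import KuzuGraph

# Point at a local Kùzu database directory.
db = kuzu.Database("test_db")
graph = KuzuGraph(db)

chain = KuzuQAChain.from_llm(ChatOpenAI(temperature=0), graph=graph, verbose=True)
chain.run("Who acted in The Godfather: Part II?")
```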
---------
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
#### Fix
Added the mention of "store" amongst the tasks that the data connection
module can perform aside from the existing 3 (load, transform and
query). Particularly, this implies the generation of embeddings vectors
and the creation of vector stores.
This addresses #6291 adding support for using Cassandra (and compatible
databases, such as DataStax Astra DB) as a [Vector
Store](https://cwiki.apache.org/confluence/display/CASSANDRA/CEP-30%3A+Approximate+Nearest+Neighbor(ANN)+Vector+Search+via+Storage-Attached+Indexes).
A new class `Cassandra` is introduced, which complies with the contract
and interface for a vector store, along with the corresponding
integration test, a sample notebook and modified dependency toml.
Dependencies: the implementation relies on the library `cassio`, which
simplifies interacting with Cassandra for ML- and LLM-oriented
workloads. CassIO, in turn, uses the `cassandra-driver` low-level
driver to communicate with the database. The former is added as an
optional dependency (+ in `extended_testing`); the latter was already in
the project.
Integration testing relies on a locally-running instance of Cassandra.
[Here](https://cassio.org/more_info/#use-a-local-vector-capable-cassandra)
a detailed description can be found on how to compile and run it (at the
time of writing the feature has not made it yet to a release).
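A minimal usage sketch; the constructor argument names follow this PR's interface and should be treated as assumptions (assumes a locally running, vector-capable Cassandra):
```python
from cassandra.cluster import Cluster
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Cassandra

# Connect to a local, vector-capable Cassandra node.
session = Cluster(["127.0.0.1"]).connect()
vectorstore = Cassandra(
    embedding=OpenAIEmbeddings(),
    session=session,
    keyspace="demo_ks",       # keyspace must already exist
    table_name="demo_table",
)
vectorstore.add_texts(["Cassandra can serve as a vector store"])
docs = vectorstore.similarity_search("vector store", k=1)
```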
During development of the integration tests, I added a new "fake
embedding" class for what I consider a more controlled way of testing
the MMR search method. Likewise, I had to amend what looked like a
glitch in the behaviour of `ConsistentFakeEmbeddings` whereby an
`embed_query` call would have bypassed storage of the requested text in
the class cache for use in later repeated invocations.
@dev2049 might be the right person to tag here for a review. Thank you!
---------
Co-authored-by: rlm <pexpresss31@gmail.com>
Hello Folks,
Thanks for creating and maintaining this great project. I'm excited to
submit this PR to add Alibaba Cloud OpenSearch as a new vector store.
OpenSearch is a one-stop platform to develop intelligent search
services. OpenSearch was built based on the large-scale distributed
search engine developed by Alibaba. OpenSearch serves more than 500
business cases in Alibaba Group and thousands of Alibaba Cloud
customers. OpenSearch helps develop search services in different search
scenarios, including e-commerce, O2O, multimedia, the content industry,
communities and forums, and big data query in enterprises.
OpenSearch provides the vector search feature. In specific scenarios,
especially test question search and image search scenarios, you can use
the vector search feature together with the multimodal search feature to
improve the accuracy of search results.
This PR includes:
- an `AlibabaCloudOpenSearch` class that can connect to an Alibaba Cloud
OpenSearch instance,
- adding embeddings and metadata into an OpenSearch data source,
- querying by squared Euclidean distance and metadata,
- integration tests,
- an IPython notebook and docs.
I have read your contributing guidelines, and I have passed the tests
below:
- [x] make format
- [x] make lint
- [x] make coverage
- [x] make test
---------
Co-authored-by: zhaoshengbo <shengbo.zsb@alibaba-inc.com>
Already supported in the reverse operation in
`_convert_message_to_dict()`, this just provides parity.
@hwchase17
@agola11
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Fix issue #6380
#### Who can review?
Tag maintainers/contributors who might be interested:
@hwchase17
---------
Co-authored-by: HubertKl <HubertKl>
Support Baidu list-type answer_box
From [this document](https://serpapi.com/baidu-answer-box) we know
that the answer_box attribute returned by the Baidu interface is a list
containing only one object, but an error occurs when the current code is
executed.
So when answer_box is a list, we reset res["answer_box"] so that the
code can execute successfully.
Caching wasn't accounting for which model was used, so a result from the
first executed model would be returned for the same prompt on a
different model.
This was because `Replicate._identifying_params` did not include the
`model` parameter.
FYI
- @cbh123
- @hwchase17
- @agola11
# Provide the latest duckduckgo_search API
The commit involves two files related to DuckDuckGo query operations and
an upgrade of the DuckDuckGo module to version 3.8.3. Specifically, in
the duckduckgo_search.py file, a DDGS() class instance replaces the
previous ddg() function, and the time parameter name in the
get_snippets() and results() methods is changed from "time" to
"timelimit" to accommodate recent changes. In the pyproject.toml file,
the duckduckgo-search module is upgraded to version 3.8.3.
[duckduckgo_search readme
attention](https://github.com/deedy5/duckduckgo_search): Versions before
v2.9.4 no longer work as of May 12, 2023
## Who can review?
@vowelparrot
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
When trying to use OpenAI models like 'text-davinci-002' or
'text-davinci-003', the agent doesn't work and the message is 'Only
supported with OpenAI models.' The error message should be 'Only
supported with ChatOpenAI models.'
My Twitter handle is @alonsosilva
#### Who can review?
Tag maintainers/contributors who might be interested:
@hwchase17
Co-authored-by: SILVA Alonso <alonso.silva@nokia-bell-labs.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
I apologize for the error: the 'ANTHROPIC_API_URL' environment variable
doesn't take effect if the 'anthropic_api_url' parameter has a default
value.
#### Who can review?
Models
- @hwchase17
- @agola11
1. Introduced new distance strategies support: **DOT_PRODUCT** and
**EUCLIDEAN_DISTANCE** for enhanced flexibility.
2. Implemented a feature to filter results based on metadata fields.
3. Incorporated connection attributes specifying "langchain python sdk"
usage for enhanced traceability and debugging.
4. Expanded the suite of integration tests for improved code
reliability.
5. Updated the existing notebook with the usage example
@dev2049
---------
Co-authored-by: Volodymyr Tkachuk <vtkachuk-ua@singlestore.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
W.r.t. recent changes, ChatPromptTemplate is not accepting partial
variables. This PR should fix that issue.
Fixes #6431
#### Who can review?
@hwchase17
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Throwing ToolException when incorrect arguments are passed to tools so
that the agent can course-correct.
# Incorrect argument count handling
I was facing an error where the agent passed incorrect arguments to
tools. As per the discussions going around, I started throwing
ToolException to allow the model to course correct.
## Who can review?
Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Fixes a link typo from `/-/route` to `/-/routes`,
and changes the endpoint format
from `f"{self.anyscale_service_url}/{self.anyscale_service_route}"` to
`f"{self.anyscale_service_url}{self.anyscale_service_route}"`.
Also adds documentation about the format of the endpoint.
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Fixed several inconsistencies:
- file names and notebook titles should be similar, otherwise the ToC on
the [retrievers
page](https://python.langchain.com/en/latest/modules/indexes/retrievers.html)
and the left ToC tab differ. For example, `Self-querying with Chroma` is
currently not sorted alphabetically correctly because its file is named
`chroma_self_query.ipynb`
- `Stringing compressors and document transformers...` demoted from `#`
to `##`; otherwise it appears in the ToC
- several formatting problems
#### Who can review?
@hwchase17
@dev2049
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
The `CustomOutputParser` needs to throw `OutputParserException` when it
fails to parse the response from the agent, so that the executor can
[catch it and
retry](be9371ca8f/langchain/agents/agent.py (L767))
when `handle_parsing_errors=True`.
#### Who can review?
Tag maintainers/contributors who might be interested: @hwchase17
#### Description
- Removed two backticks surrounding the phrase "chat messages as"
- This phrase stood out among other formatted words/phrases such as
`prompt`, `role`, `PromptTemplate`, etc., which all seem to have a clear
function.
- `chat messages as`, formatted as such, confused me while reading,
leading me to believe the backticks were misplaced.
#### Who can review?
@hwchase17
A minor newline-character fix in the markdown.
Also, this option is not yet in the latest version of LangChain
(0.0.190) from Conda. Maybe in the next update.
@eyurtsev
@hwchase17
Just so it is consistent with other `VectorStore` classes.
This is a follow-up of #6056, which also discussed the potential of
adding `similarity_search_by_vector_returning_embeddings`; we will
continue that discussion here.
potentially related: #6286
#### Who can review?
Tag maintainers/contributors who might be interested: @rlancemartin
This PR adds an example of doing question answering over documents using
OpenAI Function Agents.
#### Who can review?
@hwchase17
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Fixes: ChatAnthropic was mutating the input message list during
formatting, which isn't ideal because you could be changing the behavior
for other chat models when using the same input.
Arize released a new Generative LLM model type; this adjusts the
callback function to the new logging. Added Arize imports; please delete
if not necessary.
Specifically, this change makes sure that the prompt and response pairs
from LangChain agents are logged into Arize as a Generative LLM model,
instead of our previous categorical model. In order to do this, the
callback function collects the necessary data and passes it into Arize
using the Python Pandas SDK.
The Arize library, specifically pandas.logger, is an additional
dependency.
Notebook For Test:
https://docs.arize.com/arize/resources/integrations/langchain
Who can review?
Tag maintainers/contributors who might be interested:
@agola11 (Tracing / Callbacks)
- return raw and full output (but keep run shortcut method functional)
- change output parser to take in generations (good for working with
messages)
- add output parser to base class, always run (default to same as
current)
---------
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Add memory support for `OpenAIFunctionsAgent` like
`StructuredChatAgent`.
#### Who can review?
@hwchase17
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
A must-include for SitemapLoader to avoid SSL verification errors.
Setting 'verify' to False via ```sitemap_loader.requests_kwargs =
{"verify": False}``` does not bypass SSL verification on some websites.
There are websites (e.g. https://researchadmin.asu.edu/sitemap.xml)
where setting "verify" to False as shown below would not work:
```
sitemap_loader.requests_kwargs = {"verify": False}
```
We need this merge to tell the session to use a connector with a
specific argument about SSL:
```
# For SiteMap SSL verification
if not self.requests_kwargs['verify']:
    connector = aiohttp.TCPConnector(ssl=False)
else:
    connector = None
```
Fixes #5483
#### Who can review?
Tag maintainers/contributors who might be interested:
@hwchase17
@eyurtsev
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
@agola11
Issue: #6193
I added the new pricing for the new models.
Also, gpt-3.5-turbo pricing got split into "input" and "output" pricing;
the callback currently does not support that.
Can't pass the system_message argument; the prompt always shows the
default message "System: You are a helpful AI assistant."
```
system_message = SystemMessage(
    content="You are an AI that provides information to Human regarding documentation."
)
agent = initialize_agent(
    tools,
    llm=openai_llm_chat,
    agent=AgentType.OPENAI_FUNCTIONS,
    system_message=system_message,
    agent_kwargs={
        "system_message": system_message,
    },
    verbose=False,
)
```
To bypass SSL verification errors during fetching, you can include the
`verify=False` parameter. This markdown proves useful, especially for
beginners in the field of web scraping.
Fixes #6079
#### Who can review?
Tag maintainers/contributors who might be interested:
@hwchase17
@eyurtsev
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
To bypass SSL verification errors during web scraping, you can include
the ssl_verify=False parameter along with the headers parameter. This
combination of arguments proves useful, especially for beginners in the
field of web scraping.
Fixes #1829
#### Who can review?
Tag maintainers/contributors who might be interested:
@hwchase17 @eyurtsev
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Hi, I made a small improvement for BaseOpenAI.
I added a max_context_size attribute to BaseOpenAI so that we can get
the max context size directly, instead of only getting the maximum token
size of the prompt through the max_tokens_for_prompt method.
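A minimal sketch of the new attribute (the printed value is illustrative, and the exact attribute name follows this PR's description):
```python
from langchain.llms import OpenAI

llm = OpenAI(model_name="text-davinci-003")
print(llm.max_context_size)             # the full context window, e.g. 4097
print(llm.max_tokens_for_prompt("Hi"))  # context size minus the prompt's tokens
```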
Who can review?
@hwchase17 @agola11
I followed the [Common
Tasks](c7db9febb0/.github/CONTRIBUTING.md);
the tests all passed.
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
LLM configurations can be loaded from a Python dict (or JSON file
deserialized as dict) using the
[load_llm_from_config](8e1a7a8646/langchain/llms/loading.py (L12))
function.
However, the type string in the `type_to_cls_dict` lookup dict differs
from the type string defined in some LLM classes. This means that the
LLM object can be saved, but not loaded again, because the type strings
differ.
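A round-trip sketch showing where the mismatch bites (illustrative; `load_llm` reads the JSON back through `type_to_cls_dict`):
```python
from langchain.llms import OpenAI
from langchain.llms.loading import load_llm

llm = OpenAI(temperature=0)
llm.save("llm.json")         # writes the class's `_llm_type` string
llm2 = load_llm("llm.json")  # fails if `type_to_cls_dict` expects a different string
```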
The current version of chat history with DynamoDB doesn't correctly
handle the case when a table has no chat history. This change adds the
missing error handling.
Fixes https://github.com/hwchase17/langchain/issues/6088
#### Who can review?
Tag maintainers/contributors who might be interested:
@hwchase17
Fixes #6131
Simply passes kwargs forward from similarity_search to helper functions
so that search_kwargs are applied to the search as originally intended.
See the bug for repro steps.
#### Who can review?
@hwchase17
@dev2049
Twitter: poshporcupine
Very small typo in the Constitutional AI critique default prompt. The
negation "If there is *no* material critique of ..." is used twice; it
should be used only in the first instance.
Cheers,
Pierre
Fixes https://github.com/hwchase17/langchain/issues/6208
#### Who can review?
Tag maintainers/contributors who might be interested:
<!-- For a quicker response, figure out the right person to tag with @
@hwchase17 - project lead
Tracing / Callbacks
- @agola11
Async
- @agola11
DataLoaders
- @eyurtsev
Models
- @hwchase17
- @agola11
Agents / Tools / Toolkits
- @hwchase17
VectorStores / Retrievers / Memory
- @dev2049
-->
VectorStores / Retrievers / Memory
- @dev2049
Hot fixes for Deep Lake [would highly appreciate expedited review]
* The deeplake version was hardcoded, so after deeplake upgraded, the integration failed with a confusing error
* Fixed an additional integration test related to the embedding function
* Fixed docs for code-understanding links after the docs upgrade
* Removed the public parameter from the notebook to make sure the code-understanding notebook works
#### Who can review?
@hwchase17 @dev2049
---------
Co-authored-by: Davit Buniatyan <d@activeloop.ai>
Fixes #5807
#### Who can review?
Tag maintainers/contributors who might be interested: @dev2049
Related to https://github.com/hwchase17/langchain/issues/6225
Just copied the implementation from the `generate` function to `agenerate` and tested it.
Didn't run any official tests, though.
Fixes #6225
#### Who can review?
Tag maintainers/contributors who might be interested:
@hwchase17, @agola11
The LLM integration
[HuggingFaceTextGenInference](https://github.com/hwchase17/langchain/blob/master/langchain/llms/huggingface_text_gen_inference.py)
already has streaming support.
However, when streaming is enabled, it always returns an empty string as the final output text when the LLM finishes. This is because `text` is initialized to an empty string and never updated.
This PR fixes the collection of the final output text by concatenating
new tokens.
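A simplified sketch of the fix, following the `text_generation` client's streaming interface (details may differ from the merged code):
```python
text = ""
for res in client.generate_stream(prompt, **params):
    if not res.token.special:
        # Previously tokens were only streamed to callbacks while `text`
        # stayed "", so the finished LLM returned an empty string.
        text += res.token.text
return text
```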
Similar to https://github.com/hwchase17/langchain/pull/5818
Added the functionality to save/load the Graph Cypher QA Chain, after a user reported the following error:
> raise NotImplementedError("Saving not supported for this chain type.")\nNotImplementedError: Saving not supported for this chain type.\n'
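A hedged sketch of what this enables (`llm` and `graph` assumed to be set up already; file name illustrative):
```python
from langchain.chains import GraphCypherQAChain
from langchain.chains.loading import load_chain

chain = GraphCypherQAChain.from_llm(llm=llm, graph=graph)
chain.save("cypher_qa.yaml")  # used to raise NotImplementedError
restored = load_chain("cypher_qa.yaml", graph=graph)
```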
In LangChain, all module classes are enumerated in the `__init__.py` file of the corresponding module. But some classes were missed and were not included in the module `__init__.py` files.
This PR:
- added the missed classes to the module `__init__.py` files
- sorted the `__init__.py:__all__` variable values (lists of the class names)
- renamed `langchain.tools.sql_database.tool.QueryCheckerTool` to `QuerySQLCheckerTool` because it conflicted with `langchain.tools.spark_sql.tool.QueryCheckerTool`
- changes to `pyproject.toml`:
  - added `pgvector` to `pyproject.toml:extended_testing`
  - added `pandas` to `pyproject.toml:[tool.poetry.group.test.dependencies]`
- commented out `streamlit` in `callbacks/__init__.py`, because `streamlit` now requires Python >=3.7, !=3.9.7
- fixed duplicate names in `tools`
- fixed the corresponding unit tests
#### Who can review?
@hwchase17
@dev2049
Fixed a PermissionError that occurred when downloading PDF files via HTTP in BasePDFLoader on Windows.
When downloading PDF files via HTTP, BasePDFLoader uses NamedTemporaryFile. On **Windows**, this file cannot be opened again while it is still open ([Python Doc](https://docs.python.org/3.9/library/tempfile.html#tempfile.NamedTemporaryFile)).
So, we create a **temporary directory** with TemporaryDirectory and place the downloaded file there. The temporary directory is deleted in the destructor.
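A minimal sketch of the approach, simplified from the actual loader:
```python
import os
import tempfile
from urllib.request import urlretrieve


class PDFDownloadSketch:
    def __init__(self, url: str):
        # On Windows, a NamedTemporaryFile cannot be reopened while it is
        # still open, so download into a TemporaryDirectory instead.
        self._temp_dir = tempfile.TemporaryDirectory()
        self.file_path = os.path.join(self._temp_dir.name, "download.pdf")
        urlretrieve(url, self.file_path)

    def __del__(self):
        # The temporary directory (and the file in it) is removed here.
        self._temp_dir.cleanup()
```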
Fixes #2698
#### Who can review?
Tag maintainers/contributors who might be interested:
- @eyurtsev
- @hwchase17
This will add the ability to add an AsyncCallbackManager (handler) for
the reducer chain, which would be able to stream the tokens via the
`async def on_llm_new_token` callback method
Fixes [#5532](https://github.com/hwchase17/langchain/issues/5532)
@hwchase17 @agola11
The following code snippet explains how this change would be used to enable `reduce_llm` with streaming support in a `map_reduce` chain.
I have tested this change and it works for the streaming use case of reducer responses. I am happy to share more information if this solution makes sense.
```python
# Async callback handler
from typing import Any

from langchain.callbacks.base import AsyncCallbackHandler
from langchain.callbacks.manager import AsyncCallbackManager
from langchain.chains import ConversationalRetrievalChain
from langchain.chains.question_answering import load_qa_chain
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI


class StreamingLLMCallbackHandler(AsyncCallbackHandler):
    """Callback handler for streaming LLM responses."""

    def __init__(self, websocket):
        self.websocket = websocket

    # This callback method is executed asynchronously for each new token
    async def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
        resp = ChatResponse(sender="bot", message=token, type="stream")
        await self.websocket.send_json(resp.dict())


# Chain setup (`websocket`, `ChatResponse`, `manager`, `vectorstore`, and
# `question_generator` come from the surrounding application)
stream_handler = StreamingLLMCallbackHandler(websocket)
stream_manager = AsyncCallbackManager([stream_handler])
streaming_llm = ChatOpenAI(
    streaming=True,
    callback_manager=stream_manager,
    verbose=False,
    temperature=0,
)
main_llm = OpenAI(
    temperature=0,
    verbose=False,
)
doc_chain = load_qa_chain(
    llm=main_llm,
    reduce_llm=streaming_llm,
    chain_type="map_reduce",
    callback_manager=manager,
)
qa_chain = ConversationalRetrievalChain(
    retriever=vectorstore.as_retriever(),
    combine_docs_chain=doc_chain,
    question_generator=question_generator,
    callback_manager=manager,
)
# `acall` triggers `acombine_docs` on `map_reduce`, which calls
# `_aprocess_result`, which in turn calls `self.combine_document_chain.arun`,
# so the async callback is awaited
result = await qa_chain.acall(
    {"question": question, "chat_history": chat_history}
)
```
Hi again @agola11! 🤗
## What's in this PR?
After playing around with different chains, we noticed that some chains were using different `output_key`s and we were only handling some of them, so we've extended the support to any output, whether it's a Python list or a string.
Kudos to @dvsrepo for spotting this!
---------
Co-authored-by: Daniel Vila Suero <daniel@argilla.io>
Fixes https://github.com/ShreyaR/guardrails/issues/155
Enables guardrails re-asking by specifying an LLM API in the output parser.
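A hypothetical sketch of the usage, assuming the `api` parameter is exposed on `from_rail` as this PR describes (rail file name illustrative):
```python
import openai
from langchain.output_parsers import GuardrailsOutputParser

parser = GuardrailsOutputParser.from_rail(
    "spec.rail",
    api=openai.ChatCompletion.create,  # lets guardrails re-ask the LLM on a failed parse
)
```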
Skip building docs previews for any branch that doesn't start with `__docs__`. Will eventually update this to look at code diff directories, but patching for now.
We propose an enhancement to the web-based loader's initialize method by introducing a "verify" option. This addresses SSL verification errors encountered on certain web pages. By providing users with the option to set the verify parameter to False, we offer greater flexibility and control.
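A hedged sketch; the loader keeps a `requests.Session`, so disabling verification looks roughly like this (the exact parameter/attribute name in the final API may differ):
```python
from langchain.document_loaders import WebBaseLoader

loader = WebBaseLoader("https://self-signed.badssl.com/")
loader.session.verify = False  # skip SSL certificate verification
docs = loader.load()
```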
### Fixes #6079
#### Who can review?
@eyurtsev @hwchase17
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
[Feature] Users can customize the Anthropic API URL
#### Who can review?
Tag maintainers/contributors who might be interested:
Models
- @hwchase17
- @agola11
Added `search_by_vector` support to the Qdrant vector store.
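Hypothetical usage via the standard `VectorStore` by-vector API (`qdrant` and `embeddings` assumed to be set up already):
```python
query_vector = embeddings.embed_query("What did the president say about the economy?")
docs = qdrant.similarity_search_by_vector(query_vector, k=4)
```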
### Who can review
VectorStores / Retrievers / Memory
- @dev2049
@eyurtsev
The existing GoogleDrive implementation always needs a service account to be available at the credentials location. When running on GCP services such as Cloud Run, a service account already exists in the service's metadata, so no physical key is necessary. This change adds a check to detect whether the loader is running in such an environment and, if so, uses that authentication instead.
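A hedged sketch of the idea using google-auth's standard credential discovery (scopes and key path illustrative; the merged code may differ):
```python
import google.auth
from google.auth.exceptions import DefaultCredentialsError
from google.oauth2 import service_account

SCOPES = ["https://www.googleapis.com/auth/drive.readonly"]

try:
    # On GCP services such as Cloud Run, this picks up the service account
    # from the instance metadata, so no key file is needed.
    creds, _ = google.auth.default(scopes=SCOPES)
except DefaultCredentialsError:
    # Fall back to the key file at the configured credentials location.
    creds = service_account.Credentials.from_service_account_file(
        "credentials/keys.json", scopes=SCOPES
    )
```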
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Add oobabooga/text-generation-webui support as an LLM. Currently,
supports using text-generation-webui's non-streaming API interface.
Allows users who already have text-gen running to use the same models
with langchain.
Simple usage, similar to other supported LLMs:
```python
from langchain.llms import TextGen

llm = TextGen(model_url="http://localhost:5000")
```
#### Who can review?
@hwchase17 - project lead
---------
Co-authored-by: Hien Ngo <Hien.Ngo@adia.ae>
Hi there:
My earlier AnalyticDB VectorStore implementation used two tables to store documents; using just one table is a better approach. So this commit tries to improve the AnalyticDB VectorStore implementation without affecting user behavior:
1. Streamline the `post_init` behavior by creating a single table with vector indexing.
2. Update the `add_texts` API for document insertion.
3. Optimize `similarity_search_with_score_by_vector` to retrieve results directly from the table.
4. Implement `_similarity_search_with_relevance_scores`.
5. Add an `embedding_dimension` parameter to support embedding functions with different dimensions.
Users can continue using the API as before, and the previously added test cases are sufficient to cover this commit.
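Hypothetical usage of the new parameter (connection string illustrative; `HuggingFaceEmbeddings` defaults to 768-dimensional vectors):
```python
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import AnalyticDB

embeddings = HuggingFaceEmbeddings()
vector_store = AnalyticDB.from_texts(
    texts=["hello", "world"],
    embedding=embeddings,
    connection_string="postgresql+psycopg2://user:pass@host:5432/db",
    embedding_dimension=768,  # match the embedding function's output size
)
```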
Fixes #6039
#### Who can review?
Tag maintainers/contributors who might be interested:
@hwchase17 @agola11
## DocArray as a Retriever
[DocArray](https://github.com/docarray/docarray) is an open-source tool
for managing your multi-modal data. It offers flexibility to store and
search through your data using various document index backends. This PR
introduces `DocArrayRetriever` - which works with any available backend
and serves as a retriever for Langchain apps.
Also, I added 2 notebooks:
- DocArray Backends: an intro to all 5 currently supported backends and how to initialize, index, and use them as a retriever
- DocArray Usage: showcasing what additional search parameters you can pass to create versatile retrievers
Example:
```python
from docarray.index import InMemoryExactNNIndex
from docarray import BaseDoc, DocList
from docarray.typing import NdArray
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.retrievers import DocArrayRetriever


# define document schema
class MyDoc(BaseDoc):
    description: str
    description_embedding: NdArray[1536]


embeddings = OpenAIEmbeddings()

# create documents
descriptions = ["description 1", "description 2"]
desc_embeddings = embeddings.embed_documents(texts=descriptions)
docs = DocList[MyDoc](
    [
        MyDoc(description=desc, description_embedding=embedding)
        for desc, embedding in zip(descriptions, desc_embeddings)
    ]
)

# initialize document index with data
db = InMemoryExactNNIndex[MyDoc](docs)

# create a retriever
retriever = DocArrayRetriever(
    index=db,
    embeddings=embeddings,
    search_field="description_embedding",
    content_field="description",
)

# find the relevant document
doc = retriever.get_relevant_documents("action movies")
print(doc)
```
#### Who can review?
@dev2049
---------
Signed-off-by: jupyterjazz <saba.sturua@jina.ai>
Fixes #
Links to prompt templates and example selectors on the [Prompts](https://python.langchain.com/docs/modules/model_io/prompts/) page are invalid.
#### Before submitting
Just a small note: I tried to run `make docs_clean` and other related commands described [here](https://github.com/hwchase17/langchain/blob/master/.github/CONTRIBUTING.md#build-documentation-locally) before writing this PR, but it gives me an error:
```bash
langchain % make docs_clean
Traceback (most recent call last):
File "/Users/masafumi/Downloads/langchain/.venv/bin/make", line 5, in <module>
from scripts.proto import main
ModuleNotFoundError: No module named 'scripts'
make: *** [docs_clean] Error 1
# Poetry (version 1.5.1)
# Python 3.9.13
```
I couldn't figure out how to fix this, so I didn't run those commands. But the links should work.
#### Who can review?
Tag maintainers/contributors who might be interested:
@hwchase17
Similar issue #6323
Co-authored-by: masafumimori <m.masafumimori@outlook.com>
# Handle Managed Motorhead Data Key
Managed Motorhead will return a payload with a `data` key. We need to handle this to properly access messages from the server.
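A sketch of the guard, with the payload shape assumed from the description above (key names illustrative):
```python
payload = response.json()
# Managed Motorhead nests the messages under a "data" key; self-hosted does not.
data = payload.get("data", payload)
messages = data.get("messages", [])
```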
Just adds some comments and docstring improvements.
There was some behaviour that was quite unclear to me at first, like:
- "when do things get updated?"
- "why are there only entity names and no summaries?"
- "why do the entity names disappear?"
Now this should be much more obvious to many.
I am lukestanley on Twitter.
1. Changed the implementation of the `add_texts` interface for the AwaDB vector store to improve performance.
2. Upgraded AwaDB from 0.3.2 to 0.3.3.
---------
Co-authored-by: vincent <awadb.vincent@gmail.com>
Fixes https://github.com/hwchase17/langchain/issues/6172
As described in https://github.com/hwchase17/langchain/issues/6172, I'd
love to help update the dev container in this project.
**Summary of changes:**
- Dev container now builds (the current container in this repo won't
build for me)
- Dockerfile updates
- Update image to our [currently-maintained Python
image](https://github.com/devcontainers/images/tree/main/src/python/.devcontainer)
(`mcr.microsoft.com/devcontainers/python`) rather than the deprecated
image from vscode-dev-containers
- Move Dockerfile to root of repo - in order for `COPY` to work
properly, it needs the files (in this case, `pyproject.toml` and
`poetry.toml`) in the same directory
- devcontainer.json updates
- Removed `customizations` and `remoteUser` since they should be covered
by the updated image in the Dockerfile
- Update comments
- Update docker-compose.yaml to properly point to updated Dockerfile
- Add a .gitattributes to avoid line ending conversions, which can
result in hundreds of pending changes
([info](https://code.visualstudio.com/docs/devcontainers/tips-and-tricks#_resolving-git-line-ending-issues-in-containers-resulting-in-many-modified-files))
- Add a README in the .devcontainer folder and info on the dev container
in the contributing.md
**Outstanding questions:**
- Is it expected for `poetry install` to take some time? It takes about
30 minutes for this dev container to finish building in a Codespace, but
a user should only have to experience this once. Through some online
investigation, this doesn't seem unusual
- Versions of poetry newer than 1.3.2 failed every time - based on some
of the guidance in contributing.md and other online resources, it seemed
changing poetry versions might be a good solution. 1.3.2 is from Jan
2023
---------
Co-authored-by: bamurtaugh <brmurtau@microsoft.com>
Co-authored-by: Samruddhi Khandale <samruddhikhandale@github.com>
This project includes a [dev container](https://containers.dev/), which lets you use a container as a full-featured dev environment.
You can use the dev container configuration in this folder to build and run the app without needing to install any of its tools locally! You can use it in [GitHub Codespaces](https://github.com/features/codespaces) or the [VS Code Dev Containers extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers).
## GitHub Codespaces
[](https://codespaces.new/hwchase17/langchain)
You may use the button above, or follow these steps to open this repo in a Codespace:
1. Click the **Code** drop-down menu at the top of https://github.com/hwchase17/langchain.
1. Click on the **Codespaces** tab.
1. Click **Create codespace on master** .
For more info, check out the [GitHub documentation](https://docs.github.com/en/free-pro-team@latest/github/developing-online-with-codespaces/creating-a-codespace#creating-a-codespace).
## VS Code Dev Containers
[](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)
Note: Clicking this link will open the main repo, not your local cloned repo. You can use the same link and replace it with your username and cloned repo name:
If you already have VS Code and Docker installed, you can use the button above to get started. This will cause VS Code to automatically install the Dev Containers extension if needed, clone the source code into a container volume, and spin up a dev container for use.
You can also follow these steps to open this repo in a container using the VS Code Dev Containers extension:
1. If this is your first time using a development container, please ensure your system meets the pre-reqs (i.e. have Docker installed) in the [getting started steps](https://aka.ms/vscode-remote/containers/getting-started).
2. Open a locally cloned copy of the code:
- Fork and Clone this repository to your local filesystem.
- Press <kbd>F1</kbd> and select the **Dev Containers: Open Folder in Container...** command.
- Select the cloned copy of this folder, wait for the container to start, and try things out!
You can learn more in the [Dev Containers documentation](https://code.visualstudio.com/docs/devcontainers/containers).
## Tips and tricks
* If you are working with the same repository folder in a container and Windows, you'll want consistent line endings (otherwise you may see hundreds of changes in the SCM view). The `.gitattributes` file in the root of this repo will disable line ending conversion and should prevent this. See [tips and tricks](https://code.visualstudio.com/docs/devcontainers/tips-and-tricks#_resolving-git-line-ending-issues-in-containers-resulting-in-many-modified-files) for more info.
* If you'd like to review the contents of the image used in this dev container, you can check it out in the [devcontainers/images](https://github.com/devcontainers/images/tree/main/src/python) repo.
Our [issues](https://github.com/hwchase17/langchain/issues) page is kept up to date
with bugs, improvements, and feature requests.
There is a taxonomy of labels to help with sorting and discovery of issues of interest. Please use these to help
organize issues.
If you are adding an issue, please try to keep it focused on a single, modular bug/improvement/feature.
If two issues are related, or blocking, please link them rather than combining them.
We will try to keep these issues as up to date as possible, though with the rapid rate of development in this field some may get out of date.
If you notice this happening, please let us know.
### 🙋Getting Help
In a similar vein, we do enforce certain linting, formatting, and documentation standards in the codebase. If you are finding these difficult (or even just annoying) to work with, feel free to contact a maintainer for help - we do not want these to get in the way of getting good code into the codebase.
## 🚀 Quick Start
> **Note:** You can run this repository locally (which is described below) or in a [development container](https://containers.dev/) (which is described in the [.devcontainer folder](https://github.com/hwchase17/langchain/tree/master/.devcontainer)).
This project uses [Poetry](https://python-poetry.org/) v1.5.1 as a dependency manager. Check out Poetry's [documentation on how to install it](https://python-poetry.org/docs/#installation) on your system before proceeding.
❗Note: If you use `Conda` or `Pyenv` as your environment / package manager, avoid dependency conflicts by doing the following first:
1. *Before installing Poetry*, create and activate a new Conda env (e.g. `conda create -n langchain python=3.9`)
2. Install Poetry v1.5.1 (see above)
3. Tell Poetry to use the virtualenv python environment (`poetry config virtualenvs.prefer-active-python true`)
4. Continue with the following steps.
There are two separate projects in this repository:
- `langchain`: core langchain code, abstractions, and use cases
- `langchain.experimental`: more experimental code
Each of these has its own development environment.
In order to run any of the commands below, please move into their respective directories.
For example, to contribute to `langchain` run `cd libs/langchain` before getting started with the below.
To install requirements:
```bash
poetry install --with test
```
This will install all requirements for running the package, examples, linting, formatting, tests, and coverage.
❗Note: If during installation you receive a `WheelFileValidationError` for `debugpy`, please make sure you are running Poetry v1.5.1. This bug was present in older versions of Poetry (e.g. 1.4.1) and has been resolved in newer releases. If you are still seeing this bug on v1.5.1, you may also try disabling "modern installation" (`poetry config installer.modern-installation false`) and re-installing requirements. See [this `debugpy` issue](https://github.com/microsoft/debugpy/issues/1246) for more details.
Now assuming `make` and `pytest` are installed, you should be able to run the common tasks in the following section. To double check, run `make test` under `libs/langchain`; all tests should pass. If they don't, you may need to pip install additional dependencies, such as `numexpr` and `openapi_schema_pydantic`.
## ✅ Common Tasks
To run formatting for this project:
```bash
make format
```
Additionally, you can run the formatter only on the files that have been modified in your current branch as compared to the master branch using the format_diff command:
```bash
make format_diff
```
This is especially useful when you have made changes to a subset of the project and want to ensure your changes are properly formatted without affecting the rest of the codebase.
### Linting
Linting for this project is done via a combination of [Black](https://black.readthedocs.io/en/stable/), [isort](https://pycqa.github.io/isort/), [flake8](https://flake8.pycqa.org/en/latest/), and [mypy](http://mypy-lang.org/).
To run linting for this project:
```bash
make lint
```
In addition, you can run the linter only on the files that have been modified in your current branch as compared to the master branch using the lint_diff command:
```bash
make lint_diff
```
This can be very helpful when you've made changes to only certain parts of the project and want to ensure your changes meet the linting standards without having to check the entire codebase.
We recognize linting can be annoying - if you do not want to do it, please contact a project maintainer, and they can help you with it. We do not want this to be a blocker for good code getting contributed.
### Spellcheck
Spellchecking for this project is done via [codespell](https://github.com/codespell-project/codespell).
Note that `codespell` finds common typos, so it can produce false positives (correctly spelled but rarely used words) and false negatives (misspelled words it does not find).
To check spelling for this project:
```bash
make spell_check
```
To fix spelling in place:
```bash
make spell_fix
```
If codespell is incorrectly flagging a word, you can skip spellcheck for that word by adding it to the codespell config in the `pyproject.toml` file.
Code coverage (i.e. the amount of code that is covered by unit tests) helps identify areas of the code that are potentially more or less brittle.
If you're adding a new dependency to Langchain, assume that it will be an optional dependency, and that most users won't have it installed.
Users that do not have the dependency installed should be able to **import** your code without any side effects (no warnings, no errors, no exceptions).
To introduce the dependency to the pyproject.toml file correctly, please do the following:
1. Add the dependency to the main group as an optional dependency
```bash
poetry add --optional [package_name]
```
If you add new logic, please add a unit test.
Integration tests cover logic that requires making calls to outside APIs (often integration with other services).
**warning** Almost no tests should be integration tests.
Tests that require making network connections make it difficult for other
developers to test the code.
When you run `poetry install`, the `langchain` package is installed as editable in the virtualenv, so your new logic can be imported into the notebook.
## Documentation
While the code is split between `langchain` and `langchain.experimental`, the documentation is one holistic thing.
This covers how to get started contributing to documentation.
### Contribute Documentation
The docs directory contains Documentation and API Reference.
Documentation is built using [Docusaurus 2](https://docusaurus.io/).
The API Reference is largely autogenerated by [sphinx](https://www.sphinx-doc.org/en/master/) from the code.
For that reason, we ask that you add good documentation to all classes and methods.
Similar to linting, we recognize documentation can be annoying. If you do not want to do it, please contact a project maintainer, and they can help you with it. We do not want this to be a blocker for good code getting contributed.
### Build Documentation Locally
In the following commands, the prefix `api_` indicates that those are operations for the API Reference.
Before building the documentation, it is always a good idea to clean the build directory:
```bash
make docs_clean
make api_docs_clean
```
Next, you can build the documentation as outlined below:
```bash
make docs_build
make api_docs_build
```
Finally, you can run the linkchecker to make sure all links are valid:
```bash
make docs_linkcheck
make api_docs_linkcheck
```
## 🏭 Release Process
Even patch releases may contain [non-backwards-compatible changes](https://semver.org/).
If your contribution has made its way into a release, we will want to give you credit on Twitter (only if you want though)!
If you have a Twitter account you would like us to mention, please let us know in the PR or in another manner.
<!-- Thank you for contributing to LangChain!

Replace this entire comment with:
  - Description: a description of the change,
  - Issue: the issue # it fixes (if applicable),
  - Dependencies: any dependencies required for this change,
  - Tag maintainer: for a quicker response, tag the relevant maintainer (see below),
  - Twitter handle: we announce bigger features on Twitter. If your PR gets announced and you'd like a mention, we'll gladly shout you out!

Please make sure your PR is passing linting and testing before submitting. Run `make format`, `make lint` and `make test` to check this locally.

See contribution guidelines for more information on how to write/run tests, lint, etc:
-->
Alternatively, if you are just interested in using the query generation part of the SQL chain, you can check out [`create_sql_query_chain`](https://github.com/langchain-ai/langchain/blob/master/docs/extras/use_cases/tabular/sql_query.ipynb)
[](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)
[](https://codespaces.new/langchain-ai/langchain)
[](https://star-history.com/#langchain-ai/langchain)
Looking for the JS/TS version? Check out [LangChain.js](https://github.com/hwchase17/langchainjs).
**Production Support:** As you move your LangChains into production, we'd love to offer more hands-on support.
Fill out [this form](https://airtable.com/appwQzlErAS2qiP0L/shrGtGaVBVAz7NcV2) to share more about what you're building, and our team will get in touch.
## 🚨Breaking Changes for select chains (SQLDatabase) on 7/28/23
In an effort to make `langchain` leaner and safer, we are moving select chains to `langchain_experimental`.
This migration has already started, but we are remaining backwards compatible until 7/28.
On that date, we will remove functionality from `langchain`.
Read more about the motivation and the progress [here](https://github.com/hwchase17/langchain/discussions/8043).
Memory refers to persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.
[BETA] Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.
For more information on these concepts, please see our [full documentation](https://python.langchain.com).
In today's fast-paced technological landscape, the use of Large Language Models (LLMs) is rapidly expanding. As a result, it's crucial for developers to understand how to effectively deploy these models in production environments. LLM interfaces typically fall into two categories:
- **Case 1: Utilizing External LLM Providers (OpenAI, Anthropic, etc.)**
In this scenario, most of the computational burden is handled by the LLM providers, while LangChain simplifies the implementation of business logic around these services. This approach includes features such as prompt templating, chat message generation, caching, vector embedding database creation, preprocessing, etc.
- **Case 2: Self-hosted Open-Source Models**
Alternatively, developers can opt to use smaller, yet comparably capable, self-hosted open-source LLM models. This approach can significantly decrease costs, latency, and privacy concerns associated with transferring data to external LLM providers.
Regardless of the framework that forms the backbone of your product, deploying LLM applications comes with its own set of challenges. It's vital to understand the trade-offs and key considerations when evaluating serving frameworks.
Outline
=======
This guide aims to provide a comprehensive overview of the requirements for deploying LLMs in a production setting, focusing on:
- `Designing a Robust LLM Application Service <#robust>`_
- `Maintaining Cost-Efficiency <#cost>`_
- `Ensuring Rapid Iteration <#iteration>`_
Understanding these components is crucial when assessing serving systems. LangChain integrates with several open-source projects designed to tackle these issues, providing a robust framework for productionizing your LLM applications. Some notable frameworks include:
These links will provide further information on each ecosystem, assisting you in finding the best fit for your LLM deployment needs.
Designing a Robust LLM Application Service
===========================================
.. _robust:
When deploying an LLM service in production, it's imperative to provide a seamless user experience free from outages. Achieving 24/7 service availability involves creating and maintaining several sub-systems surrounding your application.
Monitoring
----------
Monitoring forms an integral part of any system running in a production environment. In the context of LLMs, it is essential to monitor both performance and quality metrics.
**Performance Metrics:** These metrics provide insights into the efficiency and capacity of your model. Here are some key examples:
- Query per second (QPS): This measures the number of queries your model processes in a second, offering insights into its utilization.
- Latency: This metric quantifies the delay from when your client sends a request to when they receive a response.
- Tokens Per Second (TPS): This represents the number of tokens your model can generate in a second.
**Quality Metrics:** These metrics are typically customized according to the business use-case. For instance, how does the output of your system compare to a baseline, such as a previous version? Although these metrics can be calculated offline, you need to log the necessary data to use them later.
Fault tolerance
---------------
Your application may encounter errors such as exceptions in your model inference or business logic code, causing failures and disrupting traffic. Other potential issues could arise from the machine running your application, such as unexpected hardware breakdowns or loss of spot-instances during high-demand periods. One way to mitigate these risks is by increasing redundancy through replica scaling and implementing recovery mechanisms for failed replicas. However, model replicas aren't the only potential points of failure. It's essential to build resilience against various failures that could occur at any point in your stack.
Zero downtime upgrade
----------------------
System upgrades are often necessary but can result in service disruptions if not handled correctly. One way to prevent downtime during upgrades is by implementing a smooth transition process from the old version to the new one. Ideally, the new version of your LLM service is deployed, and traffic gradually shifts from the old to the new version, maintaining a constant QPS throughout the process.
Load balancing
--------------
Load balancing, in simple terms, is a technique to distribute work evenly across multiple computers, servers, or other resources to optimize the utilization of the system, maximize throughput, minimize response time, and avoid overload of any single resource. Think of it as a traffic officer directing cars (requests) to different roads (servers) so that no single road becomes too congested.
There are several strategies for load balancing. For example, one common method is the *Round Robin* strategy, where each request is sent to the next server in line, cycling back to the first when all servers have received a request. This works well when all servers are equally capable. However, if some servers are more powerful than others, you might use a *Weighted Round Robin* or *Least Connections* strategy, where more requests are sent to the more powerful servers, or to those currently handling the fewest active requests. Let's imagine you're running a LLM chain. If your application becomes popular, you could have hundreds or even thousands of users asking questions at the same time. If one server gets too busy (high load), the load balancer would direct new requests to another server that is less busy. This way, all your users get a timely response and the system remains stable.
Maintaining Cost-Efficiency and Scalability
============================================
.. _cost:
Deploying LLM services can be costly, especially when you're handling a large volume of user interactions. Charges by LLM providers are usually based on tokens used, making a chat system inference on these models potentially expensive. However, several strategies can help manage these costs without compromising the quality of the service.
Self-hosting models
-------------------
Several smaller and open-source LLMs are emerging to tackle the issue of reliance on LLM providers. Self-hosting allows you to maintain similar quality to LLM provider models while managing costs. The challenge lies in building a reliable, high-performing LLM serving system on your own machines.
Resource Management and Auto-Scaling
------------------------------------
Computational logic within your application requires precise resource allocation. For instance, if part of your traffic is served by an OpenAI endpoint and another part by a self-hosted model, it's crucial to allocate suitable resources for each. Auto-scaling—adjusting resource allocation based on traffic—can significantly impact the cost of running your application. This strategy requires a balance between cost and responsiveness, ensuring neither resource over-provisioning nor compromised application responsiveness.
Utilizing Spot Instances
------------------------
On platforms like AWS, spot instances offer substantial cost savings, typically priced at about a third of on-demand instances. The trade-off is a higher crash rate, necessitating a robust fault-tolerance mechanism for effective use.
Independent Scaling
-------------------
When self-hosting your models, you should consider independent scaling. For example, if you have two translation models, one fine-tuned for French and another for Spanish, incoming requests might necessitate different scaling requirements for each.
Batching requests
-----------------
In the context of Large Language Models, batching requests can enhance efficiency by better utilizing your GPU resources. GPUs are inherently parallel processors, designed to handle multiple tasks simultaneously. If you send individual requests to the model, the GPU might not be fully utilized as it's only working on a single task at a time. On the other hand, by batching requests together, you're allowing the GPU to work on multiple tasks at once, maximizing its utilization and improving inference speed. This not only leads to cost savings but can also improve the overall latency of your LLM service.
In summary, managing costs while scaling your LLM services requires a strategic approach. Utilizing self-hosting models, managing resources effectively, employing auto-scaling, using spot instances, independently scaling models, and batching requests are key strategies to consider. Open-source libraries such as Ray Serve and BentoML are designed to deal with these complexities.
Ensuring Rapid Iteration
========================
.. _iteration:
The LLM landscape is evolving at an unprecedented pace, with new libraries and model architectures being introduced constantly. Consequently, it's crucial to avoid tying yourself to a solution specific to one particular framework. This is especially relevant in serving, where changes to your infrastructure can be time-consuming, expensive, and risky. Strive for infrastructure that is not locked into any specific machine learning library or framework, but instead offers a general-purpose, scalable serving layer. Here are some aspects where flexibility plays a key role:
Model composition
-----------------
Deploying systems like LangChain demands the ability to piece together different models and connect them via logic. Take the example of building a natural language input SQL query engine. Querying an LLM and obtaining the SQL command is only part of the system. You need to extract metadata from the connected database, construct a prompt for the LLM, run the SQL query on an engine, collect and feed back the response to the LLM as the query runs, and present the results to the user. This demonstrates the need to seamlessly integrate various complex components built in Python into a dynamic chain of logical blocks that can be served together.
Cloud providers
---------------
Many hosted solutions are restricted to a single cloud provider, which can limit your options in today's multi-cloud world. Depending on where your other infrastructure components are built, you might prefer to stick with your chosen cloud provider.
Infrastructure as Code (IaC)
---------------------------
Rapid iteration also involves the ability to recreate your infrastructure quickly and reliably. This is where Infrastructure as Code (IaC) tools like Terraform, CloudFormation, or Kubernetes YAML files come into play. They allow you to define your infrastructure in code files, which can be version controlled and quickly deployed, enabling faster and more reliable iterations.
CI/CD
-----
In a fast-paced environment, implementing CI/CD pipelines can significantly speed up the iteration process. They help automate the testing and deployment of your LLM applications, reducing the risk of errors and enabling faster feedback and iteration.
When you first access the UI, you should see a page with your tracing sessions.
An initial one "default" should already be created for you.
A session is just a way to group traces together.
If you click on a session, it will take you to a page with no recorded traces that says "No Runs."
You can create a new session with the new session form.

If we click on the `default` session, we can see that to start we have no traces stored.

If we now start running chains and agents with tracing enabled, we will see data show up here.
To do so, we can run [this notebook](../tracing/agent_with_tracing.ipynb) as an example.
After running it, we will see an initial trace show up.

From here we can explore the trace at a high level by clicking on the arrow to show nested runs.
We can keep on clicking further and further down to explore deeper and deeper.

We can also click on the "Explore" button of the top level run to dive even deeper.
Here, we can see the inputs and outputs in full, as well as all the nested traces.

We can keep on exploring each of these nested traces in more detail.
For example, here is the lowest level trace with the exact inputs/outputs to the LLM.

## Changing Sessions
1. To initially record traces to a session other than `"default"`, you can set the `LANGCHAIN_SESSION` environment variable to the name of the session you want to record to:
```python
import os

os.environ["LANGCHAIN_TRACING"] = "true"
# Make sure this session actually exists. You can create a new session in the UI.
os.environ["LANGCHAIN_SESSION"] = "my_session"
```
2. To switch sessions mid-script or mid-notebook, do NOT set the `LANGCHAIN_SESSION` environment variable. Instead: `langchain.set_tracing_callback_manager(session_name="my_session")`
### Introduction to LangChain with Harrison Chase, creator of LangChain
- [Building the Future with LLMs, `LangChain`, & `Pinecone`](https://youtu.be/nMniwlGyX-c) by [Pinecone](https://www.youtube.com/@pinecone-io)
- [LangChain and Weaviate with Harrison Chase and Bob van Luijt - Weaviate Podcast #36](https://youtu.be/lhby7Ql7hbk) by [Weaviate • Vector Database](https://www.youtube.com/@Weaviate)
- [LangChain Demo + Q&A with Harrison Chase](https://youtu.be/zaYTXQFR0_s?t=788) by [Full Stack Deep Learning](https://www.youtube.com/@FullStackDeepLearning)
- [LangChain Agents: Build Personal Assistants For Your Data (Q&A with Harrison Chase and Mayo Oshin)](https://youtu.be/gVkF8cwfBLI) by [Chat with data](https://www.youtube.com/@chatwithdata)
- ⛓️ [LangChain "Agents in Production" Webinar](https://youtu.be/k8GNCCs16F4) by [LangChain](https://www.youtube.com/@LangChain)
## Videos (sorted by views)
- [Building AI LLM Apps with LangChain (and more?) - LIVE STREAM](https://www.youtube.com/live/M-2Cj_2fzWI?feature=share) by [Nicholas Renotte](https://www.youtube.com/@NicholasRenotte)
- [First look - `ChatGPT` + `WolframAlpha` (`GPT-3.5` and Wolfram|Alpha via LangChain by James Weaver)](https://youtu.be/wYGbY811oMo) by [Dr Alan D. Thompson](https://www.youtube.com/@DrAlanDThompson)
- [LangChain explained - The hottest new Python framework](https://youtu.be/RoR4XJw8wIc) by [AssemblyAI](https://www.youtube.com/@AssemblyAI)
- [Chatbot with INFINITE MEMORY using `OpenAI` & `Pinecone` - `GPT-3`, `Embeddings`, `ADA`, `Vector DB`, `Semantic`](https://youtu.be/2xNzB7xq8nk) by [David Shapiro ~ AI](https://www.youtube.com/@DavidShapiroAutomator)
- [LangChain for LLMs is... basically just an Ansible playbook](https://youtu.be/X51N9C-OhlE) by [David Shapiro ~ AI](https://www.youtube.com/@DavidShapiroAutomator)
- [Build your own LLM Apps with LangChain & `GPT-Index`](https://youtu.be/-75p09zFUJY) by [1littlecoder](https://www.youtube.com/@1littlecoder)
- [`BabyAGI` - New System of Autonomous AI Agents with LangChain](https://youtu.be/lg3kJvf1kXo) by [1littlecoder](https://www.youtube.com/@1littlecoder)
- [Run `BabyAGI` with Langchain Agents (with Python Code)](https://youtu.be/WosPGHPObx8) by [1littlecoder](https://www.youtube.com/@1littlecoder)
- [How to Use Langchain With `Zapier` | Write and Send Email with GPT-3 | OpenAI API Tutorial](https://youtu.be/p9v2-xEa9A0) by [StarMorph AI](https://www.youtube.com/@starmorph)
- [Use Your Locally Stored Files To Get Response From GPT - `OpenAI` | Langchain | Python](https://youtu.be/NC1Ni9KS-rk) by [Shweta Lodha](https://www.youtube.com/@shweta-lodha)
- [`Langchain JS` | How to Use GPT-3, GPT-4 to Reference your own Data | `OpenAI Embeddings` Intro](https://youtu.be/veV2I-NEjaM) by [StarMorph AI](https://www.youtube.com/@starmorph)
- [The easiest way to work with large language models | Learn LangChain in 10min](https://youtu.be/kmbS6FDQh7c) by [Sophia Yang](https://www.youtube.com/@SophiaYangDS)
- [4 Autonomous AI Agents: “Westworld” simulation `BabyAGI`, `AutoGPT`, `Camel`, `LangChain`](https://youtu.be/yWbnH6inT_U) by [Sophia Yang](https://www.youtube.com/@SophiaYangDS)
- [AI CAN SEARCH THE INTERNET? Langchain Agents + OpenAI ChatGPT](https://youtu.be/J-GL0htqda8) by [tylerwhatsgood](https://www.youtube.com/@tylerwhatsgood)
- [Query Your Data with GPT-4 | Embeddings, Vector Databases | Langchain JS Knowledgebase](https://youtu.be/jRnUPUTkZmU) by [StarMorph AI](https://www.youtube.com/@starmorph)
- [`Weaviate` + LangChain for LLM apps presented by Erika Cardenas](https://youtu.be/7AGj4Td5Lgw) by [`Weaviate` • Vector Database](https://www.youtube.com/@Weaviate)
- [Langchain Overview - How to Use Langchain & `ChatGPT`](https://youtu.be/oYVYIq0lOtI) by [Python In Office](https://www.youtube.com/@pythoninoffice6568)
- [Custom langchain Agent & Tools with memory. Turn any `Python function` into langchain tool with Gpt 3](https://youtu.be/NIG8lXk0ULg) by [echohive](https://www.youtube.com/@echohive)
- [LangChain: Run Language Models Locally - `Hugging Face Models`](https://youtu.be/Xxxuw4_iCzw) by [Prompt Engineering](https://www.youtube.com/@engineerprompt)
- [`ChatGPT` with any `YouTube` video using langchain and `chromadb`](https://youtu.be/TQZfB2bzVwU) by [echohive](https://www.youtube.com/@echohive)
- [How to Talk to a `PDF` using LangChain and `ChatGPT`](https://youtu.be/v2i1YDtrIwk) by [Automata Learning Lab](https://www.youtube.com/@automatalearninglab)
- [Langchain Document Loaders Part 1: Unstructured Files](https://youtu.be/O5C0wfsen98) by [Merk](https://www.youtube.com/@merksworld)
- [LangChain - Prompt Templates (what all the best prompt engineers use)](https://youtu.be/1aRu8b0XNOQ) by [Nick Daigler](https://www.youtube.com/@nick_daigs)
- [LangChain. Crear aplicaciones Python impulsadas por GPT](https://youtu.be/DkW_rDndts8) by [Jesús Conde](https://www.youtube.com/@0utKast)
- [Easiest Way to Use GPT In Your Products | LangChain Basics Tutorial](https://youtu.be/fLy0VenZyGc) by [Rachel Woods](https://www.youtube.com/@therachelwoods)
- [`BabyAGI` + `GPT-4` Langchain Agent with Internet Access](https://youtu.be/wx1z_hs5P6E) by [tylerwhatsgood](https://www.youtube.com/@tylerwhatsgood)
- [Learning LLM Agents. How does it actually work? LangChain, AutoGPT & OpenAI](https://youtu.be/mb_YAABSplk) by [Arnoldas Kemeklis](https://www.youtube.com/@processusAI)
- [Get Started with LangChain in `Node.js`](https://youtu.be/Wxx1KUWJFv4) by [Developers Digest](https://www.youtube.com/@DevelopersDigest)
- [LangChain + `OpenAI` tutorial: Building a Q&A system w/ own text data](https://youtu.be/DYOU_Z0hAwo) by [Samuel Chan](https://www.youtube.com/@SamuelChan)
- [Langchain + `Zapier` Agent](https://youtu.be/yribLAb-pxA) by [Merk](https://www.youtube.com/@merksworld)
- [Connecting the Internet with `ChatGPT` (LLMs) using Langchain And Answers Your Questions](https://youtu.be/9Y0TBC63yZg) by [Kamalraj M M](https://www.youtube.com/@insightbuilder)
- [Build More Powerful LLM Applications for Business’s with LangChain (Beginners Guide)](https://youtu.be/sp3-WLKEcBg) by [No Code Blackbox](https://www.youtube.com/@nocodeblackbox)
- ⛓️ [LangFlow LLM Agent Demo for 🦜🔗LangChain](https://youtu.be/zJxDHaWt-6o) by [Cobus Greyling](https://www.youtube.com/@CobusGreylingZA)
- ⛓️ [Chatbot Factory: Streamline Python Chatbot Creation with LLMs and Langchain](https://youtu.be/eYer3uzrcuM) by [Finxter](https://www.youtube.com/@finxter)
- ⛓️ [LangChain Tutorial - ChatGPT mit eigenen Daten](https://youtu.be/0XDLyY90E2c) by [Coding Crashkurse](https://www.youtube.com/@codingcrashkurse6429)
- ⛓️ [Chat with a `CSV` | LangChain Agents Tutorial (Beginners)](https://youtu.be/tjeti5vXWOU) by [GoDataProf](https://www.youtube.com/@godataprof)
- ⛓️ [Introdução ao Langchain - #Cortes - Live DataHackers](https://youtu.be/fw8y5VRei5Y) by [Prof. João Gabriel Lima](https://www.youtube.com/@profjoaogabriellima)
- ⛓️ [LangChain: Level up `ChatGPT` !? | LangChain Tutorial Part 1](https://youtu.be/vxUGx8aZpDE) by [Code Affinity](https://www.youtube.com/@codeaffinitydev)
- ⛓️ [Chat with Audio: Langchain, `Chroma DB`, OpenAI, and `Assembly AI`](https://youtu.be/Kjy7cx1r75g) by [AI Anytime](https://www.youtube.com/@AIAnytime)
- ⛓️ [QA over documents with Auto vector index selection with Langchain router chains](https://youtu.be/9G05qybShv8) by [echohive](https://www.youtube.com/@echohive)
- ⛓️ [Build your own custom LLM application with `Bubble.io` & Langchain (No Code & Beginner friendly)](https://youtu.be/O7NhQGu1m6c) by [No Code Blackbox](https://www.youtube.com/@nocodeblackbox)
- ⛓️ [Simple App to Question Your Docs: Leveraging `Streamlit`, `Hugging Face Spaces`, LangChain, and `Claude`!](https://youtu.be/X4YbNECRr7o) by [Chris Alexiuk](https://www.youtube.com/@chrisalexiuk)
- ⛓️ [LANGCHAIN AI- `ConstitutionalChainAI` + Databutton AI ASSISTANT Web App](https://youtu.be/5zIU6_rdJCU) by [Avra](https://www.youtube.com/@Avra_b)
- ⛓️ [LANGCHAIN AI AUTONOMOUS AGENT WEB APP - 👶 `BABY AGI` 🤖 with EMAIL AUTOMATION using `DATABUTTON`](https://youtu.be/cvAwOGfeHgw) by [Avra](https://www.youtube.com/@Avra_b)
- ⛓️ [The Future of Data Analysis: Using A.I. Models in Data Analysis (LangChain)](https://youtu.be/v_LIcVyg5dk) by [Absent Data](https://www.youtube.com/@absentdata)
- ⛓️ [Memory in LangChain | Deep dive (python)](https://youtu.be/70lqvTFh_Yg) by [Eden Marco](https://www.youtube.com/@EdenMarco)
- ⛓️ [Use Large Language Models in Jupyter Notebook | LangChain | Agents & Indexes](https://youtu.be/JSe11L1a_QQ) by [Abhinaw Tiwari](https://www.youtube.com/@AbhinawTiwariAT)
- ⛓️ [How to Talk to Your Langchain Agent | `11 Labs` + `Whisper`](https://youtu.be/N4k459Zw2PU) by [VRSEN](https://www.youtube.com/@vrsen)
- ⛓️ [LangChain Deep Dive: 5 FUN AI App Ideas To Build Quickly and Easily](https://youtu.be/mPYEPzLkeks) by [James NoCode](https://www.youtube.com/@jamesnocode)
- ⛓️ [BEST OPEN Alternative to OPENAI's EMBEDDINGs for Retrieval QA: LangChain](https://youtu.be/ogEalPMUCSY) by [Prompt Engineering](https://www.youtube.com/@engineerprompt)
- ⛓️ [LangChain 101: Models](https://youtu.be/T6c_XsyaNSQ) by [Mckay Wrigley](https://www.youtube.com/@realmckaywrigley)
- ⛓️ [LangChain with JavaScript Tutorial #1 | Setup & Using LLMs](https://youtu.be/W3AoeMrg27o) by [Leon van Zyl](https://www.youtube.com/@leonvanzyl)
- ⛓️ [LangChain Overview & Tutorial for Beginners: Build Powerful AI Apps Quickly & Easily (ZERO CODE)](https://youtu.be/iI84yym473Q) by [James NoCode](https://www.youtube.com/@jamesnocode)
- ⛓️ [LangChain In Action: Real-World Use Case With Step-by-Step Tutorial](https://youtu.be/UO699Szp82M) by [Rabbitmetrics](https://www.youtube.com/@rabbitmetrics)
- ⛓️ [Summarizing and Querying Multiple Papers with LangChain](https://youtu.be/p_MQRWH5Y6k) by [Automata Learning Lab](https://www.youtube.com/@automatalearninglab)
- ⛓️ [Using Langchain (and `Replit`) through `Tana`, ask `Google`/`Wikipedia`/`Wolfram Alpha` to fill out a table](https://youtu.be/Webau9lEzoI) by [Stian Håklev](https://www.youtube.com/@StianHaklev)
- ⛓️ [Langchain PDF App (GUI) | Create a ChatGPT For Your `PDF` in Python](https://youtu.be/wUAUdEw5oxM) by [Alejandro AO - Software & Ai](https://www.youtube.com/@alejandro_ao)
- ⛓️ [Auto-GPT with LangChain 🔥 | Create Your Own Personal AI Assistant](https://youtu.be/imDfPmMKEjM) by [Data Science Basics](https://www.youtube.com/@datasciencebasics)
- ⛓️ [Create Your OWN Slack AI Assistant with Python & LangChain](https://youtu.be/3jFXRNn2Bu8) by [Dave Ebbelaar](https://www.youtube.com/@daveebbelaar)
- ⛓️ [How to Create LOCAL Chatbots with GPT4All and LangChain [Full Guide]](https://youtu.be/4p1Fojur8Zw) by [Liam Ottley](https://www.youtube.com/@LiamOttley)
- ⛓️ [Build a `Multilingual PDF` Search App with LangChain, `Cohere` and `Bubble`](https://youtu.be/hOrtuumOrv8) by [Menlo Park Lab](https://www.youtube.com/@menloparklab)
- ⛓️ [Building a LangChain Agent (code-free!) Using `Bubble` and `Flowise`](https://youtu.be/jDJIIVWTZDE) by [Menlo Park Lab](https://www.youtube.com/@menloparklab)
- ⛓️ [Build a LangChain-based Semantic PDF Search App with No-Code Tools Bubble and Flowise](https://youtu.be/s33v5cIeqA4) by [Menlo Park Lab](https://www.youtube.com/@menloparklab)
- ⛓️ [LangChain Memory Tutorial | Building a ChatGPT Clone in Python](https://youtu.be/Cwq91cj2Pnc) by [Alejandro AO - Software & Ai](https://www.youtube.com/@alejandro_ao)
- ⛓️ [ChatGPT For Your DATA | Chat with Multiple Documents Using LangChain](https://youtu.be/TeDgIDqQmzs) by [Data Science Basics](https://www.youtube.com/@datasciencebasics)
- ⛓️ [`Llama Index`: Chat with Documentation using URL Loader](https://youtu.be/XJRoDEctAwA) by [Merk](https://www.youtube.com/@merksworld)
- ⛓️ [Using OpenAI, LangChain, and `Gradio` to Build Custom GenAI Applications](https://youtu.be/1MsmqMg3yUc) by [David Hundley](https://www.youtube.com/@dkhundley)
This website is built using [Docusaurus 2](https://docusaurus.io/), a modern static website generator.
### Installation
```
$ yarn
```
### Local Development
```
$ yarn start
```
This command starts a local development server and opens up a browser window. Most changes are reflected live without having to restart the server.
### Build
```
$ yarn build
```
This command generates static content into the `build` directory, which can be served by any static content hosting service.
### Deployment
Using SSH:
```
$ USE_SSH=true yarn deploy
```
Not using SSH:
```
$ GIT_USER=<Your GitHub username> yarn deploy
```
If you are using GitHub pages for hosting, this command is a convenient way to build the website and push to the `gh-pages` branch.
### Continuous Integration
Some common defaults for linting/formatting have been set for you. If you integrate your project with an open source Continuous Integration system (e.g. Travis CI, CircleCI), you may check for issues using the following command.
Hi! Thanks for being here. We’re lucky to have a community of so many passionate developers building with LangChain–we have so much to teach and learn from each other. Community members contribute code, host meetups, write blog posts, amplify each other’s work, become each other's customers and collaborators, and so much more.
Whether you’re new to LangChain, looking to go deeper, or just want to get more exposure to the world of building with LLMs, this page can point you in the right direction.
- **🦜 Contribute to LangChain**
- **🌍 Meetups, Events, and Hackathons**
- **📣 Help Us Amplify Your Work**
- **☀️ Stay in the loop**
# 🦜 Contribute to LangChain
LangChain is the product of over 5,000 contributions by 1,500+ contributors, and there is **still** so much to do together. Here are some ways to get involved:
- **[Open a pull request](https://github.com/langchain-ai/langchain/issues):** we’d appreciate all forms of contributions–new features, infrastructure improvements, better documentation, bug fixes, etc. If you have an improvement or an idea, we’d love to work on it with you.
- **[Read our contributor guidelines:](https://github.com/langchain-ai/langchain/blob/bbd22b9b761389a5e40fc45b0570e1830aabb707/.github/CONTRIBUTING.md)** We ask contributors to follow a ["fork and pull request"](https://docs.github.com/en/get-started/quickstart/contributing-to-projects) workflow, run a few local checks for formatting, linting, and testing before submitting, and follow certain documentation and testing conventions.
- **First time contributor?** [Try one of these PRs with the “good first issue” tag](https://github.com/langchain-ai/langchain/contribute).
- **Become an expert:** our experts help the community by answering product questions in Discord. If that’s a role you’d like to play, we’d be so grateful! (And we have some special experts-only goodies/perks we can tell you more about). Send us an email to introduce yourself at hello@langchain.dev and we’ll take it from there!
- **Integrate with LangChain:** if your product integrates with LangChain–or aspires to–we want to help make sure the experience is as smooth as possible for you and end users. Send us an email at hello@langchain.dev and tell us what you’re working on.
- **Become an Integration Maintainer:** Partner with our team to ensure your integration stays up-to-date and talk directly with users (and answer their inquiries) in our Discord. Introduce yourself at hello@langchain.dev if you’d like to explore this role.
# 🌍 Meetups, Events, and Hackathons
One of our favorite things about working in AI is how much enthusiasm there is for building together. We want to help make that as easy and impactful for you as possible!
- **Find a meetup, hackathon, or webinar:** you can find the one for you on our [global events calendar](https://mirror-feeling-d80.notion.site/0bc81da76a184297b86ca8fc782ee9a3?v=0d80342540df465396546976a50cfb3f).
- **Submit an event to our calendar:** email us at events@langchain.dev with a link to your event page! We can also help you spread the word with our local communities.
- **Host a meetup:** If you want to bring a group of builders together, we want to help! We can publicize your event on our event calendar/Twitter, share with our local communities in Discord, send swag, or potentially hook you up with a sponsor. Email us at events@langchain.dev to tell us about your event!
- **Become a meetup sponsor:** we often hear from groups of builders that want to get together, but are blocked or limited on some dimension (space to host, budget for snacks, prizes to distribute, etc.). If you’d like to help, send us an email at events@langchain.dev and we can share more about how it works!
- **Speak at an event:** meetup hosts are always looking for great speakers, presenters, and panelists. If you’d like to do that at an event, send us an email to hello@langchain.dev with more information about yourself, what you want to talk about, and what city you’re based in and we’ll try to match you with an upcoming event!
- **Tell us about your LLM community:** If you host or participate in a community that would welcome support from LangChain and/or our team, send us an email at hello@langchain.dev and let us know how we can help.
# 📣 Help Us Amplify Your Work
If you’re working on something you’re proud of, and think the LangChain community would benefit from knowing about it, we want to help you show it off.
- **Post about your work and mention us:** we love hanging out on Twitter to see what people in the space are talking about and working on. If you tag [@langchainai](https://twitter.com/LangChainAI), we’ll almost certainly see it and can show you some love.
- **Publish something on our blog:** if you’re writing about your experience building with LangChain, we’d love to post (or crosspost) it on our blog! E-mail hello@langchain.dev with a draft of your post, or even just an idea for something you want to write about.
- **Get your product onto our [integrations hub](https://integrations.langchain.com/):** Many developers take advantage of our seamless integrations with other products, and come to our integrations hub to find out who those are. If you want to get your product up there, tell us about it (and how it works with LangChain) at hello@langchain.dev.
# ☀️ Stay in the loop
Here’s where our team hangs out, talks shop, spotlights cool work, and shares what we’re up to. We’d love to see you there too.
- **[Twitter](https://twitter.com/LangChainAI):** we post about what we’re working on and what cool things we’re seeing in the space. If you tag @langchainai in your post, we’ll almost certainly see it, and can show you some love!
- **[Discord](https://discord.gg/6adMQxSpJS):** connect with >30k developers who are building with LangChain
- **[GitHub](https://github.com/langchain-ai/langchain):** open pull requests, contribute to discussions, and contribute code
- **[Subscribe to our bi-weekly Release Notes](https://6w1pwbss0py.typeform.com/to/KjZB1auB):** a twice-monthly email roundup of the coolest things going on in our orbit
- **Slack:** if you’re building an application in production at your company, we’d love to get into a Slack channel together. Fill out [this form](https://airtable.com/appwQzlErAS2qiP0L/shrGtGaVBVAz7NcV2) and we’ll get in touch about setting one up.
**LangChain** is a framework for developing applications powered by language models. It enables applications that are:
- **Data-aware**: connect a language model to other sources of data
- **Agentic**: allow a language model to interact with its environment
The main value props of LangChain are:
1. **Components**: abstractions for working with language models, along with a collection of implementations for each abstraction. Components are modular and easy to use, whether you are using the rest of the LangChain framework or not
2. **Off-the-shelf chains**: a structured assembly of components for accomplishing specific higher-level tasks
Off-the-shelf chains make it easy to get started. For more complex applications and nuanced use-cases, components make it easy to customize existing chains or build new ones.
## Get started
[Here’s](/docs/get_started/installation.html) how to install LangChain, set up your environment, and start building.
We recommend following our [Quickstart](/docs/get_started/quickstart.html) guide to familiarize yourself with the framework by building your first LangChain application.
_**Note**: These docs are for the LangChain [Python package](https://github.com/hwchase17/langchain). For documentation on [LangChain.js](https://github.com/hwchase17/langchainjs), the JS/TS version, [head here](https://js.langchain.com/docs)._
## Modules
LangChain provides standard, extendable interfaces and external integrations for the following modules, listed from least to most complex:
#### [Model I/O](/docs/modules/model_io/)
Interface with language models
#### [Retrieval](/docs/modules/data_connection/)
Interface with application-specific data
#### [Chains](/docs/modules/chains/)
Construct sequences of calls
#### [Agents](/docs/modules/agents/)
Let chains choose which tools to use given high-level directives
#### [Memory](/docs/modules/memory/)
Persist application state between runs of a chain
#### [Callbacks](/docs/modules/callbacks/)
Log and stream intermediate steps of any chain
## Examples, ecosystem, and resources
### [Use cases](/docs/use_cases/)
Walkthroughs and best practices for common end-to-end use cases, like:
- [Chatbots](/docs/use_cases/chatbots)
- [Answering questions using sources](/docs/use_cases/question_answering/)
Learn best practices for developing with LangChain.
### [Ecosystem](/docs/ecosystem/)
LangChain is part of a rich ecosystem of tools that integrate with our framework and build on top of it. Check out our growing list of [integrations](/docs/integrations/) and [dependent repos](/docs/additional_resources/dependents).
Our community is full of prolific developers, creative builders, and fantastic teachers. Check out [YouTube tutorials](/docs/additional_resources/youtube.html) for great tutorials from folks in the community, and [Gallery](https://github.com/kyrolabs/awesome-langchain) for a list of awesome LangChain projects, compiled by the folks at [KyroLabs](https://kyrolabs.com).
### [Community](/docs/community)
Head to the [Community navigator](/docs/community) to find places to ask questions, share feedback, meet other developers, and dream about the future of LLMs.
## API reference
Head to the [reference](https://api.python.langchain.com) section for full documentation of all classes and methods in the LangChain Python package.
import Install from "@snippets/get_started/quickstart/installation.mdx"
<Install/>
For more details, see our [Installation guide](/docs/get_started/installation.html).
## Environment setup
Using LangChain will usually require integrations with one or more model providers, data stores, APIs, etc. For this example, we'll use OpenAI's model APIs.
import OpenAISetup from "@snippets/get_started/quickstart/openai_setup.mdx"
<OpenAISetup/>
## Building an application
Now we can start building our language model application. LangChain provides many modules that can be used to build language model applications.
Modules can be used on their own in simple applications, and they can be combined for more complex use cases.
The core building block of LangChain applications is the LLMChain.
This combines three things:
- LLM: The language model is the core reasoning engine here. In order to work with LangChain, you need to understand the different types of language models and how to work with them.
- Prompt Templates: This provides instructions to the language model. This controls what the language model outputs, so understanding how to construct prompts and different prompting strategies is crucial.
- Output Parsers: These translate the raw response from the LLM to a more workable format, making it easy to use the output downstream.
In this getting started guide we will cover those three components by themselves, and then cover the LLMChain which combines all of them.
Understanding these concepts will set you up well for being able to use and customize LangChain applications.
Most LangChain applications allow you to configure the LLM and/or the prompt used, so knowing how to take advantage of this will be a big enabler.
## LLMs
There are two types of language models, which in LangChain are called:
- LLMs: this is a language model which takes a string as input and returns a string
- ChatModels: this is a language model which takes a list of messages as input and returns a message
The input/output for LLMs is simple and easy to understand - a string.
But what about ChatModels? The input there is a list of `ChatMessage`s, and the output is a single `ChatMessage`.
A `ChatMessage` has two required components:
- `content`: This is the content of the message.
- `role`: This is the role of the entity the `ChatMessage` is coming from.
LangChain provides several objects to easily distinguish between different roles:
- `HumanMessage`: A `ChatMessage` coming from a human/user.
- `AIMessage`: A `ChatMessage` coming from an AI/assistant.
- `SystemMessage`: A `ChatMessage` coming from the system.
- `FunctionMessage`: A `ChatMessage` coming from a function call.
If none of those roles sound right, there is also a `ChatMessage` class where you can specify the role manually.
For more information on how to use these different messages most effectively, see our prompting guide.
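For instance, a list of messages might be constructed like this (a minimal sketch; the `jester` role is just a hypothetical example of a manually specified role):

```python
from langchain.schema import AIMessage, ChatMessage, HumanMessage, SystemMessage

messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="What is the capital of France?"),
    AIMessage(content="The capital of France is Paris."),
    # ChatMessage lets you specify an arbitrary role manually
    ChatMessage(role="jester", content="Now answer again, but in rhyme."),
]
```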
LangChain provides a standard interface for both, but it's useful to understand this difference in order to construct prompts for a given language model.
The standard interface that LangChain provides has two methods:
- `predict`: Takes in a string, returns a string
- `predict_messages`: Takes in a list of messages, returns a message.
Let's see how to work with these different types of models and these different types of inputs.
First, let's import an LLM and a ChatModel.
import ImportLLMs from "@snippets/get_started/quickstart/import_llms.mdx"
<ImportLLMs/>
The `OpenAI` and `ChatOpenAI` objects are basically just configuration objects.
You can initialize them with parameters like `temperature` and others, and pass them around.
Next, let's use the `predict` method to run over a string input.
import InputString from "@snippets/get_started/quickstart/input_string.mdx"
<InputString/>
Finally, let's use the `predict_messages` method to run over a list of messages.
import InputMessages from "@snippets/get_started/quickstart/input_messages.mdx"
<InputMessages/>
For both of these methods, you can also pass in parameters as keyword arguments.
For example, you could pass in `temperature=0` to adjust the temperature that is used from what the object was configured with.
Whatever values are passed in during run time will always override what the object was configured with.
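For example (a minimal sketch, assuming an OpenAI API key is set in the environment):

```python
from langchain.llms import OpenAI

llm = OpenAI(temperature=0.9)

# The keyword argument overrides the temperature the object was
# configured with, for this call only.
llm.predict("Suggest a company name for a maker of colorful socks", temperature=0)
```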
## Prompt templates
Most LLM applications do not pass user input directly into an LLM. Usually they will add the user input to a larger piece of text, called a prompt template, that provides additional context on the specific task at hand.
In the previous example, the text we passed to the model contained instructions to generate a company name. For our application, it'd be great if the user only had to provide the description of a company/product, without having to worry about giving the model instructions.
PromptTemplates help with exactly this!
They bundle up all the logic for going from user input into a fully formatted prompt.
This can start off very simple - for example, a prompt to produce the above string would just be:
import PromptTemplateLLM from "@snippets/get_started/quickstart/prompt_templates_llms.mdx"
import PromptTemplateChatModel from "@snippets/get_started/quickstart/prompt_templates_chat_models.mdx"
<PromptTemplateLLM/>
However, there are several advantages to using these over raw string formatting.
You can "partial" out variables - e.g. you can format only some of the variables at a time.
You can compose them together, easily combining different templates into a single prompt.
For more detail on these functionalities, see the [section on prompts](/docs/modules/model_io/prompts).
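For example, partialing a template might look like this sketch (the template text is illustrative):

```python
from langchain.prompts import PromptTemplate

prompt = PromptTemplate.from_template("Tell me a {adjective} joke about {content}.")

# Fill in one variable now, leave the other for later
partial_prompt = prompt.partial(adjective="funny")
partial_prompt.format(content="chickens")
# -> 'Tell me a funny joke about chickens.'
```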
PromptTemplates can also be used to produce a list of messages.
In this case, the prompt not only contains information about the content, but also about each message (its role, its position in the list, etc.).
Most often, a ChatPromptTemplate is a list of ChatMessageTemplates.
Each ChatMessageTemplate contains instructions for how to format that ChatMessage - its role, and then also its content.
Let's take a look at this below:
<PromptTemplateChatModel/>
ChatPromptTemplates can also include other things besides ChatMessageTemplates - see the [section on prompts](/docs/modules/model_io/prompts) for more detail.
## Output parsers
OutputParsers convert the raw output of an LLM into a format that can be used downstream.
There are a few main types of OutputParsers, including:
- Convert text from LLM -> structured information (e.g. JSON)
- Convert a ChatMessage into just a string
- Convert the extra information returned from a call besides the message (like OpenAI function invocation) into a string.
For full information on this, see the [section on output parsers](/docs/modules/model_io/output_parsers)
In this getting started guide, we will write our own output parser - one that converts a comma-separated string into a list.
import OutputParser from "@snippets/get_started/quickstart/output_parser.mdx"
<OutputParser/>
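A minimal version of such a parser might look like the following sketch:

```python
from langchain.schema import BaseOutputParser


class CommaSeparatedListOutputParser(BaseOutputParser):
    """Parse the output of an LLM call into a comma-separated list."""

    def parse(self, text: str) -> list:
        """Split a comma-separated string into a list of trimmed items."""
        return [item.strip() for item in text.strip().split(",")]


CommaSeparatedListOutputParser().parse("hi, bye")
# -> ['hi', 'bye']
```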
## LLMChain
We can now combine all these into one chain.
This chain will take input variables, pass those to a prompt template to create a prompt, pass the prompt to an LLM, and then pass the output through an (optional) output parser.
This is a convenient way to bundle up a modular piece of logic.
Let's see it in action!
import LLMChain from "@snippets/get_started/quickstart/llm_chain.mdx"
<LLMChain/>
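Putting the pieces together might look like the following sketch (assuming an OpenAI API key and a version that supports the `(role, template)` tuple shorthand; it reuses the comma-separated-list parser sketched above):

```python
from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import ChatPromptTemplate

chat_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant who generates comma separated lists."),
    ("human", "{text}"),
])

chain = LLMChain(
    llm=ChatOpenAI(temperature=0),
    prompt=chat_prompt,
    output_parser=CommaSeparatedListOutputParser(),  # parser sketched earlier
)
chain.run("colors")
# -> e.g. ['red', 'blue', 'green', 'yellow'] (output will vary)
```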
## Next steps
This is it!
We've now gone over how to create the core building block of LangChain applications - the LLMChain.
There is a lot more nuance in all these components (LLMs, prompts, output parsers), and many more components to learn about as well.
To continue on your journey:
- [Dive deeper](/docs/modules/model_io) into LLMs, prompts, and output parsers
- Learn the other [key components](/docs/modules)
- Check out our [helpful guides](/docs/guides) for detailed walkthroughs on particular topics
Comparison evaluators in LangChain help compare the outputs of two different chains or LLMs. These evaluators are helpful for comparative analyses, such as A/B testing between two language models, or comparing different versions of the same model. They can also be useful for things like generating preference scores for AI-assisted reinforcement learning.
These evaluators inherit from the `PairwiseStringEvaluator` class, providing a comparison interface for two strings - typically, the outputs from two different prompts or models, or two versions of the same model. In essence, a comparison evaluator performs an evaluation on a pair of strings and returns a dictionary containing the evaluation score and other relevant details.
To create a custom comparison evaluator, inherit from the `PairwiseStringEvaluator` class and override the `_evaluate_string_pairs` method. If you require asynchronous evaluation, also override the `_aevaluate_string_pairs` method.
Here's a summary of the key methods and properties of a comparison evaluator:
- `evaluate_string_pairs`: Evaluate the output string pairs. This function should be overridden when creating custom evaluators.
- `aevaluate_string_pairs`: Asynchronously evaluate the output string pairs. This function should be overridden for asynchronous evaluation.
- `requires_input`: This property indicates whether this evaluator requires an input string.
- `requires_reference`: This property specifies whether this evaluator requires a reference label.
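As an illustration, a toy custom comparison evaluator might look like this sketch (the length-based score is purely for demonstration):

```python
from typing import Any, Optional

from langchain.evaluation import PairwiseStringEvaluator


class LengthDifferenceEvaluator(PairwiseStringEvaluator):
    """Toy evaluator: scores the pair by how similar the outputs are in length."""

    def _evaluate_string_pairs(
        self,
        *,
        prediction: str,
        prediction_b: str,
        reference: Optional[str] = None,
        input: Optional[str] = None,
        **kwargs: Any,
    ) -> dict:
        diff = abs(len(prediction) - len(prediction_b))
        longest = max(len(prediction), len(prediction_b), 1)
        return {"score": 1.0 - diff / longest}


evaluator = LengthDifferenceEvaluator()
evaluator.evaluate_string_pairs(
    prediction="short answer", prediction_b="a much longer answer"
)
# -> {'score': 0.6}
```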
Detailed information about creating custom evaluators and the available built-in comparison evaluators is provided in the following sections.
Building applications with language models involves many moving parts. One of the most critical components is ensuring that the outcomes produced by your models are reliable and useful across a broad array of inputs, and that they work well with your application's other software components. Ensuring reliability usually boils down to some combination of application design, testing & evaluation, and runtime checks.
The guides in this section review the APIs and functionality LangChain provides to help you better evaluate your applications. Evaluation and testing are both critical when thinking about deploying LLM applications, since production environments require repeatable and useful outcomes.
LangChain offers various types of evaluators to help you measure performance and integrity on diverse data, and we hope to encourage the community to create and share other useful evaluators so everyone can improve. These docs will introduce the evaluator types, how to use them, and provide some examples of their use in real-world scenarios.
Each evaluator type in LangChain comes with ready-to-use implementations and an extensible API that allows for customization according to your unique requirements. Here are some of the types of evaluators we offer:
- [String Evaluators](/docs/guides/evaluation/string/): These evaluators assess the predicted string for a given input, usually comparing it against a reference string.
- [Trajectory Evaluators](/docs/guides/evaluation/trajectory/): These are used to evaluate the entire trajectory of agent actions.
- [Comparison Evaluators](/docs/guides/evaluation/comparison/): These evaluators are designed to compare predictions from two runs on a common input.
These evaluators can be used across various scenarios and can be applied to different chain and LLM implementations in the LangChain library.
We also are working to share guides and cookbooks that demonstrate how to use these evaluators in real-world scenarios, such as:
- [Chain Comparisons](/docs/guides/evaluation/examples/comparisons): This example uses a comparison evaluator to predict the preferred output. It reviews ways to measure confidence intervals to select statistically significant differences in aggregate preference scores across different models or prompts.
## Reference Docs
For detailed information on the available evaluators, including how to instantiate, configure, and customize them, check out the [reference documentation](https://api.python.langchain.com/en/latest/api_reference.html#module-langchain.evaluation) directly.
A string evaluator is a component within LangChain designed to assess the performance of a language model by comparing its generated outputs (predictions) to a reference string or an input. This comparison is a crucial step in the evaluation of language models, providing a measure of the accuracy or quality of the generated text.
In practice, string evaluators are typically used to evaluate a predicted string against a given input, such as a question or a prompt. Often, a reference label or context string is provided to define what a correct or ideal response would look like. These evaluators can be customized to tailor the evaluation process to fit your application's specific requirements.
To create a custom string evaluator, inherit from the `StringEvaluator` class and implement the `_evaluate_strings` method. If you require asynchronous support, also implement the `_aevaluate_strings` method.
Here's a summary of the key attributes and methods associated with a string evaluator:
- `evaluation_name`: Specifies the name of the evaluation.
- `requires_input`: Boolean attribute that indicates whether the evaluator requires an input string. If True, the evaluator will raise an error when the input isn't provided. If False, a warning will be logged if an input _is_ provided, indicating that it will not be considered in the evaluation.
- `requires_reference`: Boolean attribute specifying whether the evaluator requires a reference label. If True, the evaluator will raise an error when the reference isn't provided. If False, a warning will be logged if a reference _is_ provided, indicating that it will not be considered in the evaluation.
String evaluators also implement the following methods:
- `aevaluate_strings`: Asynchronously evaluates the output of the Chain or Language Model, with support for optional input and label.
- `evaluate_strings`: Synchronously evaluates the output of the Chain or Language Model, with support for optional input and label.
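As an illustration, a toy custom string evaluator might look like this sketch (exact-match scoring is purely for demonstration):

```python
from typing import Any, Optional

from langchain.evaluation import StringEvaluator


class ExactMatchEvaluator(StringEvaluator):
    """Toy evaluator: scores 1 if the prediction matches the reference exactly."""

    @property
    def requires_reference(self) -> bool:
        return True

    def _evaluate_strings(
        self,
        *,
        prediction: str,
        reference: Optional[str] = None,
        input: Optional[str] = None,
        **kwargs: Any,
    ) -> dict:
        return {"score": int(prediction.strip() == (reference or "").strip())}


ExactMatchEvaluator().evaluate_strings(prediction="Paris", reference="Paris")
# -> {'score': 1}
```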
The following sections provide detailed information on available string evaluator implementations as well as how to create a custom string evaluator.
Trajectory Evaluators in LangChain provide a more holistic approach to evaluating an agent. These evaluators assess the full sequence of actions taken by an agent and their corresponding responses, which we refer to as the "trajectory". This allows you to better measure an agent's effectiveness and capabilities.
A Trajectory Evaluator implements the `AgentTrajectoryEvaluator` interface, which requires two main methods:
- `evaluate_agent_trajectory`: This method synchronously evaluates an agent's trajectory.
- `aevaluate_agent_trajectory`: This asynchronous counterpart allows evaluations to be run in parallel for efficiency.
Both methods accept three main parameters:
- `input`: The initial input given to the agent.
- `prediction`: The final predicted response from the agent.
- `agent_trajectory`: The intermediate steps taken by the agent, given as a list of tuples.
These methods return a dictionary. It is recommended that custom implementations return a `score` (a float indicating the effectiveness of the agent) and `reasoning` (a string explaining the reasoning behind the score).
You can capture an agent's trajectory by initializing the agent with the `return_intermediate_steps=True` parameter. This lets you collect all intermediate steps without relying on special callbacks.
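For example, capturing and evaluating a trajectory might look like this sketch (assuming an OpenAI API key; `load_evaluator("trajectory")` uses a default evaluation LLM under the hood):

```python
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.chat_models import ChatOpenAI
from langchain.evaluation import load_evaluator

llm = ChatOpenAI(temperature=0)
tools = load_tools(["llm-math"], llm=llm)

# return_intermediate_steps=True makes the agent return its trajectory
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    return_intermediate_steps=True,
)
result = agent("What is 3 to the power of 4?")

evaluator = load_evaluator("trajectory")
evaluation = evaluator.evaluate_agent_trajectory(
    input=result["input"],
    prediction=result["output"],
    agent_trajectory=result["intermediate_steps"],
)
evaluation["score"], evaluation["reasoning"]
```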
For a deeper dive into the implementation and use of Trajectory Evaluators, refer to the sections below.
[LangSmith](https://smith.langchain.com) helps you trace and evaluate your language model applications and intelligent agents to help you
move from prototype to production.
Check out the [interactive walkthrough](/docs/guides/langsmith/walkthrough) below to get started.
For more information, please refer to the [LangSmith documentation](https://docs.smith.langchain.com/).
For tutorials and other end-to-end examples demonstrating ways to integrate LangSmith in your workflow,
check out the [LangSmith Cookbook](https://github.com/langchain-ai/langsmith-cookbook). Some of the guides therein include:
- Leveraging user feedback in your JS application ([link](https://github.com/langchain-ai/langsmith-cookbook/blob/main/feedback-examples/nextjs/README.md)).
- Building an automated feedback pipeline ([link](https://github.com/langchain-ai/langsmith-cookbook/blob/main/feedback-examples/algorithmic-feedback/algorithmic_feedback.ipynb)).
- How to evaluate and audit your RAG workflows ([link](https://github.com/langchain-ai/langsmith-cookbook/tree/main/testing-examples/qa-correctness)).
- How to fine-tune a LLM on real usage data ([link](https://github.com/langchain-ai/langsmith-cookbook/blob/main/fine-tuning-examples/export-to-openai/fine-tuning-on-chat-runs.ipynb)).
- How to use the [LangChain Hub](https://smith.langchain.com/hub) to version your prompts ([link](https://github.com/langchain-ai/langsmith-cookbook/blob/main/hub-examples/retrieval-qa-chain/retrieval-qa.ipynb))
The ConstitutionalChain is a chain that ensures the output of a language model adheres to a predefined set of constitutional principles. By incorporating specific rules and guidelines, the ConstitutionalChain filters and modifies the generated content to align with these principles, thus providing more controlled, ethical, and contextually appropriate responses. This mechanism helps maintain the integrity of the output while minimizing the risk of generating content that may violate guidelines, be offensive, or deviate from the desired context.
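For instance, wrapping an existing chain with a principle might look like this sketch (assuming an OpenAI API key; the principle text is illustrative):

```python
from langchain.chains import LLMChain
from langchain.chains.constitutional_ai.base import ConstitutionalChain
from langchain.chains.constitutional_ai.models import ConstitutionalPrinciple
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

llm = OpenAI(temperature=0)
qa_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate(
        template="Question: {question}\nAnswer:", input_variables=["question"]
    ),
)

ethical_principle = ConstitutionalPrinciple(
    name="Ethical Principle",
    critique_request="The model should only talk about ethical and legal things.",
    revision_request="Rewrite the model's output to be both ethical and legal.",
)

constitutional_chain = ConstitutionalChain.from_llm(
    chain=qa_chain,
    constitutional_principles=[ethical_principle],
    llm=llm,
    verbose=True,
)
constitutional_chain.run(question="How can I get away with speeding?")
```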
import Example from "@snippets/modules/chains/additional/constitutional_chain.mdx"
One of the key concerns with using LLMs is that they may generate harmful or unethical text. This is an area of active research in the field. Here we present some built-in chains inspired by this research, which are intended to make the outputs of LLMs safer.
- [Moderation chain](/docs/guides/safety/moderation): Explicitly check if any output text is harmful and flag it.
- [Constitutional chain](/docs/guides/safety/constitutional_chain): Prompt the model with a set of principles which should guide its behavior.
- [Logical Fallacy chain](/docs/guides/safety/logical_fallacy_chain): Checks the model output against logical fallacies to correct any deviation.
- [Amazon Comprehend moderation chain](/docs/guides/safety/amazon_comprehend_chain): Use [Amazon Comprehend](https://aws.amazon.com/comprehend/) to detect and handle PII and toxicity.
Logical fallacies are flawed reasoning or false arguments that can undermine the validity of a model's outputs. Examples include circular reasoning, false
dichotomies, ad hominem attacks, etc. Machine learning models are optimized to perform well on specific metrics like accuracy, perplexity, or loss. However,
optimizing for metrics alone does not guarantee logically sound reasoning.
Language models can learn to exploit flaws in reasoning to generate plausible-sounding but logically invalid arguments. When models rely on fallacies, their outputs become unreliable and untrustworthy, even if they achieve high scores on metrics. Users cannot depend on such outputs. Propagating logical fallacies can spread misinformation, confuse users, and lead to harmful real-world consequences when models are deployed in products or services.
Unlike other quality issues, monitoring and testing specifically for logical flaws is challenging: it requires reasoning about arguments rather than pattern matching.
Therefore, it is crucial that model developers proactively address logical fallacies in addition to optimizing metrics. Specialized techniques like causal modeling, robustness testing, and bias mitigation can help avoid flawed reasoning. Overall, allowing logical flaws to persist makes models less safe and ethical. Eliminating fallacies ensures model outputs remain logically valid and aligned with human reasoning. This maintains user trust and mitigates risks.
```python
# Imports
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains.llm import LLMChain
from langchain_experimental.fallacy_removal.base import FallacyChain
```
```python
llm = OpenAI(temperature=0)

# A prompt that deliberately elicits fallacious answer explanations
misleading_prompt = PromptTemplate(
    template="""You have to respond by using only logical fallacies inherent in your answer explanations.
Question: {question}
Bad answer:""",
    input_variables=["question"],
)
misleading_chain = LLMChain(llm=llm, prompt=misleading_prompt)

# Wrap the chain so the fallacious answer is critiqued and corrected
fallacy_chain = FallacyChain.from_llm(
    chain=misleading_chain,
    logical_fallacies=FallacyChain.get_fallacies(["correction"]),
    llm=llm,
    verbose=True,
)

fallacy_chain.run(question="How do I know the earth is round?")
```
<CodeOutputBlock lang="python">
```
> Entering new FallacyChain chain...
Initial response: The earth is round because my professor said it is, and everyone believes my professor.
Applying correction...
Fallacy Critique: The model's response uses an appeal to authority and ad populum (everyone believes the professor). Fallacy Critique Needed.
Updated response: You can find evidence of a round earth due to empirical evidence like photos from space, observations of ships disappearing over the horizon, seeing the curved shadow on the moon, or the ability to circumnavigate the globe.
> Finished chain.
'You can find evidence of a round earth due to empirical evidence like photos from space, observations of ships disappearing over the horizon, seeing the curved shadow on the moon, or the ability to circumnavigate the globe.'
```
</CodeOutputBlock>
This notebook walks through examples of how to use a moderation chain, and several common ways of doing so. Moderation chains are useful for detecting text that could be hateful, violent, etc. This can be useful to apply both to user input and to the output of a Language Model. Some API providers, like OpenAI, [specifically prohibit](https://beta.openai.com/docs/usage-policies/use-case-policy) you, or your end users, from generating some types of harmful content. To comply with this (and to generally prevent your application from being harmful), you may often want to append a moderation chain to your LLMChains, in order to make sure any output the LLM generates is not harmful.
If the content passed into the moderation chain is harmful, there is no single best way to handle it; the right approach probably depends on your application. Sometimes you may want to throw an error in the chain (and have your application handle that). Other times, you may want to return something to the user explaining that the text was harmful. There could even be other ways to handle it! We will cover all of these approaches in this walkthrough.
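For instance, the simplest setup might look like this sketch (assuming an OpenAI API key; `OpenAIModerationChain` calls OpenAI's moderation endpoint):

```python
from langchain.chains import OpenAIModerationChain

moderation_chain = OpenAIModerationChain()
moderation_chain.run("This is fine.")
# -> 'This is fine.'

# With error=True, flagged content raises an error instead of
# returning a canned "violates policies" message.
strict_chain = OpenAIModerationChain(error=True)
```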
import Example from "@snippets/modules/chains/additional/moderation.mdx"
This walkthrough demonstrates how to use an agent optimized for conversation. Other agents are often optimized for using tools to figure out the best response, which is not ideal in a conversational setting where you may want the agent to be able to chat with the user as well.
import Example from "@snippets/modules/agents/agent_types/conversational_agent.mdx"
<Example/>
import ChatExample from "@snippets/modules/agents/agent_types/chat_conversation_agent.mdx"
Plan-and-execute agents accomplish an objective by first planning what to do, then executing the subtasks. This idea is largely inspired by [BabyAGI](https://github.com/yoheinakajima/babyagi) and then the ["Plan-and-Solve" paper](https://arxiv.org/abs/2305.04091).
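A minimal setup might look like this sketch (based on `langchain_experimental`, assuming an OpenAI API key; the calculator tool is illustrative):

```python
from langchain.agents.tools import Tool
from langchain.chains import LLMMathChain
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
from langchain_experimental.plan_and_execute import (
    PlanAndExecute,
    load_agent_executor,
    load_chat_planner,
)

model = ChatOpenAI(temperature=0)
llm_math = LLMMathChain.from_llm(llm=OpenAI(temperature=0))
tools = [
    Tool(
        name="Calculator",
        func=llm_math.run,
        description="useful for answering math questions",
    )
]

planner = load_chat_planner(model)  # plans the subtasks
executor = load_agent_executor(model, tools, verbose=True)  # executes each one
agent = PlanAndExecute(planner=planner, executor=executor, verbose=True)
agent.run("What is 3 raised to the 0.5 power?")
```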