Compare commits

..

144 Commits

Author SHA1 Message Date
Bagatur
944321c6ab bump 245 (#8359) 2023-07-27 06:53:24 -07:00
Rubén Barragán
ef6332ead6 Support loading files from Dropbox (#8271)
## Description
This commit introduces the `DropboxLoader` class, a new document loader
that allows loading files from Dropbox into the application. The loader
relies on a Dropbox app, which requires creating an app on Dropbox,
obtaining the necessary scope permissions, and generating an access
token. Additionally, the dropbox Python package is required.

The `DropboxLoader` class is designed to be used as a document loader
for processing various file types, including text files, PDFs, and
Dropbox Paper files.
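A minimal hypothetical usage sketch (argument names follow the loader description above; the token and folder path are placeholders):

```python
from langchain.document_loaders import DropboxLoader

loader = DropboxLoader(
    dropbox_access_token="DROPBOX_ACCESS_TOKEN",  # placeholder token
    dropbox_folder_path="",  # "" = the Dropbox app's root folder (assumed)
)
docs = loader.load()
```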

## Dependencies
`pip install dropbox` and `pip install unstructured` for PDF reading.

## Tag maintainer
@rlancemartin, @eyurtsev (from Data Loaders). I'd appreciate some
feedback here 🙏 .

## Social Networks
https://github.com/rubenbarragan
https://www.linkedin.com/in/rgbarragan/
https://twitter.com/RubenBarraganP

---------

Co-authored-by: Ruben Barragan <rbarragan@Rubens-MacBook-Air.local>
2023-07-27 06:36:08 -07:00
Pranay Chandekar
41bb3a6f9b fixed the bug #8343 (#8345)
- Issue: #8343

Signed-off-by: Pranay Chandekar <pranayc6@gmail.com>
2023-07-27 06:33:15 -07:00
Ikko Eltociear Ashimine
934ea80780 Fix typo in Etherscan.ipynb (#8340)
specifc  -> specific
2023-07-27 01:57:19 -07:00
Martin Krasser
93260a9922 Fix broken make targets format_diff and lint_diff (#8344)
Since the refactoring into sub-projects `libs/langchain` and
`libs/experimental`, the `make` targets `format_diff` and `lint_diff` do
not work anymore when running `make` from these subdirectories. Reason
is that

```
PYTHON_FILES=$(shell git diff --name-only --diff-filter=d master | grep -E '\.py$$|\.ipynb$$')
```

generates paths from the project's root directory instead of the
corresponding subdirectories. This PR fixes this by adding a
`--relative` command line option.

- Tag maintainer: @baskaryan
2023-07-27 01:56:55 -07:00
Harrison Chase
ae78ef7fe6 bump experimental to 005 (#8339) 2023-07-26 21:46:28 -07:00
Vadim Gubergrits
e7e5cb9d08 Tree of Thought introducing a new ToTChain. (#5167)
# [WIP] Tree of Thought introducing a new ToTChain.

This PR adds a new chain called ToTChain that implements the ["Large
Language Model Guided
Tree-of-Thought"](https://arxiv.org/pdf/2305.08291.pdf) paper.

There's a notebook example `docs/modules/chains/examples/tot.ipynb` that
shows how to use it.


Implements #4975


## Who can review?

Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:

- @hwchase17
- @vowelparrot

---------

Co-authored-by: Vadim Gubergrits <vgubergrits@outbox.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-07-26 21:29:39 -07:00
William FH
412e29d436 Fix notebook that 'cannot convert' via nbdoc_build (#8333) 2023-07-26 18:54:23 -07:00
William FH
9eb7e6e27f Delete Old Evals Examples (#8252)
Still retain:
- Comparison Examples
- Data + QA walkthrough
- QA (but really minimize it)
2023-07-26 18:46:54 -07:00
Saurabh Misra
db9d5b213a Optimize the cosine_similarity_top_k function performance (#8151)
Optimizing important numerical code and making it run faster.

Performance went up by 148% (2.48x): runtime went down from 138,715µs
to 56,020µs.

Optimization explanation:

The `cosine_similarity_top_k` function is where we made the most
significant optimizations.
Instead of sorting the entire score array, which has to consider all
elements, we use `np.argpartition` to find the indices of the top_k
largest scores; this operation is O(n), which outperforms a full sort.
Since `np.argpartition` doesn't guarantee the order of the selected
values, we then use `argsort()` to get the indices that would sort our
top-k values after partitioning, which is much more efficient because it
only sorts the top-k elements, not the entire array. Finally, to get the
row and column indices of the sorted top-k scores in the original score
array, we use `np.unravel_index`. This operation is more efficient and
cleaner than a list comprehension.
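A minimal NumPy sketch of that selection pattern (illustrative names, not the actual library code):

```python
import numpy as np


def top_k_indices(score_array: np.ndarray, top_k: int):
    flat = score_array.flatten()
    # O(n) partial selection: the top_k largest scores, in no particular order.
    top = np.argpartition(flat, -top_k)[-top_k:]
    # Sort only those k winners, largest first.
    top = top[np.argsort(flat[top])][::-1]
    # Map flat indices back to (row, column) pairs in the original array.
    return np.unravel_index(top, score_array.shape)


rows, cols = top_k_indices(np.random.rand(50, 50), top_k=5)
```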

The code has been tested for correctness by running the following
snippet on both the original function and the optimized function and
averaged over 5 times.
```python
import gc
import time

import numpy as np

# Import path at the time of this PR; adjust if the helper moves.
from langchain.utils.math import cosine_similarity_top_k


def test_cosine_similarity_top_k_large_matrices():
    X = np.random.rand(1000, 1000)
    Y = np.random.rand(1000, 1000)
    top_k = 100
    score_threshold = 0.5
    gc.disable()
    counter = time.perf_counter_ns()
    return_value = cosine_similarity_top_k(X, Y, top_k, score_threshold)
    duration = time.perf_counter_ns() - counter
    gc.enable()
    return duration
```

@hwaking @hwchase17 @jerwelborn 

Unit tests pass, I also generated more regression tests which all
passed.
2023-07-26 18:03:49 -07:00
Fabrizio Ruocco
ddc353a768 Azure Cognitive Search: Custom index and scoring profile support (#6843)
Description: Adding support for custom index and scoring profile support
in Azure Cognitive Search
@hwchase17

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-07-26 17:58:01 -07:00
Leonid Ganeline
ed24de8467 removed namespace title (#8208)
This change compacts the left-side Navbar (ToC) of the [API
Reference](https://api.python.langchain.com/en/latest/api_reference.html).
Currently, almost every namespace item is split into two lines, for example
`langchain.chat_models: Chat Models`.
We remove the `Chat Models` part and leave only `langchain.chat_models`.
This effectively compacts the navbar and increases the main page's
usability. On my screen, it reduces the number of lines in the ToC from
28 to 18, which is huge.

Removing the namespace "title" (like `Chat Models`) does not remove any
information because the title is composed directly from the namespace.
API Reference users are developers. Usability for them is very
important. We see less text => we find faster.
2023-07-26 16:45:23 -07:00
Kacper Łukawski
c5988c1d4b Implement async support for Cohere (#8237)
This PR introduces async API support for Cohere, both LLM and
embeddings. It requires updating the `cohere` package to `^4`.

Tagging @hwchase17, @baskaryan, @agola11

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-07-26 15:51:18 -07:00
Daniel Alexander Brenot
bf1357f584 Added async support to PlanAndExecute Chain (#8239)
- Description: Adds async support to the PlanAndExecute Chain

Maintainer responsibilities:
  - Async: @agola11

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-07-26 15:16:07 -07:00
Bastin Florian
a3ac9b23eb feat(confluence): add markdown format option (#8246)
# Description:
**Add the possibility to keep text as Markdown in the ConfluenceLoader**
Adds a bool parameter that allows keeping the Markdown format of the
Confluence pages.
It is useful because it allows using MarkdownHeaderTextSplitter as a
DataSplitter.
If this parameter is set to True in the load() method, the pages are
extracted using the markdownify library.

# Issue:
[4407](https://github.com/langchain-ai/langchain/issues/4407)
# Dependencies:
Add the markdownify library
# Tag maintainer:
@rlancemartin, @eyurtsev
# Twitter handle:
FloBastinHeyI - https://twitter.com/FloBastinHeyI

---------

Co-authored-by: Florian Bastin <florian.bastin@octo.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-07-26 15:00:27 -07:00
Leonid Ganeline
ee6ff96e28 docstrings cleanup (#8311)
- added missing docstrings
- changed docstrings into a consistent format
  
@baskaryan
2023-07-26 14:13:10 -07:00
Bagatur
ceab0a7c1f update api ref style (#8318) 2023-07-26 14:12:44 -07:00
Rohit Gupta
e5dba8978a Avoid re-computation of embedding in weaviate similarity search (#8284)
---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-07-26 13:31:55 -07:00
William FH
01a9b06400 Add api cross ref linking (#8275)
Example of how it would show up in our python docs:


![image](https://github.com/langchain-ai/langchain/assets/13333726/0f0a88cc-ba4a-4778-bc47-118c66807f15)


Examples added to the reference docs:

https://api.python.langchain.com/en/wfh-api_crosslink/vectorstores/langchain.vectorstores.chroma.Chroma.html#langchain.vectorstores.chroma.Chroma


![image](https://github.com/langchain-ai/langchain/assets/13333726/dcd150de-cb56-4d42-b49a-a76a002a5a52)
2023-07-26 12:38:58 -07:00
Nuno Campos
a612800ef0 Runnable single protocol (#7800)
Objects implementing Runnable: BasePromptTemplate, LLM, ChatModel,
Chain, Retriever, OutputParser

- [x] Implement Runnable in base Retriever
- [x] Raise TypeError in operator methods for unsupported things 
- [x] Implement dict which calls values in parallel and outputs dict
with results
- [x] Merge in `+` for prompts
- [x] Confirm precedence order for operators, ideal would be `+` `|`,
https://docs.python.org/3/reference/expressions.html#operator-precedence
- [x] Add support for openai functions, ie. Chat Models must return
messages
- [x] Implement BaseMessageChunk return type for BaseChatModel, a
subclass of BaseMessage which implements `__add__` to return
BaseMessageChunk, concatenating all str args
- [x] Update implementation of stream/astream for llm and chat models to
use new `_stream`, `_astream` optional methods, with default
implementation in base class `raise NotImplementedError` use
https://stackoverflow.com/a/59762827 to see if it is implemented in base
class
- [x] Delete the IteratorCallbackHandler (leave the async one because
people are using it)
- [x] Make BaseLLMOutputParser implement Runnable, accepting either str
or BaseMessage
---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2023-07-26 12:16:46 -07:00
Bharat
04a4d3e312 Fixes #8310 Fix maximum recursion depth exceeded error (#8313)
The ElasticsearchVectorStore.as_retriever() method is returning
`RecursionError: maximum recursion depth exceeded`
because of an incorrect field reference in the `embeddings()` method.

  - Description: Fix RecursionError caused by a typo
  - Issue: #8310
  - Dependencies: None
  - Tag maintainer: @eyurtsev
  - Twitter handle: bpatel
2023-07-26 12:15:37 -07:00
Caitlin2694
b9db3dd09b Fix "missing key op" RDFGraph OWL serialization (#8276)
- Description: Fix "missing key op" error in RDFGraph OWL Serialization
  - Issue: #8263
  - Dependencies: None
  - Tag maintainer: @baskaryan
2023-07-26 12:14:56 -07:00
Eugene Yurtsev
862e9aed66 ChatPromptTemplate: Update doc-strings, update from_role_strings behavior (#8308)
* Update doc-strings in ChatPromptTemplate
* Update from_role_strings classmethod to use well known roles
2023-07-26 15:02:36 -04:00
Bagatur
2c2fd9ff13 bump 244 (#8314) 2023-07-26 11:58:26 -07:00
Lance Martin
77c0582243 Clean queries prior to search (#8309)
With some search tools, we see no results returned if the query is a
numeric list.

E.g., if we pass:
```
'1. "LangChain vs LangSmith: How do they differ?"'
```

We see:
```
No good Google Search Result was found
```

Local testing w/ Streamlit:

![image](https://github.com/langchain-ai/langchain/assets/122662504/0a7e3dca-59e8-415e-8df6-bd9e4ea962ee)
2023-07-26 11:48:28 -07:00
shibuiwilliam
6b88fbd9bb add test for embedding distance evaluation (#8285)
Add tests for embedding distance evaluation

  - Description: Add tests for embedding distance evaluation
  - Issue: None
  - Dependencies: None
  - Tag maintainer: @baskaryan
  - Twitter handle: @MlopsJ
2023-07-26 11:45:50 -07:00
Riche Akparuorji
f3d2fdd54c Fix for code snippet in documentation (#8290)
- Description: I fixed an issue in the code snippet related to the
variable name and the evaluation of its length. The original code used
the variable "docs," but the correct variable name is "docs_svm" after
using the SVMRetriever.
- maintainer: @baskaryan
- Twitter handle: @iamreechi_

Co-authored-by: iamreechi <richieakparuorji>
2023-07-26 11:31:08 -07:00
Bagatur
f27176930a fix geopandas link (#8305) 2023-07-26 11:30:17 -07:00
Timon Palm
70604e590f DuckDuckGoSearch News Tool (#8292)
Description:
I wanted to use the DuckDuckGoSearch tool in an agent to let it get the
latest news for a topic. DuckDuckGoSearch already has an implemented
function for retrieving news articles, but there wasn't a tool to use
it. I simply adapted the SearchResult class with an extra argument
"backend". You can set it to "news" to only get news articles.

Furthermore, I added an example to the DuckDuckGo notebook on how to
further customize the results by using the DuckDuckGoSearchAPIWrapper.
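A hypothetical usage of the new backend argument (the tool name matches this codebase, but treat the exact call as illustrative):

```python
from langchain.tools import DuckDuckGoSearchResults

news_tool = DuckDuckGoSearchResults(backend="news")  # "news" limits results to articles
print(news_tool.run("LangChain"))
```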

Dependencies: no new dependencies
---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-07-26 11:30:01 -07:00
Aarav Borthakur
8ce661d5a1 Docs: Fix Rockset links (#8214)
Fix broken Rockset links.

Right now links at
https://python.langchain.com/docs/integrations/providers/rockset are
broken.
2023-07-26 10:38:37 -07:00
Byron Saltysiak
61347bd322 giving path to the copy command for *.toml files (#8294)
Description: in the .devcontainer, docker-compose build is currently
failing due to the src paths in the COPY command. This change adds the
full paths to pyproject.toml and poetry.toml to allow the build to
run.
Issue: 

You can see the issue if you try to build the dev docker image with:
```
cd .devcontainer
docker-compose build
```

Dependencies: none
Twitter handle: byronsalty
2023-07-26 10:37:03 -07:00
happyxhw
6384c1ec8f fix: ElasticVectorSearch.from_documents failed #8293 (#8296)
- Description: fix ElasticVectorSearch.from_documents with the
elasticsearch_url param
- Issue: ElasticVectorSearch.from_documents failed #8293


---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-07-26 10:33:52 -07:00
Jon Bennion
ad38eb2d50 correction to reference to code (#8301)
- Description: fixes typo referencing code

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-07-26 10:33:18 -07:00
jacobswe
83a53e2126 Bug Fix: AzureChatOpenAI streaming with function calls (#8300)
- Description: During streaming, the first chunk may only contain the
name of an OpenAI function and not any arguments. In this case, the
current code presumes there is a streaming response and tries to append
to it, but gets a KeyError. This fixes that case by checking if the
arguments key exists, and if not, creates a new entry instead of
appending.
  - Issue: Related to #6462

Sample Code:
```python
llm = AzureChatOpenAI(
    deployment_name=deployment_name,
    model_name=model_name,
    streaming=True
)

tools = [PythonREPLTool()]
callbacks = [StreamingStdOutCallbackHandler()]

agent = initialize_agent(
    tools=tools,
    llm=llm,
    agent=AgentType.OPENAI_FUNCTIONS,
    callbacks=callbacks
)

agent('Run some python code to test your interpreter')
```

Previous Result:
```
File ...langchain/chat_models/openai.py:344, in ChatOpenAI._generate(self, messages, stop, run_manager, **kwargs)
    342         function_call = _function_call
    343     else:
--> 344         function_call["arguments"] += _function_call["arguments"]
    345 if run_manager:
    346     run_manager.on_llm_new_token(token)

KeyError: 'arguments'
```

New Result:
```python
{'input': 'Run some python code to test your interpreter',
 'output': "The Python code `print('Hello, World!')` has been executed successfully, and the output `Hello, World!` has been printed."}
```

Co-authored-by: jswe <jswe@polencapital.com>
2023-07-26 10:11:50 -07:00
German Martin
457a4730b2 Fix the mangling issue on several VectorStores child classes. (#8274)
- Description: Fix mangling issue affecting a couple of VectorStore
classes, including Redis.
  - Issue: https://github.com/langchain-ai/langchain/issues/8185
  - @rlancemartin

This is a simple issue, but I lack some context on the original
implementation.
My changes are perhaps not the definitive fix, but should start a quick
discussion.

@hinthornw Tagging you since one of your changes introduced this
[here.](c38965fcba)
2023-07-26 09:48:55 -07:00
Alec Flett
4da43f77e5 Add ability to load (deserialize) objects from other namespaces (#7726)
I have some Prompt subclasses in my project that I'd like to be able to
deserialize in callbacks. Right now `loads()`/`load()` will bail when it
encounters my object, but I know I can trust the objects because they're
in my own projects.

2023-07-26 16:59:28 +01:00
Bagatur
5c6dcb1960 bump 243 (#8289) 2023-07-26 05:41:56 -07:00
William FH
adf019724f unpack later (#8278)
Fix https://github.com/langchain-ai/langchain/issues/8272
2023-07-26 01:53:22 -07:00
Naveen Tatikonda
9cbefcc56c [ OpenSearch ] : Add AOSS Support to OpenSearch (#8256)
### Description

This PR includes the following changes:

- Adds AOSS (Amazon OpenSearch Service Serverless) support to
OpenSearch. Please refer to the documentation on how to use it.
- While creating an index, AOSS only supports Approximate Search with
`nmslib` and `faiss` engines. During Search, only Approximate Search and
Script Scoring (on doc values) are supported.
- This PR also adds support to `efficient_filter` which can be used with
`faiss` and `lucene` engines.
- The `lucene_filter` is deprecated. Instead please use the
`efficient_filter` for the lucene engine.
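As a hedged illustration of the new filter (instance setup and filter body are placeholders; the exact argument shape may differ):

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import OpenSearchVectorSearch

docsearch = OpenSearchVectorSearch(
    opensearch_url="https://example-collection.us-east-1.aoss.amazonaws.com",
    index_name="test-index",
    embedding_function=OpenAIEmbeddings(),
)
# efficient_filter works with the faiss and lucene engines, per the notes above.
docs = docsearch.similarity_search(
    "my query",
    efficient_filter={"term": {"metadata.category": "news"}},
    k=4,
)
```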


Signed-off-by: Naveen Tatikonda <navtat@amazon.com>
2023-07-25 23:59:36 -07:00
Lance Martin
7a00f17033 Web research retriever (#8102)
Given a user question, this will:
* Use LLM to generate a set of queries.
* Query for each.
* The URLs from search results are stored in self.urls.
* A check is performed for any new URLs that haven't been processed yet
(not in self.url_database).
* Only these new URLs are loaded, transformed, and added to the
vectorstore.
* The vectorstore is queried for relevant documents based on the
questions generated by the LLM.
* Only unique documents are returned as the final result.

This code will avoid reprocessing of URLs across multiple runs of
similar queries, which should improve the performance of the retriever.
It also keeps track of all URLs that have been processed, which could be
useful for debugging or understanding the retriever's behavior.
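The URL bookkeeping described above amounts to something like this sketch (names hypothetical):

```python
def load_new_urls(urls: list, url_database: set, load_and_transform) -> list:
    new_docs = []
    for url in set(urls):
        if url not in url_database:  # skip URLs processed in earlier runs
            new_docs.extend(load_and_transform(url))
            url_database.add(url)  # remember it for future queries
    return new_docs
```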

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-07-25 19:58:00 -07:00
Rithwik Ediga Lakhamsani
d1d691caa4 Added Databricks support to MLflow Callback (#7906)
Added a quick check to make integration easier with Databricks; another
option would be to make a new class, but this seemed more
straightforward.

cc: @liangz1 Can this be done in a more straightforward way?
2023-07-25 18:23:54 -07:00
William FH
479cc086ba Rm Github Import (#8257)
It's not a required dep, but it would break people's builds

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-07-25 18:20:58 -07:00
Byron Saltysiak
68a906bb31 added lxml to the pip install example since it is required (#8260)
- Description: The Trello data loader example didn't work without an
additional dependency installed: lxml
  - Issue: na
2023-07-25 18:16:07 -07:00
Emory Petermann
7734a2b5ab update golden-query notebook and fix typo in golden docs (#8253)
Updating the documentation to be consistent for the Golden query tool
and to have a better introduction to the tool.
2023-07-25 18:15:48 -07:00
Erick Friis
c14571ab37 New enterprise support form (#8254) 2023-07-25 15:43:27 -07:00
William FH
dd87275dde Add LLMChain example of memory with chat models (#8250) 2023-07-25 15:20:32 -07:00
William FH
1f40d3e094 Update Broken Links (#8247) 2023-07-25 12:26:39 -07:00
Eugene Yurtsev
ec069381fb Remove operator overloading for BaseMessage (#8245)
This PR removes operator overloading for base message.

Removing the `+` operator from base message will help make sure that:

1) There's no need to re-define `+` for message chunks
2) That there's no unexpected behavior in terms of types changing
(adding two messages yields a ChatPromptTemplate which is not a message)
2023-07-25 20:12:19 +01:00
William FH
30c2d3cd06 Update references (#8243) 2023-07-25 11:49:25 -07:00
jacobswe
0af48b06d0 Bug Fix #6462 (#8241)
- Description: Small change to fix broken Azure streaming. More complete
migration probably still necessary once the new API behavior is
finalized.
- Issue: Implements fix by @rock-you in #6462 
- Dependencies: N/A

There don't seem to be any tests specifically for this, and I was having
some trouble adding some. This is just a small temporary fix to allow
for the new API changes that OpenAI are releasing without breaking any
other code.

---------

Co-authored-by: Jacob Swe <jswe@polencapital.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-07-25 11:30:22 -07:00
Bagatur
c1ea8da9bc bump 242 (#8238) 2023-07-25 08:01:37 -07:00
shibuiwilliam
af788b7cf0 Add/faiss test score threshold (#8224)
# What
- This is to add test for faiss vector store with score threshold

2023-07-25 09:56:29 -04:00
shibuiwilliam
bed8eb978e use logger instead of logging (#8225)
# What
- Use `logger` instead of using logging directly.

2023-07-25 09:55:30 -04:00
Leonid Ganeline
afc55a4fee Refactored requests (#8203)
Refactored `requests.py`. The same as
https://github.com/langchain-ai/langchain/pull/7961 #8098 #8099
requests.py is in the root code folder. This creates the
`langchain.requests: Requests` group on the API Reference navigation
ToC, on the same level as Chains and Agents, which is incorrect.

Refactoring:

- copied requests.py content into utils/requests.py
- added a backwards-compatibility ref in the original requests.py
- updated imports of requests objects

@hwchase17, @baskaryan
2023-07-24 21:23:59 -07:00
William FH
0a16b3d84b Update Integrations links (#8206) 2023-07-24 21:20:32 -07:00
Alex Stachowiak
a7efa95775 Update base chain type hints (#7680)
Addresses #7578. `run()` can return dictionaries, Pydantic objects or
strings, so the type hints should reflect that. See the chain from
`create_structured_output_chain` for an example of a non-string return
type from `run()`.

I've updated the BaseLLMChain return type hint from `str` to `Any`,
although the differences between `run()` and `__call__()` now seem less
clear.

CC: @baskaryan

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-07-24 21:16:41 -07:00
Ani peter benjamin
e58b1d7073 feat: temp fixed Could not parse LLM output on agents folder (#7746)
---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-07-24 19:20:37 -07:00
Dayuan Jiang
125ae6d9de add Hybrid retriever that not require any external service (#8108)
- Until now, hybrid search was limited to modules requiring external
services, such as Weaviate/Pinecone Hybrid Search. However, I have
developed a hybrid retriever that can merge a list of retrievers using
the [Reciprocal Rank
Fusion](https://plg.uwaterloo.ca/~gvcormac/cormacksigir09-rrf.pdf)
algorithm. This new approach, similar to Weaviate hybrid search, does
not require the initialization of any external service.
  - Dependencies: No
  - Twitter handle: dayuanjian21687
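For reference, a minimal sketch of Reciprocal Rank Fusion itself (k=60 is the constant suggested in the linked paper; rankings are lists of doc ids, best first):

```python
def reciprocal_rank_fusion(rankings, k=60):
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)  # best fused score first

print(reciprocal_rank_fusion([["a", "b", "c"], ["b", "c", "d"]]))  # -> ['b', 'c', 'a', 'd']
```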

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-07-24 19:16:10 -07:00
Dario Ruben
04e45f9cde Fixed grammar in LLM models documentation (#8210)
Description: I fixed a typo in the documentation related to LLMs
(https://python.langchain.com/docs/modules/model_io/models/llms/)
2023-07-24 19:14:32 -07:00
earonesty
59a7c5877a Update supabase.py, add filter to query (matches latest supabase docs & js) (#7721)
- Description: Update supabase to support an optional filter argument
(used if present; if absent, nothing breaks)
- Tag maintainer: @rlancemartin, @eyurtsev

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-07-24 19:13:52 -07:00
Aditya S
00de334f81 Fixed sparql SELECT and UPDATE query function (#7758)
- Description: Changed the "SELECT" and "UPDATE" intent check from "="
to "in"
- Issue: Based on my own testing, most LLMs (StarCoder, NeoGPT3,
etc.) don't return a single-word response ("SELECT" / "UPDATE");
with this modification, we can accomplish the same output without
curated prompt engineering (see the sketch after this list).
  - Dependencies: None
  - Tag maintainer: @baskaryan
  - Twitter handle: @aditya_0290
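A minimal sketch of the relaxed check (illustrative names, not the actual chain code):

```python
def classify_intent(llm_output: str) -> str:
    text = llm_output.upper()
    if "SELECT" in text:  # was: text == "SELECT"
        return "SELECT"
    if "UPDATE" in text:  # was: text == "UPDATE"
        return "UPDATE"
    return "UNKNOWN"

print(classify_intent("Sure! This calls for a SELECT query."))  # -> SELECT
```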


Thank you for maintaining this library, Keep up the good efforts.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-07-24 18:29:30 -07:00
William FH
3662aca7d4 Add async support for transform chain (#8205) 2023-07-24 17:45:17 -07:00
Taqi Jaffri
8f158b72fc Added stop sequence support to replicate (#8107)
Stop sequences are useful if you are doing long-running completions and
need to early-out rather than running for the full max_length... not
only does this save inference cost on Replicate, it is also much faster
if you are going to truncate the output later anyway.

Other LLMs support stop sequences natively (e.g. OpenAI), but I didn't
see this for Replicate, so I added it via their prediction cancel
method.
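A hedged sketch of the idea (the streaming-iterator and cancel call names follow the public `replicate` Python client, but treat them as assumptions):

```python
def generate_with_stop(prediction, stop_sequences):
    """Stream tokens and cancel the prediction at the first stop sequence."""
    text = ""
    for token in prediction.output_iterator():  # assumed streaming iterator
        text += token
        for stop in stop_sequences:
            if stop in text:
                prediction.cancel()  # early-out: stop paying for inference
                return text.split(stop)[0]
    return text
```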

Housekeeping: I ran `make format` and `make lint`, no issues reported in
the files I touched.

I did update the replicate integration test and ran `poetry run pytest
tests/integration_tests/llms/test_replicate.py` successfully.

Finally, I am @tjaffri https://twitter.com/tjaffri for feature
announcement tweets... or if you could please tag @docugami
https://twitter.com/docugami we would really appreciate that :-)

Co-authored-by: Taqi Jaffri <tjaffri@docugami.com>
2023-07-24 17:34:13 -07:00
glaze
f7ad14acfa Add etherscan document loader (#7943)
@rlancemartin 
The modification includes:
* EtherscanLoader
* test_etherscan
* documentation ipynb

I have run the test, lint, format, and spell check. I do encounter a
linting error on the ipynb; I am not sure how to address it.
```
docs/extras/modules/data_connection/document_loaders/integrations/Etherscan.ipynb:55: error: Name "null" is not defined  [name-defined]
docs/extras/modules/data_connection/document_loaders/integrations/Etherscan.ipynb:76: error: Name "null" is not defined  [name-defined]
Found 2 errors in 1 file (checked 1 source file)
```
- Description: The Etherscan loader uses the Etherscan API to load
transaction histories under specific accounts on Ethereum Mainnet.
- No dependency is introduced by this PR.
- Twitter handle: glazecl

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-07-24 17:09:16 -07:00
Julien Salinas
73d5cba308 Allow user to modify the GPU and language settings when using NLP Cloud (#7985)
---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-07-24 17:08:56 -07:00
Bagatur
483f6c2fe3 mv eval docs (#8209) 2023-07-24 16:31:20 -07:00
Liu Ming
24f889f2bc Change with_history option to False for ChatGLM by default (#8076)
The ChatGLM LLM integration will by default accumulate conversation
history (with_history=True) and send it to the ChatGLM backend API, which
is not expected in most cases. This PR sets with_history=False by
default; users should explicitly set llm.with_history=True to turn this
feature on. Related PR: #8048 #7774
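In other words (a hypothetical snippet; the endpoint URL is a placeholder):

```python
from langchain.llms import ChatGLM

llm = ChatGLM(endpoint_url="http://127.0.0.1:8000")  # history off by default now
llm.with_history = True  # opt back in to accumulating conversation history
```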

---------

Co-authored-by: mlot <limpo2000@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-07-24 15:46:02 -07:00
Mahip Soni
1f055775f8 Fixing issue with MSSQL connection (#8040)
My team recently faced an issue while using MSSQL and passing a schema
name.

We noticed that "SET search_path TO {self.schema}" was being called for
us, which is not a valid MSSQL query; it is specific to the PostgreSQL
dialect.

We were able to run it locally after this fix.


---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-07-24 15:45:40 -07:00
Anthony Mahanna
76102971c0 ArangoDB/AQL support for Graph QA Chain (#7880)
**Description**: Serves as an introduction to LangChain's support for
[ArangoDB](https://github.com/arangodb/arangodb), similar to
https://github.com/hwchase17/langchain/pull/7165 and
https://github.com/hwchase17/langchain/pull/4881

**Issue**: No issue has been created for this feature

**Dependencies**: `python-arango` has been added as an optional
dependency via the `CONTRIBUTING.md` guidelines
 
**Twitter handle**: [at]arangodb

- Integration test has been added
- Notebook has been added:
[graph_arangodb_qa.ipynb](https://github.com/amahanna/langchain/blob/master/docs/extras/modules/chains/additional/graph_arangodb_qa.ipynb)

[![Open In
Collab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/amahanna/langchain/blob/master/docs/extras/modules/chains/additional/graph_arangodb_qa.ipynb)

```
docker run -p 8529:8529 -e ARANGO_ROOT_PASSWORD= arangodb/arangodb
```

```
pip install git+https://github.com/amahanna/langchain.git
```

```python
from arango import ArangoClient

from langchain.chat_models import ChatOpenAI
from langchain.graphs import ArangoGraph
from langchain.chains import ArangoGraphQAChain

db = ArangoClient(hosts="localhost:8529").db(name="_system", username="root", password="", verify=True)

graph = ArangoGraph(db)

chain = ArangoGraphQAChain.from_llm(ChatOpenAI(temperature=0), graph=graph)

chain.run("Is Ned Stark alive?")
```

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-07-24 15:16:52 -07:00
Adilkhan Sarsen
3e7d2a1b64 SelfQuery support for deeplake (#7888)
Added SelfQuery support for Deeplake
2023-07-24 14:22:33 -07:00
Leonid Ganeline
c580c81cca docstrings experimental (#7969)
- added/changed docstrings for `experimental`
- added/changed docstrings for different artifacts
@baskaryan
2023-07-24 14:21:48 -07:00
Leonid Ganeline
3eb4112a1f Refactored example_generator (#8099)
Refactored `example_generator.py`. The same as #7961.
`example_generator.py` is in the root code folder. This creates the
`langchain.example_generator: Example Generator` group on the API
Reference navigation ToC, on the same level as `Chains` and `Agents`,
which is not correct.

Refactoring:
- moved `example_generator.py` content into
`chains/example_generator.py` (not in `utils` because the
`example_generator` has dependencies on other LangChain classes. It also
doesn't work for moving into `utilities/`)
- added the backwards compatibility ref in the original
`example_generator.py`

@hwchase17
2023-07-24 13:36:44 -07:00
Juan José Torres
1cc7d4c9eb Update SageMaker Endpoint Embeddings docs to be up to date with current requirements (#8103)
- **Description:** Simple change of the class that ContentHandler
inherits from. To create an object of type SagemakerEndpointEmbeddings,
the content_handler property must now be of type EmbeddingsContentHandler
instead of ContentHandlerBase.
  - **Twitter handle:** @Juanjo_Torres11

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-07-24 13:35:06 -07:00
Leonid Ganeline
7cbe28ba9b Refactored input (#8202)
Refactored `input.py`. The same as
https://github.com/langchain-ai/langchain/pull/7961 #8098 #8099
input.py is in the root code folder. This creates the `langchain.input:
Input` group on the API Reference navigation ToC, on the same level as
Chains and Agents which is incorrect.

Refactoring:

- copied input.py file into utils/input.py
- added a backwards-compatibility ref in the original input.py
- changed several imports to the new ref

@hwchase17, @baskaryan
2023-07-24 13:10:03 -07:00
Monty Evans
72eb4fa4e8 Change WebBaseLoader metadata parsing to set missing metadata to descriptive string instead of None (#8175)
Solves #8174 & #3542

Co-authored-by: mevans <mevans@palantir.com>
2023-07-24 12:17:49 -07:00
Bagatur
1a7d8667c8 Bagatur/gateway chat (#8198)
Signed-off-by: dbczumar <corey.zumar@databricks.com>
Co-authored-by: dbczumar <corey.zumar@databricks.com>
2023-07-24 12:17:00 -07:00
Ettore Di Giacinto
ae28568e2a Add embeddings for LocalAI (#8134)
Description:

This PR adds embeddings for LocalAI
(https://github.com/go-skynet/LocalAI), a self-hosted OpenAI drop-in
replacement. As LocalAI can re-use OpenAI clients, it mostly follows the
lines of the OpenAI embeddings; however, when embedding documents, it
just sends strings instead of tokens, since sending tokens is
best-effort depending on the model being used in LocalAI. Sending tokens
is also tricky, as token ids can mismatch with the model, so it's safer
to just send strings in this case.
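A hypothetical usage sketch (class and parameter names mirror the OpenAI-style embeddings this PR follows; the endpoint is a placeholder):

```python
from langchain.embeddings import LocalAIEmbeddings

embeddings = LocalAIEmbeddings(
    openai_api_base="http://localhost:8080",  # local LocalAI endpoint
    model="text-embedding-ada-002",
)
vector = embeddings.embed_query("Hello, LocalAI!")
```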

Partly related to: https://github.com/hwchase17/langchain/issues/5256

Dependencies: No new dependencies

Twitter: @mudler_it
---------

Signed-off-by: mudler <mudler@localai.io>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-07-24 12:16:49 -07:00
Mike Nitsenko
d983046f90 Extend Cube Semantic Loader functionality (#8186)
**PR Description:**

This pull request introduces several enhancements and new features to
the `CubeSemanticLoader`. The changes include the following:

1. Added imports for the `json` and `time` modules.
2. Added new constructor parameters: `load_dimension_values`,
`dimension_values_limit`, `dimension_values_max_retries`, and
`dimension_values_retry_delay`.
3. Updated the class documentation with descriptions for the new
constructor parameters.
4. Added a new private method `_get_dimension_values()` to retrieve
dimension values from Cube's REST API.
5. Modified the `load()` method to load dimension values for string
dimensions if `load_dimension_values` is set to `True`.
6. Updated the API endpoint in the `load()` method from the base URL to
the metadata endpoint.
7. Refactored the code to retrieve metadata from the response JSON.
8. Added the `column_member_type` field to the metadata dictionary to
indicate if a column is a measure or a dimension.
9. Added the `column_values` field to the metadata dictionary to store
the dimension values retrieved from Cube's API.
10. Modified the `page_content` construction to include the column title
and description instead of the table name, column name, data type,
title, and description.

These changes improve the functionality and flexibility of the
`CubeSemanticLoader` class by allowing the loading of dimension values
and providing more detailed metadata for each document.
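A hypothetical call wiring up the new constructor parameters listed above (URL and token are placeholders):

```python
from langchain.document_loaders import CubeSemanticLoader

loader = CubeSemanticLoader(
    cube_api_url="https://example.cubecloud.dev/cubejs-api/v1/meta",  # placeholder
    cube_api_token="CUBE_API_TOKEN",  # placeholder
    load_dimension_values=True,       # fetch values for string dimensions
    dimension_values_limit=10000,
    dimension_values_max_retries=10,
    dimension_values_retry_delay=3,
)
docs = loader.load()
```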

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-07-24 12:11:58 -07:00
Bagatur
82b8d8596c bump lc241 exp3 (#8193) 2023-07-24 11:52:44 -07:00
Leonid Ganeline
848454d1e7 Refactored formatting (#8191)
Refactored `formatting.py`. The same as
https://github.com/langchain-ai/langchain/pull/7961 #8098 #8099
formatting.py is in the root code folder. This creates the
`langchain.formatting: Formatting` group on the API Reference navigation
ToC, on the same level as Chains and Agents which is incorrect.

Refactoring:

- moved formatting.py content into utils/formatting.py
- I did not add the backwards compatibility ref in the original
formatting.py. It seems unnecessary.
---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-07-24 11:34:15 -07:00
Bagatur
4928f7a9f5 undo bump (#8192) 2023-07-24 11:32:17 -07:00
Bagatur
14aa27b5f4 redirect (#8189) 2023-07-24 10:45:12 -07:00
Bagatur
e7d64f8b15 Bagatur/vercel test 3 (#8188) 2023-07-24 10:11:54 -07:00
Leonid Ganeline
120cdf813d docstrings memory (#8018)
docstrings `memory`:
- added module summary
- added missing docstrings
- updated docstrings into a consistent format
@baskaryan
2023-07-24 10:05:36 -07:00
Bagatur
026269bfa9 redirects (#8183) 2023-07-24 08:32:49 -07:00
Bagatur
d5689d58ab Bagatur/bump 241 (#8182) 2023-07-24 07:47:40 -07:00
Harrison Chase
3caccf304c Harrison/hugginggpt (#8162)
Co-authored-by: Yongliang Shen <withsyl@163.com>
2023-07-24 07:36:24 -07:00
rajib
f3908627ed changed to mlflow-ai-gateway in llms/__init__.py (#8114)
- Description: In llms/__init__.py, the key name for
mlflowaigateway is wrong. It should be mlflow-ai-gateway
  - Issue: NA
  - Dependencies: NA
  - Tag maintainer: @hwchase17, @baskaryan
  - Twitter handle: na

Without this fix, when we run the code for mlflowaigateway, we get the
error below:

ValueError: Loading mlflow-ai-gateway LLM not supported

---------

Co-authored-by: rajib76 <rajib76@yahoo.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-07-23 23:30:46 -07:00
Bagatur
c8c8635dc9 mv module integrations docs (#8101) 2023-07-23 23:23:16 -07:00
Adarsh Shirawalmath
8ea840432f Generalize Comment on Streaming Support for LLM Implementations and add examples (#8115)
The example provided demonstrates the usage of the
HuggingFaceTextGenInference implementation with streaming enabled.
2023-07-23 22:59:59 -07:00
Gordon Clark
80b3ec5869 GitHub toolkit improvements (#8121)
Fixes an issue with the GitHub tool where the API returned special
objects but the tool was expecting dictionaries.

Also added proper docstrings to the GitHubAPIWrapper methods and a (very
basic) integration test.

Maintainer responsibilities:
  - Agents / Tools / Toolkits: @hinthornw

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-07-23 20:17:53 -07:00
Harrison Chase
33fd6184ba beef up getting started (#8139)
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-07-23 19:57:43 -07:00
Lawrence Lim
fa8906a9b7 fix typo: Entity Summary Memory documentation (#8145)
Fixed a small typo I came across in the Memory documentation.
2023-07-23 19:36:50 -07:00
shibuiwilliam
8f5000146c add faiss test for score threshold (#8143)
# What
- Add faiss vector search test for score threshold
- Fix failing faiss vector search test; filtering with a list value was
wrong.

2023-07-23 19:36:38 -07:00
Nolan
7686dabd36 Unbreak devcontainer (#8154)
Codespaces and the devcontainer were broken by the [repo
restructure](https://github.com/langchain-ai/langchain/discussions/8043).



- Description: Add libs/langchain to container so it can be built
without error.
  - Issue: -
  - Dependencies: -
  - Tag maintainer: @hwchase17 @baskaryan 
  - Twitter handle: @finnless

The failed build log says:
```
#10 [langchain-dev-dependencies 2/2] RUN poetry install --no-interaction --no-ansi --with dev,test,docs
#10 sha256:e850ee99fc966158bfd2d85e82b7c57244f47ecbb1462e75bd83b981a56a1929
2023-07-23 23:30:33.692Z: #10 0.827 
#10 0.827 Directory libs/langchain does not exist
2023-07-23 23:30:33.738Z: #10 ERROR: executor failed running [/bin/sh -c poetry install --no-interaction --no-ansi --with dev,test,docs]: exit code: 1
```

The new pyproject.toml imports from libs/langchain:

77bf75c236/pyproject.toml (L14-L16)

But libs/langchain is never added to the dev.Dockerfile:


77bf75c236/libs/langchain/dev.Dockerfile (L37-L39)
2023-07-23 19:33:47 -07:00
Fielding Johnston
fb62f2be70 nit: small typo in evaluation module docs (#8155)
Hopefully, this doesn't come across as nitpicky! That isn't the
intention. I only noticed it, because I enjoy reading the documentation
and when I hit a mental road bump it is usually due to a missing word or
something =)

@baskaryan
2023-07-23 18:25:14 -07:00
Harrison Chase
9205919ad2 actually use input key (#8136) 2023-07-23 18:02:45 -07:00
Leonid Ganeline
670304a8b3 simplified nmspace (#8152)
recreated #7894 (it is easier to recreate than to resolve conflicts)
A small refactoring to improve the API Reference Agents table.
@baskaryan
2023-07-23 18:02:20 -07:00
William FH
c5b50be225 Function calling logging fixup (#8153)
Fix bad overwriting of the "functions" arg in invocation params.
Clean up precedence in the dict.
Clean up some inappropriate types (mapping should be dict).


Example:
https://dev.smith.langchain.com/public/9a7a6817-1679-49d8-8775-c13916975aae/r


![image](https://github.com/langchain-ai/langchain/assets/13333726/94cd0775-b6ef-40c3-9e5a-3ab65e466ab9)
2023-07-23 18:01:33 -07:00
SlapDrone
961a0e200f Implement AgentExecutorIterator (#6929)
- Description: Implements a `.iter()` method for the `AgentExecutor`
class. This allows hooking into and intercepting intermediate agent
steps (see the sketch after this list).
  - Issue: #6925 
  - Dependencies: None
  - Tag maintainer: @vowelparrot @agola11 
  - Twitter handle: @SlapDron3 @lacicocodes
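A hedged usage sketch (assumes an already-constructed `agent_executor`; the step payload shape is illustrative):

```python
# Step through the agent rather than calling it in one shot.
for step in agent_executor.iter(inputs="What is 2 + 2?"):
    if output := step.get("intermediate_step"):  # assumed key name
        action, observation = output[0]
        print(f"tool={action.tool} observation={observation!r}")
        # Inspect state, stop early, or inject logic between steps here.
```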

---------

Co-authored-by: Lacico <Lacicocodes@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-07-23 18:00:22 -07:00
Harrison Chase
77bf75c236 bump experimental to 002 (#8150) 2023-07-23 09:22:39 -07:00
Harrison Chase
e46126eac6 add llamaapi (#8140) 2023-07-23 09:16:16 -07:00
Harrison Chase
f0eb5db670 Harrison/agent intro (#8138)
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-07-22 22:14:59 -07:00
Harrison Chase
cbf2fc8af8 prompt ergonomics (#7799) 2023-07-22 14:19:17 -07:00
Samuel Berthe
d81d6e874f doc(sqldatabasechain): use views when jsonb column description is not available (#8133)
I think the PR diff is self-explanatory ;)

@baskaryan
2023-07-22 11:30:04 -07:00
Harrison Chase
506b21bfc2 Update MIGRATE.md 2023-07-22 09:11:43 -07:00
Harrison Chase
9854d9e5cb cr 2023-07-22 09:07:26 -07:00
Harrison Chase
9f3073d418 bump versions (#8129) 2023-07-22 08:46:37 -07:00
Harrison Chase
86946a47a8 Harrison/add back in experimental (#8128) 2023-07-22 08:27:29 -07:00
Karthik Raja A
8b08687fc4 MultiOn client toolkit (#8110)
Addition of the MultiOn Client Agent Toolkit.
Dependencies: multion pip package
This PR consists of the following:
- MultiOn utility, tools, and integration with an agent
- a sample Jupyter notebook
Request @hwchase17, @hinthornw

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-07-22 08:19:01 -07:00
Harrison Chase
aa0e69bc98 Harrison/official pre release (#8106) 2023-07-21 18:44:32 -07:00
Philip Kiely - Baseten
95bcf68802 add kwargs support for Baseten models (#8091)
This bugfix PR adds kwargs support to Baseten model invocations so that
e.g. the following script works properly:

```python
chatgpt_chain = LLMChain(
    llm=Baseten(model="MODEL_ID"),
    prompt=prompt,
    verbose=False,
    memory=ConversationBufferWindowMemory(k=2),
    llm_kwargs={"max_length": 4096}
)
```
2023-07-21 13:56:27 -07:00
Harrison Chase
8dcabd9205 bump releases rc0 (#8097) 2023-07-21 13:54:57 -07:00
Bagatur
58f65fcf12 use top nav docs (#8090) 2023-07-21 13:52:03 -07:00
Harrison Chase
0faba034b1 add experimental release action (#8096) 2023-07-21 13:38:35 -07:00
Harrison Chase
d353d668e4 remove CVEs (#8092)
This PR aims to move all code with CVEs into `langchain.experimental`.
Note that we are NOT yet removing from the core `langchain` package - we
will give people a week to migrate here.

See MIGRATE.md for how to migrate

Zero changes to functionality

Vulnerabilities this addresses:

PALChain:
- https://security.snyk.io/vuln/SNYK-PYTHON-LANGCHAIN-5752409
- https://security.snyk.io/vuln/SNYK-PYTHON-LANGCHAIN-5759265

SQLDatabaseChain
- https://security.snyk.io/vuln/SNYK-PYTHON-LANGCHAIN-5759268

`load_prompt` (Python files only)
- https://security.snyk.io/vuln/SNYK-PYTHON-LANGCHAIN-5725807
2023-07-21 13:32:39 -07:00
Bagatur
08c658d3f8 fix api ref (#8083) 2023-07-21 12:37:21 -07:00
Harrison Chase
344cbd9c90 update contributor guide (#8088) 2023-07-21 12:01:05 -07:00
Harrison Chase
17c06ee456 cr 2023-07-21 10:48:00 -07:00
Harrison Chase
da04760de1 Harrison/move experimental (#8084) 2023-07-21 10:36:28 -07:00
Harrison Chase
f35db9f43e (WIP) set up experimental (#7959) 2023-07-21 09:20:24 -07:00
c-bata
623b321e75 Fix allowed_search_types in VectorStoreRetriever (#8064)
Unexpectedly changed at
6792a3557d


I guess `allowed_search_types` was unexpectedly changed in
6792a3557d,
so we cannot specify `similarity_score_threshold` here.

```python
class VectorStoreRetriever(BaseRetriever):
    ...
    allowed_search_types: ClassVar[Collection[str]] = (
        "similarity",
        "similarityatscore_threshold",
        "mmr",
    )

    @root_validator()
    def validate_search_type(cls, values: Dict) -> Dict:
        """Validate search type."""
        search_type = values["search_type"]
        if search_type not in cls.allowed_search_types:
            raise ValueError(...)
        if search_type == "similarity_score_threshold":
            ... # UNREACHABLE CODE
```

VectorStores Maintainers: @rlancemartin @eyurtsev
2023-07-21 08:39:36 -07:00
Bagatur
95e369b38d bump 239 (#8077) 2023-07-21 07:31:14 -07:00
William FH
c38965fcba Add embedding and vectorstore provider info as tags (#8027)
Example:
https://smith.langchain.com/public/bcd3714d-abba-4790-81c8-9b5718535867/r


The vectorstore implementations aren't super standardized yet, so just
adding an optional embeddings property to pass in.
2023-07-20 22:40:01 -07:00
Mohammad Mohtashim
355b7d8b86 Getting SQL cmd directly from SQLDatabase Chain. (#7940)
- Description: Get the SQL command generated by SQLDatabaseChain
directly, without executing it in the DB engine.
- Issue: #4853
- Tag maintainer: @hinthornw, @baskaryan

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-07-20 22:36:55 -07:00
Lance Martin
5a084e1b20 Async HTML loader and HTML2Text transformer (#8036)
New HTML loader that asynchronously loads a list of URLs.

New transformer using [HTML2Text](https://github.com/Alir3z4/html2text/)
to convert HTML to clean, easy-to-read plain ASCII text (valid Markdown).
2023-07-20 22:30:59 -07:00
Wey Gu
cf60cff1ef feat: Add with_history option for chatglm (#8048)
In certain 0-shot scenarios, the existing stateful language model can
unintentionally send/accumulate the .history.

This commit adds the "with_history" option to chatglm, allowing users to
control the behavior of .history and prevent unintended accumulation.

Possible reviewers @hwchase17 @baskaryan @mlot

Refer to discussion over this thread:
https://twitter.com/wey_gu/status/1681996149543276545?s=20
2023-07-20 22:25:37 -07:00
Harrison Chase
1f3b987860 Harrison/GitHub toolkit (#8047)
Co-authored-by: Trevor Dobbertin <trevordobbertin@gmail.com>
2023-07-20 22:24:55 -07:00
Leonid Ganeline
ae8bc9e830 Refactored sql_database (#7945)
`sql_database.py` is unnecessarily placed in the root code folder.
Similar code is usually placed in `utilities/`.
As a byproduct of this placement, sql_database is [placed at the top
level of classes in the API
Reference](https://api.python.langchain.com/en/latest/api_reference.html#module-langchain.sql_database),
which is confusing and incorrect.


- moved the `sql_database.py` from the root code folder to the
`utilities/`

@baskaryan

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-07-20 22:17:55 -07:00
William FH
dc9d6cadab Dedup methods (#8049) 2023-07-20 22:13:22 -07:00
Harrison Chase
f99f497b2c Harrison/predibase (#8046)
Co-authored-by: Abhay Malik <32989166+Abhay-765@users.noreply.github.com>
2023-07-20 19:26:50 -07:00
Jacob Lee
56c6ab1715 Fix bad docs sidebar header (#7966)
Quick fix for:

<img width="283" alt="Screenshot 2023-07-19 at 2 49 44 PM"
src="https://github.com/hwchase17/langchain/assets/6952323/91e4868c-b75e-413d-9f8f-d34762abf164">

CC @baskaryan
2023-07-20 19:06:57 -07:00
Wian Stipp
ebc5ff2948 HuggingFaceTextGenInference bug fix: Multiple values for keyword argument (#8044)
Fixed the bug causing: `TypeError: generate() got multiple values for
keyword argument 'stop_sequences'`

```python
res = await self.async_client.generate(
                prompt,
                **self._default_params,
                stop_sequences=stop,
                **kwargs,
            )
```
The above throws an error because stop_sequences is also in
self._default_params.
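The fix amounts to merging everything into a single kwargs dict so the key is passed exactly once; a sketch of the idea (not the exact patch):

```python
# Explicit stop/kwargs override the defaults; stop_sequences appears once.
params = {**self._default_params, "stop_sequences": stop, **kwargs}
res = await self.async_client.generate(prompt, **params)
```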
---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-07-20 19:05:08 -07:00
Kacper Łukawski
ed6a5532ac Implement async support in Qdrant local mode (#8001)
I've extended async API support to local Qdrant mode. It is faked,
but it allows prototyping without spinning up a container. The tests are
improved to test the in-memory case as well.

@baskaryan @rlancemartin @eyurtsev @agola11
2023-07-20 19:04:33 -07:00
Bagatur
7717c24fc4 fix redis cache chat model (#8041)
Redis cache currently stores model outputs as strings. Chat generations
have Messages, which contain more information than just a string. Until
the Redis cache supports fully storing messages, the cache should not
interact with chat generations.
2023-07-20 19:00:05 -07:00
Taqi Jaffri
973593c5c7 Added streaming support to Replicate (#8045)
Streaming support is useful if you are doing long-running completions or
need interactivity e.g. for chat... adding it to replicate, using a
similar pattern to other LLMs that support streaming.

Housekeeping: I ran `make format` and `make lint`, no issues reported in
the files I touched.

I did update the replicate integration test but ran into some issues,
specifically:

1. The original test was failing for me due to the model argument not
being specified... perhaps this test is not regularly run? I fixed it by
adding a call to the lightweight hello world model which should not be
burdensome for replicate infra.
2. I couldn't get the `make integration_tests` command to pass... a lot
of failures in other integration tests due to missing dependencies...
however I did make sure the particular test file I updated does pass, by
running `poetry run pytest
tests/integration_tests/llms/test_replicate.py`

Finally, I am @tjaffri https://twitter.com/tjaffri for feature
announcement tweets... or if you could please tag @docugami
https://twitter.com/docugami we would really appreciate that :-)

Tagging model maintainers @hwchase17  @baskaryan 

Thank for all the awesome work you folks are doing.

---------

Co-authored-by: Taqi Jaffri <tjaffri@docugami.com>
2023-07-20 18:59:54 -07:00
Piyush Jain
31b7ddc12c Neptune graph and openCypher QA Chain (#8035)
## Description
This PR adds a graph class and an openCypher QA chain to work with the
Amazon Neptune database.
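A hypothetical quick start mirroring the other graph QA chains (host is a placeholder; treat the exact constructor arguments as assumptions):

```python
from langchain.chains import NeptuneOpenCypherQAChain
from langchain.chat_models import ChatOpenAI
from langchain.graphs import NeptuneGraph

graph = NeptuneGraph(
    host="my-cluster.cluster-abc123.us-east-1.neptune.amazonaws.com",
    port=8182,
)
chain = NeptuneOpenCypherQAChain.from_llm(ChatOpenAI(temperature=0), graph=graph)
chain.run("How many nodes are in the graph?")
```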

## Dependencies
`requests` which is included in the LangChain dependencies.

## Maintainers for Review
@krlawrence
@baskaryan

### Twitter handle
pjain7
2023-07-20 18:56:47 -07:00
Leonid Ganeline
995220b797 Refactored math_utils (#7961)
`math_utils.py` is in the root code folder. This creates the
`langchain.math_utils: Math Utils` group on the API Reference navigation
ToC, on the same level as `Chains` and `Agents`, which is not correct.

Refactoring:
- created the `utils/` folder
- moved `math_utils.py` to `utils/math.py`
- moved `utils.py` to `utils/utils.py`
- split `utils.py` into `utils.py, env.py, strings.py`
- added module description

@baskaryan
2023-07-20 18:55:43 -07:00
Paolo Picello
5137f40dd6 Update mongodb_atlas.py docstrings (#8033)
Hi all, I just added the "index_name" parameter to the docstrings for
mongodb_atlas.py (it is missing on the [public doc
page](https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.mongodb_atlas.MongoDBAtlasVectorSearch.html#langchain-vectorstores-mongodb-atlas-mongodbatlasvectorsearch)).

Thanks
2023-07-20 17:35:07 -07:00
felixocker
9226fda58b fix: create schema description from URIs and str w/out rdflib warnings (#8025)
- Description: fix to avoid rdflib warnings when concatenating URIs and
strings to create the text snippet for the knowledge graph's schema.
@marioscrock pointed this out in a comment related to #7165
- Issue: None, but the problem was mentioned as a comment in #7165
- Dependencies: None
- Tag maintainer: Related to memory -> @hwchase17, maybe @baskaryan as
it is a fix
2023-07-20 15:55:19 -07:00
Emory Petermann
7239d57a53 Update Golden integration documentation (#8030)
Fixes some typos and cleans up onboarding for Golden, thank you!

@hinthornw
2023-07-20 15:53:44 -07:00
Jonathon Belotti
021bb9be84 Update Modal.com integration docs (#8014)
Hey, I'm a Modal Labs engineer and I'm making this docs update after
getting a user question in [our beta Slack
space](https://join.slack.com/t/modalbetatesters/shared_invite/zt-1xl9gbob8-1QDgUY7_PRPg6dQ49hqEeQ)
about the Langchain integration docs.

🔗 [Modal beta-testers link to docs discussion
thread](https://modalbetatesters.slack.com/archives/C031Z7DBQFL/p1689777700594819?thread_ts=1689775859.855849&cid=C031Z7DBQFL)
2023-07-20 15:53:06 -07:00
Jeffrey Wang
62d0475c29 Add Metaphor new field and reformat docs (#8022)
This PR reformats our python notebook example and also adds a new field
we have.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2023-07-20 15:50:54 -07:00
William FH
e2a99bd169 Different error strings (#8010) 2023-07-20 09:58:25 -07:00
2376 changed files with 46402 additions and 21231 deletions

View File

@@ -2,7 +2,7 @@ version: '3'
services:
langchain:
build:
dockerfile: dev.Dockerfile
dockerfile: libs/langchain/dev.Dockerfile
context: ..
volumes:
# Update this to wherever you want VS Code to mount the folder of your project

View File

@@ -69,6 +69,14 @@ This project uses [Poetry](https://python-poetry.org/) as a dependency manager.
3. Tell Poetry to use the virtualenv python environment (`poetry config virtualenvs.prefer-active-python true`)
4. Continue with the following steps.
There are two separate projects in this repository:
- `langchain`: core langchain code, abstractions, and use cases
- `langchain.experimental`: more experimental code
Each of these has its OWN development environment.
In order to run any of the commands below, please move into their respective directories.
For example, to contribute to `langchain` run `cd libs/langchain` before getting started with the below.
To install requirements:
```bash
@@ -248,6 +256,9 @@ When you run `poetry install`, the `langchain` package is installed as editable
## Documentation
While the code is split between `langchain` and `langchain.experimental`, the documentation is one holistic thing.
This covers how to get started contributing to documentation.
### Contribute Documentation
The docs directory contains Documentation and API Reference.

View File

@@ -52,11 +52,13 @@ runs:
- name: Check Poetry File
shell: bash
working-directory: ${{ inputs.working-directory }}
run: |
poetry check
- name: Check lock file
shell: bash
working-directory: ${{ inputs.working-directory }}
run: |
poetry lock --check

View File

@@ -1,15 +1,21 @@
name: lint
on:
push:
branches: [master]
pull_request:
workflow_call:
inputs:
working-directory:
required: true
type: string
description: "From which folder this pipeline executes"
env:
POETRY_VERSION: "1.4.2"
jobs:
build:
defaults:
run:
working-directory: ${{ inputs.working-directory }}
runs-on: ubuntu-latest
strategy:
matrix:
@@ -31,6 +37,10 @@ jobs:
- name: Install dependencies
run: |
poetry install
- name: Install langchain editable
if: ${{ inputs.working-directory != 'langchain' }}
run: |
pip install -e ../langchain
- name: Analysing the code with our lint
run: |
make lint

View File

@@ -1,13 +1,12 @@
name: release
on:
pull_request:
types:
- closed
branches:
- master
paths:
- 'pyproject.toml'
workflow_call:
inputs:
working-directory:
required: true
type: string
description: "From which folder this pipeline executes"
env:
POETRY_VERSION: "1.4.2"
@@ -18,6 +17,9 @@ jobs:
${{ github.event.pull_request.merged == true }}
&& ${{ contains(github.event.pull_request.labels.*.name, 'release') }}
runs-on: ubuntu-latest
defaults:
run:
working-directory: ${{ inputs.working-directory }}
steps:
- uses: actions/checkout@v3
- name: Install poetry

View File

@@ -1,16 +1,25 @@
name: test
on:
push:
branches: [master]
pull_request:
workflow_dispatch:
workflow_call:
inputs:
working-directory:
required: true
type: string
description: "From which folder this pipeline executes"
test_type:
type: string
description: "Test types to run"
default: '["core", "extended"]'
env:
POETRY_VERSION: "1.4.2"
jobs:
build:
defaults:
run:
working-directory: ${{ inputs.working-directory }}
runs-on: ubuntu-latest
strategy:
matrix:
@@ -19,9 +28,7 @@ jobs:
- "3.9"
- "3.10"
- "3.11"
test_type:
- "core"
- "extended"
test_type: ${{ fromJSON(inputs.test_type) }}
name: Python ${{ matrix.python-version }} ${{ matrix.test_type }}
steps:
- uses: actions/checkout@v3
@@ -29,6 +36,7 @@ jobs:
uses: "./.github/actions/poetry_setup"
with:
python-version: ${{ matrix.python-version }}
working-directory: ${{ inputs.working-directory }}
poetry-version: "1.4.2"
cache-key: ${{ matrix.test_type }}
install-command: |
@@ -39,6 +47,10 @@ jobs:
echo "Running extended tests, installing dependencies with poetry..."
poetry install -E extended_testing
fi
- name: Install langchain editable
if: ${{ inputs.working-directory != 'langchain' }}
run: |
pip install -e ../langchain
- name: Run ${{matrix.test_type}} tests
run: |
if [ "${{ matrix.test_type }}" == "core" ]; then

View File

@@ -20,3 +20,5 @@ jobs:
uses: actions/checkout@v3
- name: Codespell
uses: codespell-project/actions-codespell@v2
with:
skip: guide_imports.json

.github/workflows/langchain_ci.yml vendored Normal file
View File

@@ -0,0 +1,27 @@
---
name: libs/langchain CI
on:
push:
branches: [ master ]
pull_request:
paths:
- '.github/workflows/_lint.yml'
- '.github/workflows/_test.yml'
- '.github/workflows/langchain_ci.yml'
- 'libs/langchain/**'
workflow_dispatch: # Allows to trigger the workflow manually in GitHub UI
jobs:
lint:
uses:
./.github/workflows/_lint.yml
with:
working-directory: libs/langchain
secrets: inherit
test:
uses:
./.github/workflows/_test.yml
with:
working-directory: libs/langchain
secrets: inherit

View File

@@ -0,0 +1,29 @@
---
name: libs/langchain-experimental CI
on:
push:
branches: [ master ]
pull_request:
paths:
- '.github/workflows/_lint.yml'
- '.github/workflows/_test.yml'
- '.github/workflows/langchain_experimental_ci.yml'
- 'libs/langchain/**'
- 'libs/experimental/**'
workflow_dispatch: # Allows to trigger the workflow manually in GitHub UI
jobs:
lint:
uses:
./.github/workflows/_lint.yml
with:
working-directory: libs/experimental
secrets: inherit
test:
uses:
./.github/workflows/_test.yml
with:
working-directory: libs/experimental
test_type: '["core"]'
secrets: inherit

View File

@@ -0,0 +1,20 @@
---
name: libs/langchain-experimental Release
on:
pull_request:
types:
- closed
branches:
- master
paths:
- 'libs/experimental/pyproject.toml'
workflow_dispatch: # Allows to trigger the workflow manually in GitHub UI
jobs:
release:
uses:
./.github/workflows/_release.yml
with:
working-directory: libs/experimental
secrets: inherit

.github/workflows/langchain_release.yml vendored Normal file
View File

@@ -0,0 +1,20 @@
---
name: libs/langchain Release
on:
pull_request:
types:
- closed
branches:
- master
paths:
- 'libs/langchain/pyproject.toml'
workflow_dispatch: # Allows to trigger the workflow manually in GitHub UI
jobs:
release:
uses:
./.github/workflows/_release.yml
with:
working-directory: libs/langchain
secrets: inherit

View File

@@ -24,6 +24,6 @@ sphinx:
# Optionally declare the Python requirements required to build your docs
python:
install:
- requirements: docs/requirements.txt
- requirements: docs/api_reference/requirements.txt
- method: pip
path: .

MIGRATE.md Normal file
View File

@@ -0,0 +1,57 @@
# Migrating to `langchain_experimental`
We are moving any experimental components of LangChain, or components with vulnerability issues, into `langchain_experimental`.
This guide covers how to migrate.
## Installation
Previously:
`pip install -U langchain`
Now (only if you want to access things in experimental):
`pip install -U langchain langchain_experimental`
## Things in `langchain.experimental`
Previously:
`from langchain.experimental import ...`
Now:
`from langchain_experimental import ...`
## PALChain
Previously:
`from langchain.chains import PALChain`
Now:
`from langchain_experimental.pal_chain import PALChain`
## SQLDatabaseChain
Previously:
`from langchain.chains import SQLDatabaseChain`
Now:
`from langchain_experimental.sql import SQLDatabaseChain`
## `load_prompt` for Python files
Note: this only applies if you want to load Python files as prompts.
If you want to load json/yaml files, no change is needed.
Previously:
`from langchain.prompts import load_prompt`
Now:
`from langchain_experimental.prompts import load_prompt`
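As a concrete sketch, migrating a PALChain import might look like this (assumes `OPENAI_API_KEY` is set; `from_math_prompt` is shown as one common entry point):
```python
# Before: from langchain.chains import PALChain
# After: PALChain lives in langchain_experimental
from langchain import OpenAI
from langchain_experimental.pal_chain import PALChain

llm = OpenAI(temperature=0)
pal_chain = PALChain.from_math_prompt(llm, verbose=True)
pal_chain.run("If I have 3 apples and buy 2 more, how many do I have?")
```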

View File

@@ -1,18 +1,8 @@
.PHONY: all clean docs_build docs_clean docs_linkcheck api_docs_build api_docs_clean api_docs_linkcheck format lint test tests test_watch integration_tests docker_tests help extended_tests
.PHONY: all clean docs_build docs_clean docs_linkcheck api_docs_build api_docs_clean api_docs_linkcheck
# Default target executed when no arguments are given to make.
all: help
######################
# TESTING AND COVERAGE
######################
# Run unit tests and generate a coverage report.
coverage:
poetry run pytest --cov \
--cov-config=.coveragerc \
--cov-report xml \
--cov-report term-missing:skip-covered
######################
# DOCUMENTATION
@@ -41,46 +31,6 @@ api_docs_clean:
api_docs_linkcheck:
poetry run linkchecker docs/api_reference/_build/html/index.html
# Define a variable for the test file path.
TEST_FILE ?= tests/unit_tests/
test:
poetry run pytest --disable-socket --allow-unix-socket $(TEST_FILE)
tests:
poetry run pytest --disable-socket --allow-unix-socket $(TEST_FILE)
extended_tests:
poetry run pytest --disable-socket --allow-unix-socket --only-extended tests/unit_tests
test_watch:
poetry run ptw --now . -- tests/unit_tests
integration_tests:
poetry run pytest tests/integration_tests
docker_tests:
docker build -t my-langchain-image:test .
docker run --rm my-langchain-image:test
######################
# LINTING AND FORMATTING
######################
# Define a variable for Python and notebook files.
PYTHON_FILES=.
lint format: PYTHON_FILES=.
lint_diff format_diff: PYTHON_FILES=$(shell git diff --name-only --diff-filter=d master | grep -E '\.py$$|\.ipynb$$')
lint lint_diff:
poetry run mypy $(PYTHON_FILES)
poetry run black $(PYTHON_FILES) --check
poetry run ruff .
format format_diff:
poetry run black $(PYTHON_FILES)
poetry run ruff --select I --fix $(PYTHON_FILES)
spell_check:
poetry run codespell --toml pyproject.toml
@@ -97,12 +47,3 @@ help:
@echo 'docs_build - build the documentation'
@echo 'docs_clean - clean the documentation build artifacts'
@echo 'docs_linkcheck - run linkchecker on the documentation'
@echo 'format - run code formatters'
@echo 'lint - run linters'
@echo 'test - run unit tests'
@echo 'tests - run unit tests'
@echo 'test TEST_FILE=<test_file> - run all tests in file'
@echo 'extended_tests - run only extended unit tests'
@echo 'test_watch - run unit tests in watch mode'
@echo 'integration_tests - run integration tests'
@echo 'docker_tests - run unit tests in docker'

View File

@@ -3,8 +3,8 @@
⚡ Building applications with LLMs through composability ⚡
[![Release Notes](https://img.shields.io/github/release/hwchase17/langchain)](https://github.com/hwchase17/langchain/releases)
[![lint](https://github.com/hwchase17/langchain/actions/workflows/lint.yml/badge.svg)](https://github.com/hwchase17/langchain/actions/workflows/lint.yml)
[![test](https://github.com/hwchase17/langchain/actions/workflows/test.yml/badge.svg)](https://github.com/hwchase17/langchain/actions/workflows/test.yml)
[![CI](https://github.com/hwchase17/langchain/actions/workflows/langchain_ci.yml/badge.svg)](https://github.com/hwchase17/langchain/actions/workflows/langchain_ci.yml)
[![Experimental CI](https://github.com/hwchase17/langchain/actions/workflows/langchain_experimental_ci.yml/badge.svg)](https://github.com/hwchase17/langchain/actions/workflows/langchain_experimental_ci.yml)
[![Downloads](https://static.pepy.tech/badge/langchain/month)](https://pepy.tech/project/langchain)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/langchainai.svg?style=social&label=Follow%20%40LangChainAI)](https://twitter.com/langchainai)
@@ -19,7 +19,15 @@
Looking for the JS/TS version? Check out [LangChain.js](https://github.com/hwchase17/langchainjs).
**Production Support:** As you move your LangChains into production, we'd love to offer more comprehensive support.
Please fill out [this form](https://forms.gle/57d8AmXBYp8PP8tZA) and we'll set up a dedicated support Slack channel.
Please fill out [this form](https://6w1pwbss0py.typeform.com/to/rrbrdTH2) and we'll set up a dedicated support Slack channel.
## 🚨 Breaking Changes for select chains (SQLDatabase) on 7/28
In an effort to make `langchain` leaner and safer, we are moving select chains to `langchain_experimental`.
This migration has already started, but we are remaining backwards compatible until 7/28.
On that date, we will remove functionality from `langchain`.
Read more about the motivation and the progress [here](https://github.com/hwchase17/langchain/discussions/8043).
Read how to migrate your code [here](MIGRATE.md).
## Quick Install

View File

@@ -7,19 +7,66 @@
# -- Path setup --------------------------------------------------------------
import json
import os
import sys
from pathlib import Path
import toml
from docutils import nodes
from sphinx.util.docutils import SphinxDirective
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
import os
import sys
import toml
_DIR = Path(__file__).parent.absolute()
sys.path.insert(0, os.path.abspath("."))
sys.path.insert(0, os.path.abspath("../../libs/langchain"))
with open("../../pyproject.toml") as f:
with (_DIR.parents[1] / "libs" / "langchain" / "pyproject.toml").open("r") as f:
data = toml.load(f)
with (_DIR / "guide_imports.json").open("r") as f:
imported_classes = json.load(f)
class ExampleLinksDirective(SphinxDirective):
"""Directive to generate a list of links to examples.
We have a script that extracts links to API reference docs
from our notebook examples. This directive uses that information
to backlink to the examples from the API reference docs."""
has_content = False
required_arguments = 1
def run(self):
"""Run the directive.
Called any time :example_links:`ClassName` is used
in the template *.rst files."""
class_or_func_name = self.arguments[0]
links = imported_classes.get(class_or_func_name, {})
list_node = nodes.bullet_list()
for doc_name, link in links.items():
item_node = nodes.list_item()
para_node = nodes.paragraph()
link_node = nodes.reference()
link_node["refuri"] = link
link_node.append(nodes.Text(doc_name))
para_node.append(link_node)
item_node.append(para_node)
list_node.append(item_node)
if list_node.children:
title_node = nodes.title()
title_node.append(nodes.Text(f"Examples using {class_or_func_name}"))
return [title_node, list_node]
return [list_node]
def setup(app):
app.add_directive("example_links", ExampleLinksDirective)
# -- Project information -----------------------------------------------------

View File

@@ -4,7 +4,7 @@ import re
from pathlib import Path
ROOT_DIR = Path(__file__).parents[2].absolute()
PKG_DIR = ROOT_DIR / "langchain"
PKG_DIR = ROOT_DIR / "libs" / "langchain" / "langchain"
WRITE_FILE = Path(__file__).parent / "api_reference.rst"
@@ -40,11 +40,7 @@ API Reference
functions = _members["functions"]
if not (classes or functions):
continue
module_title = module.replace("_", " ").title()
if module_title == "Llms":
module_title = "LLMs"
section = f":mod:`langchain.{module}`: {module_title}"
section = f":mod:`langchain.{module}`"
full_doc += f"""\
{section}
{'=' * (len(section) + 1)}
@@ -78,6 +74,7 @@ Functions
.. autosummary::
:toctree: {module}
:template: function.rst
{fstring}

File diff suppressed because one or more lines are too long

View File

@@ -1,9 +0,0 @@
Evaluation
=======================
LangChain has a number of convenient evaluation chains you can use off the shelf to grade your models' outputs.
.. automodule:: langchain.evaluation
:members:
:undoc-members:
:inherited-members:

View File

@@ -1,3 +1,4 @@
-e libs/langchain
autodoc_pydantic==1.8.0
myst_parser
nbsphinx==0.8.9
@@ -9,6 +10,4 @@ sphinx-panels
toml
myst_nb
sphinx_copybutton
pydata-sphinx-theme==0.13.1
nbdoc
urllib3<2
pydata-sphinx-theme==0.13.1

View File

@@ -26,3 +26,5 @@
{%- endfor %}
{% endif %}
{% endblock %}
.. example_links:: {{ objname }}

View File

@@ -0,0 +1,8 @@
:mod:`{{module}}`.{{objname}}
{{ underline }}==============
.. currentmodule:: {{ module }}
.. autofunction:: {{ objname }}
.. example_links:: {{ objname }}

View File

@@ -745,6 +745,11 @@ span.descname {
background-color: transparent;
padding: 0;
font-family: monospace;
font-size: 1.2rem;
}
em.property {
font-weight: normal;
}
span.descclassname {

View File

@@ -1,10 +0,0 @@
---
sidebar_position: 0
---
# Integrations
Visit the [Integrations Hub](https://integrations.langchain.com) to further explore, upvote and request integrations across key LangChain components.
import DocCardList from "@theme/DocCardList";
<DocCardList />

View File

@@ -51,7 +51,7 @@ Walkthroughs and best-practices for common end-to-end use cases, like:
Learn best practices for developing with LangChain.
### [Ecosystem](/docs/ecosystem/)
LangChain is part of a rich ecosystem of tools that integrate with our framework and build on top of it. Check out our growing list of [integrations](/docs/ecosystem/integrations/) and [dependent repos](/docs/ecosystem/dependents.html).
LangChain is part of a rich ecosystem of tools that integrate with our framework and build on top of it. Check out our growing list of [integrations](/docs/integrations/) and [dependent repos](/docs/ecosystem/dependents).
### [Additional resources](/docs/additional_resources/)
Our community is full of prolific developers, creative builders, and fantastic teachers. Check out [YouTube tutorials](/docs/additional_resources/youtube.html) for great tutorials from folks in the community, and [Gallery](https://github.com/kyrolabs/awesome-langchain) for a list of awesome LangChain projects, compiled by the folks at [KyroLabs](https://kyrolabs.com).

View File

@@ -22,28 +22,74 @@ import OpenAISetup from "@snippets/get_started/quickstart/openai_setup.mdx"
## Building an application
Now we can start building our language model application. LangChain provides many modules that can be used to build language model applications. Modules can be used as stand-alones in simple applications and they can be combined for more complex use cases.
Now we can start building our language model application. LangChain provides many modules that can be used to build language model applications.
Modules can be used as stand-alones in simple applications and they can be combined for more complex use cases.
The core building block of LangChain applications is the LLMChain.
This combines three things:
- LLM: The language model is the core reasoning engine here. In order to work with LangChain, you need to understand the different types of language models and how to work with them.
- Prompt Templates: This provides instructions to the language model. This controls what the language model outputs, so understanding how to construct prompts and different prompting strategies is crucial.
- Output Parsers: These translate the raw response from the LLM to a more workable format, making it easy to use the output downstream.
In this getting started guide we will cover those three components by themselves, and then cover the LLMChain which combines all of them.
Understanding these concepts will set you up well for being able to use and customize LangChain applications.
Most LangChain applications allow you to configure the LLM and/or the prompt used, so knowing how to take advantage of this will be a big enabler.
## LLMs
#### Get predictions from a language model
The basic building block of LangChain is the LLM, which takes in text and generates more text.
There are two types of language models, which in LangChain are called:
As an example, suppose we're building an application that generates a company name based on a company description. In order to do this, we need to initialize an OpenAI model wrapper. In this case, since we want the outputs to be MORE random, we'll initialize our model with a HIGH temperature.
- LLMs: this is a language model which takes a string as input and returns a string
- ChatModels: this is a language model which takes a list of messages as input and returns a message
import LLM from "@snippets/get_started/quickstart/llm.mdx"
The input/output for LLMs is simple and easy to understand - a string.
But what about ChatModels? The input there is a list of `ChatMessage`s, and the output is a single `ChatMessage`.
A `ChatMessage` has two required components:
<LLM/>
- `content`: This is the content of the message.
- `role`: This is the role of the entity the `ChatMessage` is coming from.
## Chat models
LangChain provides several objects to easily distinguish between different roles:
Chat models are a variation on language models. While chat models use language models under the hood, the interface they expose is a bit different: rather than expose a "text in, text out" API, they expose an interface where "chat messages" are the inputs and outputs.
- `HumanMessage`: A `ChatMessage` coming from a human/user.
- `AIMessage`: A `ChatMessage` coming from an AI/assistant.
- `SystemMessage`: A `ChatMessage` coming from the system.
- `FunctionMessage`: A `ChatMessage` coming from a function call.
You can get chat completions by passing one or more messages to the chat model. The response will be a message. The types of messages currently supported in LangChain are `AIMessage`, `HumanMessage`, `SystemMessage`, and `ChatMessage` -- `ChatMessage` takes in an arbitrary role parameter. Most of the time, you'll just be dealing with `HumanMessage`, `AIMessage`, and `SystemMessage`.
If none of those roles sound right, there is also a `ChatMessage` class where you can specify the role manually.
For more information on how to use these different messages most effectively, see our prompting guide.
import ChatModel from "@snippets/get_started/quickstart/chat_model.mdx"
LangChain exposes a standard interface for both, but it's useful to understand this difference in order to construct prompts for a given language model.
The standard interface that LangChain exposes has two methods:
- `predict`: Takes in a string, returns a string
- `predict_messages`: Takes in a list of messages, returns a message.
Let's see how to work with these different types of models and these different types of inputs.
First, let's import an LLM and a ChatModel.
import ImportLLMs from "@snippets/get_started/quickstart/import_llms.mdx"
<ImportLLMs/>
The `OpenAI` and `ChatOpenAI` objects are basically just configuration objects.
You can initialize them with parameters like `temperature` and others, and pass them around.
Next, let's use the `predict` method to run over a string input.
import InputString from "@snippets/get_started/quickstart/input_string.mdx"
<InputString/>
Finally, let's use the `predict_messages` method to run over a list of messages.
import InputMessages from "@snippets/get_started/quickstart/input_messages.mdx"
<InputMessages/>
For both of these methods, you can also pass in parameters as keyword arguments.
For example, you could pass in `temperature=0` to override the temperature the object was configured with.
Whatever values are passed in at runtime will always override what the object was configured with.
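As a minimal sketch of these two methods (assumes `OPENAI_API_KEY` is set; the example strings are illustrative):
```python
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
from langchain.schema import HumanMessage

llm = OpenAI()             # LLM: string in, string out
chat_model = ChatOpenAI()  # ChatModel: messages in, message out

text = "What would be a good company name for a company that makes colorful socks?"
messages = [HumanMessage(content=text)]

llm.predict(text)                      # -> a string
chat_model.predict_messages(messages)  # -> an AIMessage

# Keyword arguments passed at runtime override the configured values:
llm.predict(text, temperature=0)
```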
<ChatModel/>
## Prompt templates
@@ -51,108 +97,66 @@ Most LLM applications do not pass user input directly into an LLM. Usually they
In the previous example, the text we passed to the model contained instructions to generate a company name. For our application, it'd be great if the user only had to provide the description of a company/product, without having to worry about giving the model instructions.
PromptTemplates help with exactly this!
They bundle up all the logic for going from user input into a fully formatted prompt.
This can start off very simple - for example, a prompt to produce the above string would just be:
import PromptTemplateLLM from "@snippets/get_started/quickstart/prompt_templates_llms.mdx"
import PromptTemplateChatModel from "@snippets/get_started/quickstart/prompt_templates_chat_models.mdx"
<Tabs>
<TabItem value="llms" label="LLMs" default>
With PromptTemplates this is easy! In this case our template would be very simple:
<PromptTemplateLLM/>
</TabItem>
<TabItem value="chat_models" label="Chat models">
Similar to LLMs, you can make use of templating by using a `MessagePromptTemplate`. You can build a `ChatPromptTemplate` from one or more `MessagePromptTemplate`s. You can use `ChatPromptTemplate`'s `format_messages` method to generate the formatted messages.
However, the advantages of using these over raw string formatting are several.
You can "partial" out variables - eg you can format only some of the variables at a time.
You can compose them together, easily combining different templates into a single prompt.
For explanations of these functionalities, see the [section on prompts](/docs/modules/model_io/prompts) for more detail.
Because this is generating a list of messages, it is slightly more complex than the normal prompt template, which generates only a string. Please see the detailed guides on prompts to understand more options available to you here.
PromptTemplates can also be used to produce a list of messages.
In this case, the prompt not only contains information about the content, but also about each message (its role, its position in the list, etc.)
Here, what happens most often is a ChatPromptTemplate is a list of ChatMessageTemplates.
Each ChatMessageTemplate contains instructions for how to format that ChatMessage - its role, and then also its content.
Let's take a look at this below:
<PromptTemplateChatModel/>
</TabItem>
</Tabs>
## Chains
ChatPromptTemplates can also include other things besides ChatMessageTemplates - see the [section on prompts](/docs/modules/model_io/prompts) for more detail.
Now that we've got a model and a prompt template, we'll want to combine the two. Chains give us a way to link (or chain) together multiple primitives, like models, prompts, and other chains.
## Output Parsers
import ChainLLM from "@snippets/get_started/quickstart/chains_llms.mdx"
import ChainChatModel from "@snippets/get_started/quickstart/chains_chat_models.mdx"
OutputParsers convert the raw output of an LLM into a format that can be used downstream.
There are a few main types of OutputParsers, including:
<Tabs>
<TabItem value="llms" label="LLMs" default>
- Convert text from LLM -> structured information (e.g. JSON)
- Convert a ChatMessage into just a string
- Convert the extra information returned from a call besides the message (like OpenAI function invocation) into a string.
The simplest and most common type of chain is an LLMChain, which passes an input first to a PromptTemplate and then to an LLM. We can construct an LLM chain from our existing model and prompt template.
For full information on this, see the [section on output parsers](/docs/modules/model_io/output_parsers)
<ChainLLM/>
In this getting started guide, we will write our own output parser - one that converts a comma-separated list into a list.
There we go, our first chain! Understanding how this simple chain works will set you up well for working with more complex chains.
import OutputParser from "@snippets/get_started/quickstart/output_parser.mdx"
</TabItem>
<TabItem value="chat_models" label="Chat models">
<OutputParser/>
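A minimal sketch of what such a parser can look like (illustrative; the snippet imported above may differ in detail):
```python
from langchain.schema import BaseOutputParser

class CommaSeparatedListOutputParser(BaseOutputParser):
    """Parse the output of an LLM call into a comma-separated list."""

    def parse(self, text: str) -> list:
        """Split the raw LLM text on commas."""
        return text.strip().split(", ")

CommaSeparatedListOutputParser().parse("hi, bye")
# -> ['hi', 'bye']
```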
The `LLMChain` can be used with chat models as well:
## LLMChain
<ChainChatModel/>
</TabItem>
</Tabs>
We can now combine all these into one chain.
This chain will take input variables, pass those to a prompt template to create a prompt, pass the prompt to an LLM, and then pass the output through an (optional) output parser.
This is a convenient way to bundle up a modular piece of logic.
Let's see it in action!
## Agents
import LLMChain from "@snippets/get_started/quickstart/llm_chain.mdx"
import AgentLLM from "@snippets/get_started/quickstart/agents_llms.mdx"
import AgentChatModel from "@snippets/get_started/quickstart/agents_chat_models.mdx"
<LLMChain/>
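For instance, a sketch that bundles a chat model, a prompt template, and the list parser from the previous section into one `LLMChain` (assumes `OPENAI_API_KEY` is set and a langchain version whose `from_messages` accepts (role, template) tuples):
```python
from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import ChatPromptTemplate

chat_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant who generates comma separated lists."),
    ("human", "{text}"),
])
chain = LLMChain(
    llm=ChatOpenAI(),
    prompt=chat_prompt,
    output_parser=CommaSeparatedListOutputParser(),  # parser sketched earlier
)
chain.run("colors")
# -> e.g. ['red', 'blue', 'green', 'yellow', 'orange']
```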
Our first chain ran a pre-determined sequence of steps. To handle complex workflows, we need to be able to dynamically choose actions based on inputs.
## Next Steps
Agents do just this: they use a language model to determine which actions to take and in what order. Agents are given access to tools, and they repeatedly choose a tool, run the tool, and observe the output until they come up with a final answer.
This is it!
We've now gone over how to create the core building block of LangChain applications - the LLMChain.
There is a lot more nuance in all these components (LLMs, prompts, output parsers) and a lot more different components to learn about as well.
To continue on your journey:
To load an agent, you need to choose a(n):
- LLM/Chat model: The language model powering the agent.
- Tool(s): A function that performs a specific duty. This can be things like: Google Search, Database lookup, Python REPL, other chains. For a list of predefined tools and their specifications, see the [Tools documentation](/docs/modules/agents/tools/).
- Agent name: A string that references a supported agent class. An agent class is largely parameterized by the prompt the language model uses to determine which action to take. Because this notebook focuses on the simplest, highest level API, this only covers using the standard supported agents. If you want to implement a custom agent, see [here](/docs/modules/agents/how_to/custom_agent.html). For a list of supported agents and their specifications, see [here](/docs/modules/agents/agent_types/).
For this example, we'll be using SerpAPI to query a search engine.
You'll need to install the SerpAPI Python package:
```bash
pip install google-search-results
```
And set the `SERPAPI_API_KEY` environment variable.
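A minimal sketch of loading and running such an agent (assumes `OPENAI_API_KEY` and `SERPAPI_API_KEY` are both set; the question is illustrative):
```python
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
# serpapi for search, llm-math for the calculator tool
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
agent.run("What was the high temperature in SF yesterday in Fahrenheit?")
```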
<Tabs>
<TabItem value="llms" label="LLMs" default>
<AgentLLM/>
</TabItem>
<TabItem value="chat_models" label="Chat models">
Agents can also be used with chat models; you can initialize one using `AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION` as the agent type.
<AgentChatModel/>
</TabItem>
</Tabs>
## Memory
The chains and agents we've looked at so far have been stateless, but for many applications it's necessary to reference past interactions. This is clearly the case with a chatbot, for example, where you want it to understand new messages in the context of past messages.
The Memory module gives you a way to maintain application state. The base Memory interface is simple: it lets you update state given the latest run inputs and outputs and it lets you modify (or contextualize) the next input using the stored state.
There are a number of built-in memory systems. The simplest of these is a buffer memory which just prepends the last few inputs/outputs to the current input - we will use this in the example below.
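A minimal sketch of buffer memory via `ConversationChain`, which uses `ConversationBufferMemory` by default (assumes `OPENAI_API_KEY` is set):
```python
from langchain.chains import ConversationChain
from langchain.llms import OpenAI

conversation = ConversationChain(llm=OpenAI(temperature=0), verbose=True)
conversation.run("Hi there!")
conversation.run("What did I just say?")  # the buffer carries the prior turn
```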
import MemoryLLM from "@snippets/get_started/quickstart/memory_llms.mdx"
import MemoryChatModel from "@snippets/get_started/quickstart/memory_chat_models.mdx"
<Tabs>
<TabItem value="llms" label="LLMs" default>
<MemoryLLM/>
</TabItem>
<TabItem value="chat_models" label="Chat models">
You can use Memory with chains and agents initialized with chat models. The main difference between this and Memory for LLMs is that rather than trying to condense all previous messages into a string, we can keep them as their own unique memory object.
<MemoryChatModel/>
</TabItem>
</Tabs>
- [Dive deeper](/docs/modules/model_io) into LLMs, prompts, and output parsers
- Learn the other [key components](/docs/modules)
- Check out our [helpful guides](/docs/guides) for detailed walkthroughs on particular topics
- Explore [end-to-end use cases](/docs/use_cases)

View File

@@ -0,0 +1,24 @@
---
sidebar_position: 3
---
# Comparison Evaluators
Comparison evaluators in LangChain help compare the outputs of two different chains or LLMs. These evaluators are helpful for comparative analyses, such as A/B testing between two language models, or comparing different versions of the same model. They can also be useful for things like generating preference scores for AI-assisted reinforcement learning.
These evaluators inherit from the `PairwiseStringEvaluator` class, providing a comparison interface for two strings - typically, the outputs from two different prompts or models, or two versions of the same model. In essence, a comparison evaluator performs an evaluation on a pair of strings and returns a dictionary containing the evaluation score and other relevant details.
To create a custom comparison evaluator, inherit from the `PairwiseStringEvaluator` class and overwrite the `_evaluate_string_pairs` method. If you require asynchronous evaluation, also overwrite the `_aevaluate_string_pairs` method.
Here's a summary of the key methods and properties of a comparison evaluator:
- `evaluate_string_pairs`: Evaluate the output string pairs. This function should be overwritten when creating custom evaluators.
- `aevaluate_string_pairs`: Asynchronously evaluate the output string pairs. This function should be overwritten for asynchronous evaluation.
- `requires_input`: This property indicates whether this evaluator requires an input string.
- `requires_reference`: This property specifies whether this evaluator requires a reference label.
Detailed information about creating custom evaluators and the available built-in comparison evaluators is provided in the following sections.
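As a minimal sketch, a toy custom evaluator (not one of the built-in implementations) might look like this:
```python
from typing import Any, Optional

from langchain.evaluation import PairwiseStringEvaluator

class LengthComparisonEvaluator(PairwiseStringEvaluator):
    """Toy evaluator that prefers the shorter of two predictions."""

    def _evaluate_string_pairs(
        self,
        *,
        prediction: str,
        prediction_b: str,
        reference: Optional[str] = None,
        input: Optional[str] = None,
        **kwargs: Any,
    ) -> dict:
        score = 1.0 if len(prediction) <= len(prediction_b) else 0.0
        return {"score": score, "reasoning": "Shorter output preferred."}

LengthComparisonEvaluator().evaluate_string_pairs(
    prediction="short", prediction_b="a much longer output"
)
# -> {'score': 1.0, 'reasoning': 'Shorter output preferred.'}
```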
import DocCardList from "@theme/DocCardList";
<DocCardList />

View File

@@ -0,0 +1,31 @@
---
sidebar_position: 6
---
import DocCardList from "@theme/DocCardList";
# Evaluation
Building applications with language models involves many moving parts. One of the most critical components is ensuring that the outcomes produced by your models are reliable and useful across a broad array of inputs, and that they work well with your application's other software components. Ensuring reliability usually boils down to some combination of application design, testing & evaluation, and runtime checks.
The guides in this section review the APIs and functionality LangChain provides to help you better evaluate your applications. Evaluation and testing are both critical when thinking about deploying LLM applications, since production environments require repeatable and useful outcomes.
LangChain offers various types of evaluators to help you measure performance and integrity on diverse data, and we hope to encourage the community to create and share other useful evaluators so everyone can improve. These docs will introduce the evaluator types, how to use them, and provide some examples of their use in real-world scenarios.
Each evaluator type in LangChain comes with ready-to-use implementations and an extensible API that allows for customization according to your unique requirements. Here are some of the types of evaluators we offer:
- [String Evaluators](/docs/guides/evaluation/string/): These evaluators assess the predicted string for a given input, usually comparing it against a reference string.
- [Trajectory Evaluators](/docs/guides/evaluation/trajectory/): These are used to evaluate the entire trajectory of agent actions.
- [Comparison Evaluators](/docs/guides/evaluation/comparison/): These evaluators are designed to compare predictions from two runs on a common input.
These evaluators can be used across various scenarios and can be applied to different chain and LLM implementations in the LangChain library.
We are also working to share guides and cookbooks that demonstrate how to use these evaluators in real-world scenarios, such as:
- [Chain Comparisons](/docs/guides/evaluation/examples/comparisons): This example uses a comparison evaluator to predict the preferred output. It reviews ways to measure confidence intervals to select statistically significant differences in aggregate preference scores across different models or prompts.
## Reference Docs
For detailed information on the available evaluators, including how to instantiate, configure, and customize them, check out the [reference documentation](https://api.python.langchain.com/en/latest/api_reference.html#module-langchain.evaluation) directly.
<DocCardList />

View File

@@ -0,0 +1,27 @@
---
sidebar_position: 2
---
# String Evaluators
A string evaluator is a component within LangChain designed to assess the performance of a language model by comparing its generated outputs (predictions) to a reference string or an input. This comparison is a crucial step in the evaluation of language models, providing a measure of the accuracy or quality of the generated text.
In practice, string evaluators are typically used to evaluate a predicted string against a given input, such as a question or a prompt. Often, a reference label or context string is provided to define what a correct or ideal response would look like. These evaluators can be customized to tailor the evaluation process to fit your application's specific requirements.
To create a custom string evaluator, inherit from the `StringEvaluator` class and implement the `_evaluate_strings` method. If you require asynchronous support, also implement the `_aevaluate_strings` method.
Here's a summary of the key attributes and methods associated with a string evaluator:
- `evaluation_name`: Specifies the name of the evaluation.
- `requires_input`: Boolean attribute that indicates whether the evaluator requires an input string. If True, the evaluator will raise an error when the input isn't provided. If False, a warning will be logged if an input _is_ provided, indicating that it will not be considered in the evaluation.
- `requires_reference`: Boolean attribute specifying whether the evaluator requires a reference label. If True, the evaluator will raise an error when the reference isn't provided. If False, a warning will be logged if a reference _is_ provided, indicating that it will not be considered in the evaluation.
String evaluators also implement the following methods:
- `aevaluate_strings`: Asynchronously evaluates the output of the Chain or Language Model, with support for optional input and label.
- `evaluate_strings`: Synchronously evaluates the output of the Chain or Language Model, with support for optional input and label.
The following sections provide detailed information on available string evaluator implementations as well as how to create a custom string evaluator.
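As a minimal sketch, a toy custom string evaluator (an exact-match check, not a built-in implementation) might look like this:
```python
from typing import Any, Optional

from langchain.evaluation import StringEvaluator

class ExactMatchEvaluator(StringEvaluator):
    """Toy evaluator that checks whether the prediction equals the reference."""

    @property
    def requires_reference(self) -> bool:
        return True

    def _evaluate_strings(
        self,
        *,
        prediction: str,
        reference: Optional[str] = None,
        input: Optional[str] = None,
        **kwargs: Any,
    ) -> dict:
        return {"score": int(prediction.strip() == (reference or "").strip())}

ExactMatchEvaluator().evaluate_strings(prediction="Paris", reference="Paris")
# -> {'score': 1}
```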
import DocCardList from "@theme/DocCardList";
<DocCardList />

View File

@@ -0,0 +1,28 @@
---
sidebar_position: 4
---
# Trajectory Evaluators
Trajectory Evaluators in LangChain provide a more holistic approach to evaluating an agent. These evaluators assess the full sequence of actions taken by an agent and their corresponding responses, which we refer to as the "trajectory". This allows you to better measure an agent's effectiveness and capabilities.
A Trajectory Evaluator implements the `AgentTrajectoryEvaluator` interface, which requires two main methods:
- `evaluate_agent_trajectory`: This method synchronously evaluates an agent's trajectory.
- `aevaluate_agent_trajectory`: This asynchronous counterpart allows evaluations to be run in parallel for efficiency.
Both methods accept three main parameters:
- `input`: The initial input given to the agent.
- `prediction`: The final predicted response from the agent.
- `agent_trajectory`: The intermediate steps taken by the agent, given as a list of tuples.
These methods return a dictionary. It is recommended that custom implementations return a `score` (a float indicating the effectiveness of the agent) and `reasoning` (a string explaining the reasoning behind the score).
You can capture an agent's trajectory by initializing the agent with the `return_intermediate_steps=True` parameter. This lets you collect all intermediate steps without relying on special callbacks.
For a deeper dive into the implementation and use of Trajectory Evaluators, refer to the sections below.
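As a minimal sketch, a toy implementation of the interface (not a built-in evaluator) might look like this:
```python
from typing import Any, Optional, Sequence, Tuple

from langchain.evaluation import AgentTrajectoryEvaluator
from langchain.schema import AgentAction

class StepCountEvaluator(AgentTrajectoryEvaluator):
    """Toy evaluator that scores shorter trajectories higher."""

    def _evaluate_agent_trajectory(
        self,
        *,
        prediction: str,
        input: str,
        agent_trajectory: Sequence[Tuple[AgentAction, str]],
        reference: Optional[str] = None,
        **kwargs: Any,
    ) -> dict:
        n = len(agent_trajectory)
        return {"score": 1.0 / (1 + n), "reasoning": f"Agent took {n} steps."}

StepCountEvaluator().evaluate_agent_trajectory(
    input="What is 2 + 2?",
    prediction="4",
    agent_trajectory=[],  # captured via return_intermediate_steps=True
)
# -> {'score': 1.0, 'reasoning': 'Agent took 0 steps.'}
```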
import DocCardList from "@theme/DocCardList";
<DocCardList />

View File

@@ -3,46 +3,80 @@ sidebar_position: 4
---
# Agents
Some applications require a flexible chain of calls to LLMs and other tools based on user input. The **Agent** interface provides the flexibility for such applications. An agent has access to a suite of tools, and determines which ones to use depending on the user input. Agents can use multiple tools, and use the output of one tool as the input to the next.
The core idea of agents is to use an LLM to choose a sequence of actions to take.
In chains, a sequence of actions is hardcoded (in code).
In agents, a language model is used as a reasoning engine to determine which actions to take and in which order.
There are two main types of agents:
There are several key components here:
- **Action agents**: at each timestep, decide on the next action using the outputs of all previous actions
- **Plan-and-execute agents**: decide on the full sequence of actions up front, then execute them all without updating the plan
## Agent
Action agents are suitable for small tasks, while plan-and-execute agents are better for complex or long-running tasks that require maintaining long-term objectives and focus. Often the best approach is to combine the dynamism of an action agent with the planning abilities of a plan-and-execute agent by letting the plan-and-execute agent use action agents to execute plans.
This is the class responsible for deciding what step to take next.
This is powered by a language model and a prompt.
This prompt can include things like:
For a full list of agent types see [agent types](/docs/modules/agents/agent_types/). Additional abstractions involved in agents are:
- [**Tools**](/docs/modules/agents/tools/): the actions an agent can take. What tools you give an agent highly depend on what you want the agent to do
- [**Toolkits**](/docs/modules/agents/toolkits/): wrappers around collections of tools that can be used together for a specific use case. For example, in order for an agent to
interact with a SQL database, it will likely need one tool to execute queries and another to inspect tables
1. The personality of the agent (useful for having it respond in a certain way)
2. Background context for the agent (useful for giving it more context on the types of tasks it's being asked to do)
3. Prompting strategies to invoke better reasoning (the most famous/widely used being [ReAct](https://arxiv.org/abs/2210.03629))
## Action agents
LangChain provides a few different types of agents to get started.
Even then, you will likely want to customize those agents with parts (1) and (2).
For a full list of agent types see [agent types](/docs/modules/agents/agent_types/)
At a high-level an action agent:
1. Receives user input
2. Decides which tool, if any, to use and the tool input
3. Calls the tool and records the output (also known as an "observation")
4. Decides the next step using the history of tools, tool inputs, and observations
5. Repeats 3-4 until it determines it can respond directly to the user
## Tools
Action agents are wrapped in **agent executors**, which are responsible for calling the agent, getting back an action and action input, calling the tool that the action references with the generated input, getting the output of the tool, and then passing all that information back into the agent to get the next action it should take.
Tools are functions that an agent calls.
There are two important considerations here:
Although an agent can be constructed in many ways, it typically involves these components:
1. Giving the agent access to the right tools
2. Describing the tools in a way that is most helpful to the agent
- **Prompt template**: Responsible for taking the user input and previous steps and constructing a prompt
to send to the language model
- **Language model**: Takes the prompt with user input and action history and decides what to do next
- **Output parser**: Takes the output of the language model and parses it into the next action or a final answer
Without both, the agent you are trying to build will not work.
If you don't give the agent access to a correct set of tools, it will never be able to accomplish the objective.
If you don't describe the tools properly, the agent won't know how to properly use them.
## Plan-and-execute agents
LangChain provides a wide set of tools to get started, but also makes it easy to define your own (including custom descriptions).
For a full list of tools, see [here](/docs/modules/agents/tools/)
At a high-level a plan-and-execute agent:
1. Receives user input
2. Plans the full sequence of steps to take
3. Executes the steps in order, passing the outputs of past steps as inputs to future steps
## Toolkits
The most typical implementation is to have the planner be a language model, and the executor be an action agent. Read more [here](/docs/modules/agents/agent_types/plan_and_execute.html).
Often the set of tools an agent has access to is more important than a single tool.
For this LangChain provides the concept of toolkits - groups of tools needed to accomplish specific objectives.
There are generally around 3-5 tools in a toolkit.
LangChain provides a wide set of toolkits to get started.
For a full list of toolkits, see [here](/docs/modules/agents/toolkits/)
## AgentExecutor
The agent executor is the runtime for an agent.
This is what actually calls the agent and executes the actions it chooses.
Pseudocode for this runtime is below:
```python
next_action = agent.get_action(...)
while next_action != AgentFinish:
observation = run(next_action)
next_action = agent.get_action(..., next_action, observation)
return next_action
```
While this may seem simple, there are several complexities this runtime handles for you, including:
1. Handling cases where the agent selects a non-existent tool
2. Handling cases where the tool errors
3. Handling cases where the agent produces output that cannot be parsed into a tool invocation
4. Logging and observability at all levels (agent decisions, tool calls) either to stdout or [LangSmith](https://smith.langchain.com).
## Other types of agent runtimes
The `AgentExecutor` class is the main agent runtime supported by LangChain.
However, there are other, more experimental runtimes we also support.
These include:
- [Plan-and-execute Agent](/docs/modules/agents/agent_types/plan_and_execute.html)
- [Baby AGI](/docs/use_cases/autonomous_agents/baby_agi.html)
- [Auto GPT](/docs/use_cases/autonomous_agents/autogpt.html)
## Get started

View File

@@ -3,8 +3,8 @@ sidebar_position: 3
---
# Toolkits
:::info
Head to [Integrations](/docs/integrations/toolkits/) for documentation on built-in toolkit integrations.
:::
Toolkits are collections of tools that are designed to be used together for specific tasks and have convenience loading methods.
import DocCardList from "@theme/DocCardList";
<DocCardList />

View File

@@ -1,2 +0,0 @@
label: 'How-to'
position: 0

View File

@@ -3,6 +3,10 @@ sidebar_position: 2
---
# Tools
:::info
Head to [Integrations](/docs/integrations/tools/) for documentation on built-in tool integrations.
:::
Tools are interfaces that an agent can use to interact with the world.
## Get started

View File

@@ -1 +0,0 @@
label: 'Integrations'

View File

@@ -1,2 +0,0 @@
label: 'How-to'
position: 0

View File

@@ -3,6 +3,10 @@ sidebar_position: 5
---
# Callbacks
:::info
Head to [Integrations](/docs/integrations/callbacks/) for documentation on built-in callbacks integrations with 3rd-party tools.
:::
LangChain provides a callbacks system that allows you to hook into the various stages of your LLM application. This is useful for logging, monitoring, streaming, and other tasks.
import GetStarted from "@snippets/modules/callbacks/get_started.mdx"
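A minimal sketch of attaching the built-in `StdOutCallbackHandler` to a chain (assumes `OPENAI_API_KEY` is set; passing the handler here is equivalent to `verbose=True`):
```python
from langchain.callbacks import StdOutCallbackHandler
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

handler = StdOutCallbackHandler()
chain = LLMChain(
    llm=OpenAI(),
    prompt=PromptTemplate.from_template("1 + {number} = "),
    callbacks=[handler],  # fires on every run of this chain
)
chain.run(number=2)
```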

View File

@@ -1 +0,0 @@
label: 'Integrations'

View File

@@ -1,2 +0,0 @@
label: 'How-to'
position: 0

View File

@@ -3,6 +3,10 @@ sidebar_position: 0
---
# Document loaders
:::info
Head to [Integrations](/docs/integrations/document_loaders/) for documentation on built-in document loader integrations with 3rd-party tools.
:::
Use document loaders to load data from a source as `Document`s. A `Document` is a piece of text
and associated metadata. For example, there are document loaders for loading a simple `.txt` file, for loading the text
contents of any web page, or even for loading a transcript of a YouTube video.
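A minimal sketch using the simplest loader (`./example.txt` is a hypothetical local file):
```python
from langchain.document_loaders import TextLoader

loader = TextLoader("./example.txt")  # hypothetical path
docs = loader.load()
# -> [Document(page_content='...', metadata={'source': './example.txt'})]
```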

View File

@@ -3,6 +3,10 @@ sidebar_position: 1
---
# Document transformers
:::info
Head to [Integrations](/docs/integrations/document_transformers/) for documentation on built-in document transformer integrations with 3rd-party tools.
:::
Once you've loaded documents, you'll often want to transform them to better suit your application. The simplest example
is you may want to split a long document into smaller chunks that can fit into your model's context window. LangChain
has a number of built-in document transformers that make it easy to split, combine, filter, and otherwise manipulate documents.
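A minimal sketch of the most common transformation, splitting a long text into overlapping chunks:
```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

long_text = "LangChain makes it easy to build LLM applications. " * 40
splitter = RecursiveCharacterTextSplitter(chunk_size=100, chunk_overlap=20)
chunks = splitter.create_documents([long_text])  # -> a list of Document chunks
```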

View File

@@ -1,2 +0,0 @@
label: 'How-to'
position: 0

View File

@@ -3,6 +3,10 @@ sidebar_position: 4
---
# Retrievers
:::info
Head to [Integrations](/docs/integrations/retrievers/) for documentation on built-in retriever integrations with 3rd-party tools.
:::
A retriever is an interface that returns documents given an unstructured query. It is more general than a vector store.
A retriever does not need to be able to store documents, only to return (or retrieve) them. Vector stores can be used
as the backbone of a retriever, but there are other types of retrievers as well.
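A minimal sketch of a retriever backed by a vector store (FAISS as an illustrative choice; assumes `OPENAI_API_KEY` is set and `pip install faiss-cpu`):
```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

vectorstore = FAISS.from_texts(
    ["LangChain makes LLM apps composable"], OpenAIEmbeddings()
)
retriever = vectorstore.as_retriever()
docs = retriever.get_relevant_documents("What does LangChain do?")
```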

View File

@@ -3,6 +3,10 @@ sidebar_position: 2
---
# Text embedding models
:::info
Head to [Integrations](/docs/integrations/text_embedding/) for documentation on built-in integrations with text embedding model providers.
:::
The Embeddings class is a class designed for interfacing with text embedding models. There are lots of embedding model providers (OpenAI, Cohere, Hugging Face, etc) - this class is designed to provide a standard interface for all of them.
Embeddings create a vector representation of a piece of text. This is useful because it means we can think about text in the vector space, and do things like semantic search where we look for pieces of text that are most similar in the vector space.
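A minimal sketch of the two core methods (OpenAI as an illustrative provider; assumes `OPENAI_API_KEY` is set):
```python
from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
query_vector = embeddings.embed_query("Hello world")              # one vector
doc_vectors = embeddings.embed_documents(["doc one", "doc two"])  # one vector per doc
```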

View File

@@ -3,6 +3,10 @@ sidebar_position: 3
---
# Vector stores
:::info
Head to [Integrations](/docs/integrations/vectorstores/) for documentation on built-in integrations with 3rd-party vector stores.
:::
One of the most common ways to store and search over unstructured data is to embed it and store the resulting embedding
vectors, and then at query time to embed the unstructured query and retrieve the embedding vectors that are
'most similar' to the embedded query. A vector store takes care of storing embedded data and performing vector search

View File

@@ -1,8 +0,0 @@
---
sidebar_position: 3
---
# Comparison Evaluators
import DocCardList from "@theme/DocCardList";
<DocCardList />

View File

@@ -1,28 +0,0 @@
---
sidebar_position: 6
---
import DocCardList from "@theme/DocCardList";
# Evaluation
Language models can be unpredictable. This makes it challenging to ship reliable applications to production, where repeatable, useful outcomes across diverse inputs are a minimum requirement. Tests help demonstrate that each component in an LLM application can produce the required or expected functionality. These tests also safeguard against regressions while you improve interconnected pieces of an integrated system. However, measuring the quality of generated text can be challenging. It can be hard to agree on the right set of metrics for your application, and it can be difficult to translate those into better performance. Furthermore, it's common to lack sufficient evaluation data to adequately test the range of inputs and expected outputs for each component when you're just getting started. The LangChain community is building open source tools and guides to help address these challenges.
LangChain exposes different types of evaluators for common types of evaluation. Each type has off-the-shelf implementations you can use to get started, as well as an
extensible API so you can create your own or contribute improvements for everyone to use. The following sections have example notebooks for you to get started.
- [String Evaluators](/docs/modules/evaluation/string/): Evaluate the predicted string for a given input, usually against a reference string
- [Trajectory Evaluators](/docs/modules/evaluation/trajectory/): Evaluate the whole trajectory of agent actions
- [Comparison Evaluators](/docs/modules/evaluation/comparison/): Compare predictions from two runs on a common input
This section also provides some additional examples of how you could use these evaluators for different scenarios or apply to different chain implementations in the LangChain library. Some examples include:
- [Preference Scoring Chain Outputs](/docs/modules/evaluation/examples/comparisons): An example using a comparison evaluator on different models or prompts to select statistically significant differences in aggregate preference scores
## Reference Docs
For detailed information on the available evaluators, including how to instantiate, configure, and customize them, check out the [reference documentation](https://api.python.langchain.com/en/latest/api_reference.html#module-langchain.evaluation) directly.
<DocCardList />

View File

@@ -1,8 +0,0 @@
---
sidebar_position: 4
---
# Trajectory Evaluators
import DocCardList from "@theme/DocCardList";
<DocCardList />

View File

@@ -1,2 +0,0 @@
label: 'How-to'
position: 0

View File

@@ -6,6 +6,10 @@ sidebar_position: 3
🚧 _Docs under construction_ 🚧
:::info
Head to [Integrations](/docs/integrations/memory/) for documentation on built-in memory integrations with 3rd-party tools.
:::
By default, Chains and Agents are stateless,
meaning that they treat each incoming query independently (like the underlying LLMs and chat models themselves).
In some applications, like chatbots, it is essential

View File

@@ -1 +0,0 @@
label: 'Integrations'

View File

@@ -1,2 +0,0 @@
label: 'How-to'
position: 0

View File

@@ -3,18 +3,16 @@ sidebar_position: 1
---
# Chat models
:::info
Head to [Integrations](/docs/integrations/chat/) for documentation on built-in integrations with chat model providers.
:::
Chat models are a variation on language models.
While chat models use language models under the hood, the interface they expose is a bit different.
Rather than expose a "text in, text out" API, they expose an interface where "chat messages" are the inputs and outputs.
Chat model APIs are fairly new, so we are still figuring out the correct abstractions.
The following sections of documentation are provided:
- **How-to guides**: Walkthroughs of core functionality, like streaming, creating chat prompts, etc.
- **Integrations**: How to use different chat model providers (OpenAI, Anthropic, etc).
## Get started
import GetStarted from "@snippets/modules/model_io/models/chat/get_started.mdx"

View File

@@ -1,2 +0,0 @@
label: 'How-to'
position: 0

View File

@@ -3,14 +3,12 @@ sidebar_position: 0
---
# LLMs
:::info
Head to [Integrations](/docs/integrations/llms/) for documentation on built-in integrations with LLM providers.
:::
Large Language Models (LLMs) are a core component of LangChain.
LangChain does not serve it's own LLMs, but rather provides a standard interface for interacting with many different LLMs.
For more detailed documentation check out our:
- **How-to guides**: Walkthroughs of core functionality, like streaming, async, etc.
- **Integrations**: How to use different LLM providers (OpenAI, Anthropic, etc.)
LangChain does not serve its own LLMs, but rather provides a standard interface for interacting with many different LLMs.
## Get started

View File

@@ -148,6 +148,28 @@ const config = {
navbar: {
title: "🦜️🔗 LangChain",
items: [
{
to: "/docs/get_started/introduction",
label: "Docs",
position: "left",
},
{
type: 'docSidebar',
position: 'left',
sidebarId: 'use_cases',
label: 'Use cases',
},
{
type: 'docSidebar',
position: 'left',
sidebarId: 'integrations',
label: 'Integrations',
},
{
href: "https://api.python.langchain.com",
label: "API",
position: "left",
},
{
to: "https://smith.langchain.com",
label: "LangSmith",
@@ -161,8 +183,9 @@ const config = {
// Please keep GitHub link to the right for consistency.
{
href: "https://github.com/hwchase17/langchain",
label: "GitHub",
position: "right",
position: 'right',
className: 'header-github-link',
'aria-label': 'GitHub repository',
},
],
},

View File

@@ -0,0 +1,150 @@
import importlib
import inspect
import json
import logging
import os
import re
from pathlib import Path

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

# Base URL for all class documentation
_BASE_URL = "https://api.python.langchain.com/en/latest/"

# Regular expression to match Python code blocks
code_block_re = re.compile(r"^(```python\n)(.*?)(```\n)", re.DOTALL | re.MULTILINE)
# Regular expression to match langchain import lines
_IMPORT_RE = re.compile(r"(from\s+(langchain\.\w+(\.\w+)*?)\s+import\s+)(\w+)")

_CURRENT_PATH = Path(__file__).parent.absolute()
# Directory where generated markdown files are stored
_DOCS_DIR = _CURRENT_PATH / "docs"
_JSON_PATH = _CURRENT_PATH.parent / "api_reference" / "guide_imports.json"


def find_files(path):
    """Find all MDX files in the given path"""
    for root, _, files in os.walk(path):
        for file in files:
            if file.endswith(".mdx") or file.endswith(".md"):
                yield os.path.join(root, file)


def get_full_module_name(module_path, class_name):
    """Get full module name using inspect"""
    module = importlib.import_module(module_path)
    class_ = getattr(module, class_name)
    return inspect.getmodule(class_).__name__


def main():
    """Main function"""
    global_imports = {}
    for file in find_files(_DOCS_DIR):
        print(f"Adding links for imports in {file}")
        # replace_imports now returns the import information rather than writing it to a file
        file_imports = replace_imports(file)
        if file_imports:
            # Use relative file path as key
            relative_path = os.path.relpath(file, _DOCS_DIR)
            doc_url = f"https://python.langchain.com/docs/{relative_path.replace('.mdx', '').replace('.md', '')}"
            for import_info in file_imports:
                doc_title = import_info["title"]
                class_name = import_info["imported"]
                if class_name not in global_imports:
                    global_imports[class_name] = {}
                global_imports[class_name][doc_title] = doc_url
    # Write the global imports information to a JSON file
    with _JSON_PATH.open("w") as f:
        json.dump(global_imports, f)


def _get_doc_title(data: str, file_name: str) -> str:
    try:
        return re.findall(r"^#\s+(.*)", data, re.MULTILINE)[0]
    except IndexError:
        pass
    # Parse the rst-style titles
    try:
        return re.findall(r"^(.*)\n=+\n", data, re.MULTILINE)[0]
    except IndexError:
        return file_name


def replace_imports(file):
    """Replace imports in each Python code block with links to their documentation and append the import info in a comment"""
    all_imports = []
    with open(file, "r") as f:
        data = f.read()
    file_name = os.path.basename(file)
    _DOC_TITLE = _get_doc_title(data, file_name)

    def replacer(match):
        # Extract the code block content
        code = match.group(2)
        # Remove any pre-existing import comment so it is not duplicated
        # TODO: Use our own custom <code> component rather than this
        # injection method
        existing_comment_re = re.compile(r"^<!--IMPORTS:.*?-->\n", re.MULTILINE)
        code = existing_comment_re.sub("", code)

        # Process imports in the code block
        imports = []
        for import_match in _IMPORT_RE.finditer(code):
            class_name = import_match.group(4)
            try:
                module_path = get_full_module_name(import_match.group(2), class_name)
            except AttributeError as e:
                logger.warning(f"Could not find module for {class_name}, {e}")
                continue
            except ImportError as e:
                # Some CentOS OpenSSL issues can cause this to fail
                logger.warning(f"Failed to load for class {class_name}, {e}")
                continue
            # _BASE_URL already ends with a slash, so don't insert another one here
            url = (
                _BASE_URL
                + module_path.split(".")[1]
                + "/"
                + module_path
                + "."
                + class_name
                + ".html"
            )
            # Add the import information to our list
            imports.append(
                {
                    "imported": class_name,
                    "source": import_match.group(2),
                    "docs": url,
                    "title": _DOC_TITLE,
                }
            )
        if imports:
            all_imports.extend(imports)
            # Create a unique comment containing the import information
            import_comment = f"<!--IMPORTS:{json.dumps(imports)}-->"
            # Inject the import comment at the start of the code block
            return match.group(1) + import_comment + "\n" + code + match.group(3)
        else:
            # If there are no imports, return the original match
            return match.group(0)

    # Use re.sub to replace each Python code block
    data = code_block_re.sub(replacer, data)
    with open(file, "w") as f:
        f.write(data)
    return all_imports


if __name__ == "__main__":
    main()
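As a standalone sketch of what `_IMPORT_RE` captures (illustrative; these groups drive the link construction above):

```python
import re

_IMPORT_RE = re.compile(r"(from\s+(langchain\.\w+(\.\w+)*?)\s+import\s+)(\w+)")

match = _IMPORT_RE.search("from langchain.chat_models import ChatOpenAI")
print(match.group(2))  # 'langchain.chat_models' -> resolved to the full module path
print(match.group(4))  # 'ChatOpenAI'            -> the class name linked to the API docs
```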

View File

@@ -20,7 +20,7 @@
module.exports = {
// By default, Docusaurus generates a sidebar from the docs folder structure
sidebar: [
docs: [
{
type: "category",
label: "Get started",
@@ -30,7 +30,7 @@ module.exports = {
link: {
type: 'generated-index',
description: 'Get started with LangChain',
slug: "get_started",
slug: "get_started",
},
},
{
@@ -44,17 +44,6 @@ module.exports = {
id: "modules/index"
},
},
{
type: "category",
label: "Use cases",
collapsed: true,
items: [{ type: "autogenerated", dirName: "use_cases" }],
link: {
type: 'generated-index',
description: 'Walkthroughs of common end-to-end use cases',
slug: "use_cases",
},
},
{
type: "category",
label: "Guides",
@@ -63,7 +52,7 @@ module.exports = {
link: {
type: 'generated-index',
description: 'Design guides for key parts of the development process',
slug: "guides",
slug: "guides",
},
},
{
@@ -73,7 +62,7 @@ module.exports = {
items: [{ type: "autogenerated", dirName: "ecosystem" }],
link: {
type: 'generated-index',
slug: "ecosystem",
slug: "ecosystem",
},
},
{
@@ -83,18 +72,32 @@ module.exports = {
items: [{ type: "autogenerated", dirName: "additional_resources" }, { type: "link", label: "Gallery", href: "https://github.com/kyrolabs/awesome-langchain" }],
link: {
type: 'generated-index',
slug: "additional_resources",
slug: "additional_resources",
},
},
],
integrations: [
{
type: "html",
value: "<hr>",
defaultStyle: true,
type: "category",
label: "Integrations",
collapsible: false,
items: [{ type: "autogenerated", dirName: "integrations" }],
link: {
type: 'generated-index',
slug: "integrations",
},
},
],
use_cases: [
{
type: "link",
href: "https://api.python.langchain.com",
label: "API reference",
type: "category",
label: "Use cases",
collapsible: false,
items: [{ type: "autogenerated", dirName: "use_cases" }],
link: {
type: 'generated-index',
slug: "use_cases",
},
},
],
};

View File

@@ -139,4 +139,22 @@
.hidden {
display: none !important;
}
.header-github-link:hover {
opacity: 0.6;
}
.header-github-link::before {
content: '';
width: 24px;
height: 24px;
display: flex;
background: url("data:image/svg+xml,%3Csvg viewBox='0 0 24 24' xmlns='http://www.w3.org/2000/svg'%3E%3Cpath d='M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12'/%3E%3C/svg%3E")
no-repeat;
}
[data-theme='dark'] .header-github-link::before {
background: url("data:image/svg+xml,%3Csvg viewBox='0 0 24 24' xmlns='http://www.w3.org/2000/svg'%3E%3Cpath fill='white' d='M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12'/%3E%3C/svg%3E")
no-repeat;
}

View File

@@ -21,7 +21,7 @@ function Imports({ imports }) {
</h4>
<ul style={{ paddingBottom: "1rem" }}>
{imports.map(({ imported, source, docs }) => (
<li>
<li key={imported}>
<a href={docs}>
<span>{imported}</span>
</a>{" "}
@@ -34,14 +34,25 @@ function Imports({ imports }) {
}
export default function CodeBlockWrapper({ children, ...props }) {
// Initialize imports as an empty array
let imports = [];
// Check if children is a string
if (typeof children === "string") {
return <CodeBlock {...props}>{children}</CodeBlock>;
// Search for an IMPORTS comment in the code
const match = /<!--IMPORTS:(.*?)-->\n/.exec(children);
if (match) {
imports = JSON.parse(match[1]);
children = children.replace(match[0], "");
}
} else if (children.imports) {
imports = children.imports;
}
return (
<>
<CodeBlock {...props}>{children.content}</CodeBlock>
<Imports imports={children.imports} />
<CodeBlock {...props}>{children}</CodeBlock>
{imports.length > 0 && <Imports imports={imports} />}
</>
);
}
}

File diff suppressed because it is too large

View File

@@ -1,10 +1,32 @@
#!/bin/bash
### See: https://github.com/urllib3/urllib3/issues/2168
# Requests lib breaks for old SSL versions,
# which are defaults on Amazon Linux 2 (which Vercel uses for builds)
yum -y update
yum remove openssl-devel -y
yum install gcc bzip2-devel libffi-devel zlib-devel wget tar -y
yum install openssl11 -y
yum install openssl11-devel -y
# Install python 3.11 to connect with openSSL 1.1.1
wget https://www.python.org/ftp/python/3.11.4/Python-3.11.4.tgz
tar xzf Python-3.11.4.tgz
cd Python-3.11.4
./configure
make altinstall
# Check python version
echo "Python Version"
python3.11 --version
cd ..
python3 --version
python3 -m venv .venv
###
# Install nbdev and generate docs
cd ..
python3.11 -m venv .venv
source .venv/bin/activate
python3 -m pip install -r requirements.txt
python3.11 -m pip install --upgrade pip
python3.11 -m pip install -r vercel_requirements.txt
cp -r extras/* docs_skeleton/docs
cd docs_skeleton
nbdoc_build
python3.11 generate_api_reference_links.py

View File

@@ -31,7 +31,7 @@ There isn't any special setup for it.
## LLM
See a [usage example](/docs/modules/model_io/models/llms/integrations/INCLUDE_REAL_NAME.html).
See a [usage example](/docs/integrations/llms/INCLUDE_REAL_NAME).
```python
from langchain.llms import integration_class_REPLACE_ME
@@ -40,7 +40,7 @@ from langchain.llms import integration_class_REPLACE_ME
## Text Embedding Models
See a [usage example](/docs/modules/data_connection/text_embedding/integrations/INCLUDE_REAL_NAME.html)
See a [usage example](/docs/integrations/text_embedding/INCLUDE_REAL_NAME)
```python
from langchain.embeddings import integration_class_REPLACE_ME
@@ -49,7 +49,7 @@ from langchain.embeddings import integration_class_REPLACE_ME
## Chat Models
See a [usage example](/docs/modules/model_io/models/chat/integrations/INCLUDE_REAL_NAME.html)
See a [usage example](/docs/integrations/chat/INCLUDE_REAL_NAME)
```python
from langchain.chat_models import integration_class_REPLACE_ME
@@ -57,7 +57,7 @@ from langchain.chat_models import integration_class_REPLACE_ME
## Document Loader
See a [usage example](/docs/modules/data_connection/document_loaders/integrations/INCLUDE_REAL_NAME.html).
See a [usage example](/docs/integrations/document_loaders/INCLUDE_REAL_NAME).
```python
from langchain.document_loaders import integration_class_REPLACE_ME

View File

@@ -1,66 +0,0 @@
# Modal
This page covers how to use the Modal ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Modal wrappers.
## Installation and Setup
- Install with `pip install modal-client`
- Run `modal token new`
## Define your Modal Functions and Webhooks
Your webhook must accept a prompt, and its response must follow a rigid structure:
```python
class Item(BaseModel):
prompt: str
@stub.webhook(method="POST")
def my_webhook(item: Item):
return {"prompt": my_function.call(item.prompt)}
```
An example with GPT2:
```python
from pydantic import BaseModel
import modal
stub = modal.Stub("example-get-started")
volume = modal.SharedVolume().persist("gpt2_model_vol")
CACHE_PATH = "/root/model_cache"
@stub.function(
gpu="any",
image=modal.Image.debian_slim().pip_install(
"tokenizers", "transformers", "torch", "accelerate"
),
shared_volumes={CACHE_PATH: volume},
retries=3,
)
def run_gpt2(text: str):
from transformers import GPT2Tokenizer, GPT2LMHeadModel
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')
encoded_input = tokenizer(text, return_tensors='pt').input_ids
output = model.generate(encoded_input, max_length=50, do_sample=True)
return tokenizer.decode(output[0], skip_special_tokens=True)
class Item(BaseModel):
prompt: str
@stub.webhook(method="POST")
def get_text(item: Item):
return {"prompt": run_gpt2.call(item.prompt)}
```
## Wrappers
### LLM
There exists a Modal LLM wrapper, which you can access with
```python
from langchain.llms import Modal
```
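A usage sketch, assuming a webhook deployed as above (the endpoint URL below is a hypothetical placeholder):

```python
from langchain.llms import Modal

# Hypothetical endpoint URL; replace with the URL of the webhook you deployed
llm = Modal(endpoint_url="https://YOUR-WORKSPACE--example-get-started-get-text.modal.run")
print(llm("What is the capital of France?"))
```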

View File

@@ -0,0 +1,381 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "2da95378",
"metadata": {},
"source": [
"# Pairwise String Comparison\n",
"\n",
"Often you will want to compare predictions of an LLM, Chain, or Agent for a given input. The `StringComparison` evaluators facilitate this so you can answer questions like:\n",
"\n",
"- Which LLM or prompt produces a preferred output for a given question?\n",
"- Which examples should I include for few-shot example selection?\n",
"- Which output is better to include for fintetuning?\n",
"\n",
"The simplest and often most reliable automated way to choose a preferred prediction for a given input is to use the `pairwise_string` evaluator.\n",
"\n",
"Check out the reference docs for the [PairwiseStringEvalChain](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain.html#langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain) for more info."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "f6790c46",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.evaluation import load_evaluator\n",
"\n",
"evaluator = load_evaluator(\"labeled_pairwise_string\")"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "49ad9139",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"{'reasoning': 'Both responses are relevant to the question asked, as they both provide a numerical answer to the question about the number of dogs in the park. However, Response A is incorrect according to the reference answer, which states that there are four dogs. Response B, on the other hand, is correct as it matches the reference answer. Neither response demonstrates depth of thought, as they both simply provide a numerical answer without any additional information or context. \\n\\nBased on these criteria, Response B is the better response.\\n',\n",
" 'value': 'B',\n",
" 'score': 0}"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"evaluator.evaluate_string_pairs(\n",
" prediction=\"there are three dogs\",\n",
" prediction_b=\"4\",\n",
" input=\"how many dogs are in the park?\",\n",
" reference=\"four\",\n",
")"
]
},
{
"cell_type": "markdown",
"id": "7491d2e6-4e77-4b17-be6b-7da966785c1d",
"metadata": {},
"source": [
"## Methods\n",
"\n",
"\n",
"The pairwise string evaluator can be called using [evaluate_string_pairs](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain.html#langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain.evaluate_string_pairs) (or async [aevaluate_string_pairs](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain.html#langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain.aevaluate_string_pairs)) methods, which accept:\n",
"\n",
"- prediction (str) The predicted response of the first model, chain, or prompt.\n",
"- prediction_b (str) The predicted response of the second model, chain, or prompt.\n",
"- input (str) The input question, prompt, or other text.\n",
"- reference (str) (Only for the labeled_pairwise_string variant) The reference response.\n",
"\n",
"They return a dictionary with the following values:\n",
"- value: 'A' or 'B', indicating whether `prediction` or `prediction_b` is preferred, respectively\n",
"- score: Integer 0 or 1 mapped from the 'value', where a score of 1 would mean that the first `prediction` is preferred, and a score of 0 would mean `prediction_b` is preferred.\n",
"- reasoning: String \"chain of thought reasoning\" from the LLM generated prior to creating the score"
]
},
{
"cell_type": "markdown",
"id": "ed353b93-be71-4479-b9c0-8c97814c2e58",
"metadata": {},
"source": [
"## Without References\n",
"\n",
"When references aren't available, you can still predict the preferred response.\n",
"The results will reflect the evaluation model's preference, which is less reliable and may result\n",
"in preferences that are factually incorrect."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "586320da",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.evaluation import load_evaluator\n",
"\n",
"evaluator = load_evaluator(\"pairwise_string\")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "7f56c76e-a39b-4509-8b8a-8a2afe6c3da1",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"{'reasoning': 'Both responses are correct and relevant to the question. However, Response B is more helpful and insightful as it provides a more detailed explanation of what addition is. Response A is correct but lacks depth as it does not explain what the operation of addition entails. \\n\\nFinal Decision: [[B]]',\n",
" 'value': 'B',\n",
" 'score': 0}"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"evaluator.evaluate_string_pairs(\n",
" prediction=\"Addition is a mathematical operation.\",\n",
" prediction_b=\"Addition is a mathematical operation that adds two numbers to create a third number, the 'sum'.\",\n",
" input=\"What is addition?\",\n",
")"
]
},
{
"cell_type": "markdown",
"id": "4a09b21d-9851-47e8-93d3-90044b2945b0",
"metadata": {
"tags": []
},
"source": [
"## Defining the Criteria\n",
"\n",
"By default, the LLM is instructed to select the 'preferred' response based on helpfulness, relevance, correctness, and depth of thought. You can customize the criteria by passing in a `criteria` argument, where the criteria could take any of the following forms:\n",
"- [`Criteria`](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.Criteria.html#langchain.evaluation.criteria.eval_chain.Criteria) enum or its string value - to use one of the default criteria and their descriptions\n",
"- [Constitutional principal](https://api.python.langchain.com/en/latest/chains/langchain.chains.constitutional_ai.models.ConstitutionalPrinciple.html#langchain.chains.constitutional_ai.models.ConstitutionalPrinciple) - use one any of the constitutional principles defined in langchain\n",
"- Dictionary: a list of custom criteria, where the key is the name of the criteria, and the value is the description.\n",
"- A list of criteria or constitutional principles - to combine multiple criteria in one.\n",
"\n",
"Below is an example for determining preferred writing responses based on a custom style."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "8539e7d9-f7b0-4d32-9c45-593a7915c093",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"custom_criteria = {\n",
" \"simplicity\": \"Is the language straightforward and unpretentious?\",\n",
" \"clarity\": \"Are the sentences clear and easy to understand?\",\n",
" \"precision\": \"Is the writing precise, with no unnecessary words or details?\",\n",
" \"truthfulness\": \"Does the writing feel honest and sincere?\",\n",
" \"subtext\": \"Does the writing suggest deeper meanings or themes?\",\n",
"}\n",
"evaluator = load_evaluator(\"pairwise_string\", criteria=custom_criteria)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "fec7bde8-fbdc-4730-8366-9d90d033c181",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"{'reasoning': 'Response A is simple, clear, and precise. It uses straightforward language to convey a deep and sincere message about families. The metaphor of joy and sorrow as music is effective and easy to understand.\\n\\nResponse B, on the other hand, is more complex and less clear. The language is more pretentious, with words like \"domicile,\" \"resounds,\" \"abode,\" \"dissonant,\" and \"elegy.\" While it conveys a similar message to Response A, it does so in a more convoluted way. The precision is also lacking due to the use of unnecessary words and details.\\n\\nBoth responses suggest deeper meanings or themes about the shared joy and unique sorrow in families. However, Response A does so in a more effective and accessible way.\\n\\nTherefore, the better response is [[A]].',\n",
" 'value': 'A',\n",
" 'score': 1}"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"evaluator.evaluate_string_pairs(\n",
" prediction=\"Every cheerful household shares a similar rhythm of joy; but sorrow, in each household, plays a unique, haunting melody.\",\n",
" prediction_b=\"Where one finds a symphony of joy, every domicile of happiness resounds in harmonious,\"\n",
" \" identical notes; yet, every abode of despair conducts a dissonant orchestra, each\"\n",
" \" playing an elegy of grief that is peculiar and profound to its own existence.\",\n",
" input=\"Write some prose about families.\",\n",
")"
]
},
{
"cell_type": "markdown",
"id": "a25b60b2-627c-408a-be4b-a2e5cbc10726",
"metadata": {},
"source": [
"## Customize the LLM\n",
"\n",
"By default, the loader uses `gpt-4` in the evaluation chain. You can customize this when loading."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "de84a958-1330-482b-b950-68bcf23f9e35",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import ChatAnthropic\n",
"\n",
"llm = ChatAnthropic(temperature=0)\n",
"\n",
"evaluator = load_evaluator(\"labeled_pairwise_string\", llm=llm)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "e162153f-d50a-4a7c-a033-019dabbc954c",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"{'reasoning': 'Here is my assessment:\\n\\nResponse B is more helpful, insightful, and accurate than Response A. Response B simply states \"4\", which directly answers the question by providing the exact number of dogs mentioned in the reference answer. In contrast, Response A states \"there are three dogs\", which is incorrect according to the reference answer. \\n\\nIn terms of helpfulness, Response B gives the precise number while Response A provides an inaccurate guess. For relevance, both refer to dogs in the park from the question. However, Response B is more correct and factual based on the reference answer. Response A shows some attempt at reasoning but is ultimately incorrect. Response B requires less depth of thought to simply state the factual number.\\n\\nIn summary, Response B is superior in terms of helpfulness, relevance, correctness, and depth. My final decision is: [[B]]\\n',\n",
" 'value': 'B',\n",
" 'score': 0}"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"evaluator.evaluate_string_pairs(\n",
" prediction=\"there are three dogs\",\n",
" prediction_b=\"4\",\n",
" input=\"how many dogs are in the park?\",\n",
" reference=\"four\",\n",
")"
]
},
{
"cell_type": "markdown",
"id": "e0e89c13-d0ad-4f87-8fcb-814399bafa2a",
"metadata": {},
"source": [
"## Customize the Evaluation Prompt\n",
"\n",
"You can use your own custom evaluation prompt to add more task-specific instructions or to instruct the evaluator to score the output.\n",
"\n",
"*Note: If you use a prompt that expects generates a result in a unique format, you may also have to pass in a custom output parser (`output_parser=your_parser()`) instead of the default `PairwiseStringResultOutputParser`"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "fb817efa-3a4d-439d-af8c-773b89d97ec9",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.prompts import PromptTemplate\n",
"\n",
"prompt_template = PromptTemplate.from_template(\n",
" \"\"\"Given the input context, which do you prefer: A or B?\n",
"Evaluate based on the following criteria:\n",
"{criteria}\n",
"Reason step by step and finally, respond with either [[A]] or [[B]] on its own line.\n",
"\n",
"DATA\n",
"----\n",
"input: {input}\n",
"reference: {reference}\n",
"A: {prediction}\n",
"B: {prediction_b}\n",
"---\n",
"Reasoning:\n",
"\n",
"\"\"\"\n",
")\n",
"evaluator = load_evaluator(\n",
" \"labeled_pairwise_string\", prompt=prompt_template\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "d40aa4f0-cfd5-4cb4-83c8-8d2300a04c2f",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"input_variables=['prediction', 'reference', 'prediction_b', 'input'] output_parser=None partial_variables={'criteria': 'helpfulness: Is the submission helpful, insightful, and appropriate?\\nrelevance: Is the submission referring to a real quote from the text?\\ncorrectness: Is the submission correct, accurate, and factual?\\ndepth: Does the submission demonstrate depth of thought?'} template='Given the input context, which do you prefer: A or B?\\nEvaluate based on the following criteria:\\n{criteria}\\nReason step by step and finally, respond with either [[A]] or [[B]] on its own line.\\n\\nDATA\\n----\\ninput: {input}\\nreference: {reference}\\nA: {prediction}\\nB: {prediction_b}\\n---\\nReasoning:\\n\\n' template_format='f-string' validate_template=True\n"
]
}
],
"source": [
"# The prompt was assigned to the evaluator\n",
"print(evaluator.prompt)"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "9467bb42-7a31-4071-8f66-9ed2c6f06dcd",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"{'reasoning': 'Helpfulness: Both A and B are helpful as they provide a direct answer to the question.\\nRelevance: A is relevant as it refers to the correct name of the dog from the text. B is not relevant as it provides a different name.\\nCorrectness: A is correct as it accurately states the name of the dog. B is incorrect as it provides a different name.\\nDepth: Both A and B demonstrate a similar level of depth as they both provide a straightforward answer to the question.\\n\\nGiven these evaluations, the preferred response is:\\n',\n",
" 'value': 'A',\n",
" 'score': 1}"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"evaluator.evaluate_string_pairs(\n",
" prediction=\"The dog that ate the ice cream was named fido.\",\n",
" prediction_b=\"The dog's name is spot\",\n",
" input=\"What is the name of the dog that ate the ice cream?\",\n",
" reference=\"The dog's name is fido\",\n",
")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.2"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -24,18 +24,15 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 1,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.evaluation.comparison import PairwiseStringEvalChain\n",
"from langchain.evaluation import load_evaluator\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-4\")\n",
"\n",
"eval_chain = PairwiseStringEvalChain.from_llm(llm=llm)"
"eval_chain = load_evaluator(\"pairwise_string\")"
]
},
{
@@ -50,7 +47,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 2,
"metadata": {
"tags": []
},
@@ -59,13 +56,13 @@
"name": "stderr",
"output_type": "stream",
"text": [
"Found cached dataset parquet (/Users/wfh/.cache/huggingface/datasets/LangChainDatasets___parquet/LangChainDatasets--langchain-howto-queries-bbb748bbee7e77aa/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec)\n"
"Found cached dataset parquet (/Users/wfh/.cache/huggingface/datasets/LangChainDatasets___parquet/LangChainDatasets--langchain-howto-queries-bbb748bbee7e77aa/0.0.0/14a00e99c0d15a23649d0db8944380ac81082d4b021f398733dd84f3a6c569a7)\n"
]
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "d852a1884480457292c90d8bd9d4f1e6",
"model_id": "a2358d37246640ce95e0f9940194590a",
"version_major": 2,
"version_minor": 0
},
@@ -94,7 +91,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 3,
"metadata": {
"tags": []
},
@@ -127,7 +124,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 4,
"metadata": {
"tags": []
},
@@ -152,7 +149,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 5,
"metadata": {
"tags": []
},
@@ -160,7 +157,7 @@
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "b076d6bf6680422aa9082d4bad4d98a3",
"model_id": "87277cb39a1a4726bb7cc533a24e2ea4",
"version_major": 2,
"version_minor": 0
},
@@ -170,14 +167,6 @@
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"Retrying langchain.chat_models.openai.acompletion_with_retry.<locals>._completion_with_retry in 1.0 seconds as it raised ServiceUnavailableError: The server is overloaded or not ready yet..\n",
"Retrying langchain.chat_models.openai.acompletion_with_retry.<locals>._completion_with_retry in 1.0 seconds as it raised ServiceUnavailableError: The server is overloaded or not ready yet..\n"
]
}
],
"source": [
@@ -215,7 +204,7 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 6,
"metadata": {
"tags": []
},
@@ -252,7 +241,7 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
@@ -270,7 +259,7 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 8,
"metadata": {
"tags": []
},
@@ -279,8 +268,8 @@
"name": "stdout",
"output_type": "stream",
"text": [
"OpenAI Functions Agent: 90.00%\n",
"Structured Chat Agent: 10.00%\n"
"OpenAI Functions Agent: 95.00%\n",
"None: 5.00%\n"
]
}
],
@@ -310,7 +299,7 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 9,
"metadata": {
"tags": []
},
@@ -349,7 +338,7 @@
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 10,
"metadata": {
"tags": []
},
@@ -358,8 +347,8 @@
"name": "stdout",
"output_type": "stream",
"text": [
"The \"OpenAI Functions Agent\" would be preferred between 69.90% and 97.21% percent of the time (with 95% confidence).\n",
"The \"Structured Chat Agent\" would be preferred between 2.79% and 30.10% percent of the time (with 95% confidence).\n"
"The \"OpenAI Functions Agent\" would be preferred between 83.18% and 100.00% percent of the time (with 95% confidence).\n",
"The \"Structured Chat Agent\" would be preferred between 0.00% and 16.82% percent of the time (with 95% confidence).\n"
]
}
],
@@ -380,7 +369,7 @@
},
{
"cell_type": "code",
"execution_count": 18,
"execution_count": 11,
"metadata": {
"tags": []
},
@@ -389,9 +378,17 @@
"name": "stdout",
"output_type": "stream",
"text": [
"The p-value is 0.00040. If the null hypothesis is true (i.e., if the selected eval chain actually has no preference between the models),\n",
"then there is a 0.04025% chance of observing the OpenAI Functions Agent be preferred at least 18\n",
"times out of 20 trials.\n"
"The p-value is 0.00000. If the null hypothesis is true (i.e., if the selected eval chain actually has no preference between the models),\n",
"then there is a 0.00038% chance of observing the OpenAI Functions Agent be preferred at least 19\n",
"times out of 19 trials.\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"/var/folders/gf/6rnp_mbx5914kx7qmmh7xzmw0000gn/T/ipykernel_15978/384907688.py:6: DeprecationWarning: 'binom_test' is deprecated in favour of 'binomtest' from version 1.7.0 and will be removed in Scipy 1.12.0.\n",
" p_value = stats.binom_test(successes, n, p=0.5, alternative=\"two-sided\")\n"
]
}
],

View File

@@ -0,0 +1,318 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"id": "bce7335e-f3b2-44f3-90cc-8c0a23a89a21",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from langchain.agents import load_tools\n",
"from langchain.agents import initialize_agent\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.utilities import GoogleSearchAPIWrapper\n",
"from langchain.schema import (\n",
" SystemMessage,\n",
" HumanMessage,\n",
" AIMessage\n",
")\n",
"\n",
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"# os.environ[\"LANGCHAIN_ENDPOINT\"] = \"https://api.smith.langchain.com\"\n",
"# os.environ[\"LANGCHAIN_API_KEY\"] = \"******\"\n",
"# os.environ[\"LANGCHAIN_PROJECT\"] = \"Jarvis\"\n",
"\n",
"\n",
"prefix_messages = [{\"role\": \"system\", \"content\": \"You are a helpful discord Chatbot.\"}]\n",
"\n",
"llm = ChatOpenAI(model_name='gpt-3.5-turbo', \n",
" temperature=0.5, \n",
" max_tokens = 2000)\n",
"tools = load_tools([\"serpapi\", \"llm-math\"], llm=llm)\n",
"agent = initialize_agent(tools,\n",
" llm,\n",
" agent=\"zero-shot-react-description\",\n",
" verbose=True,\n",
" handle_parsing_errors=True\n",
" )\n",
"\n",
"\n",
"async def on_ready():\n",
" print(f'{bot.user} has connected to Discord!')\n",
"\n",
"async def on_message(message):\n",
"\n",
" print(\"Detected bot name in message:\", message.content)\n",
"\n",
" # Capture the output of agent.run() in the response variable\n",
" response = agent.run(message.content)\n",
"\n",
" while response:\n",
" print(response)\n",
" chunk, response = response[:2000], response[2000:]\n",
" print(f\"Chunk: {chunk}\")\n",
" print(\"Response sent.\")\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "1551ce9f-b6de-4035-b6d6-825722823b48",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from dataclasses import dataclass\n",
"@dataclass\n",
"class Message:\n",
" content: str"
]
},
{
"cell_type": "code",
"execution_count": 23,
"id": "6e6859ec-8544-4407-9663-6b53c0092903",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Detected bot name in message: Hi AI, how are you today?\n",
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mThis question is not something that can be answered using the available tools.\n",
"Action: N/A\u001b[0m\n",
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
"Action: N/A\u001b[0m\n",
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
"Action: N/A\u001b[0m\n",
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
"Action: N/A\u001b[0m\n",
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
"Action: N/A\u001b[0m\n",
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
"Action: N/A\u001b[0m\n",
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
"Action: N/A\u001b[0m\n",
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
"Action: N/A\u001b[0m\n",
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
"Action: N/A\u001b[0m\n",
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
"Action: N/A\u001b[0m\n",
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
"Action: N/A\u001b[0m\n",
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
"Action: N/A\u001b[0m\n",
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
"Action: N/A\u001b[0m\n",
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
"Action: N/A\u001b[0m\n",
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
"Action: N/A\u001b[0m\n",
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
"Thought:\u001b[32;1m\u001b[1;3m\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"Agent stopped due to iteration limit or time limit.\n",
"Chunk: Agent stopped due to iteration limit or time limit.\n",
"Response sent.\n"
]
}
],
"source": [
"await on_message(Message(content=\"Hi AI, how are you today?\"))"
]
},
{
"cell_type": "code",
"execution_count": 24,
"id": "b850294c-7f8f-4e79-adcf-47e4e3a898df",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langsmith import Client\n",
"\n",
"client = Client()"
]
},
{
"cell_type": "code",
"execution_count": 25,
"id": "6d089ddc-69bc-45a8-b8db-9962e4f1f5ee",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from itertools import islice\n",
"\n",
"runs = list(islice(client.list_runs(), 10))"
]
},
{
"cell_type": "code",
"execution_count": 38,
"id": "f0349fac-5a98-400f-ba03-61ed4e1332be",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"runs = sorted(runs, key=lambda x: x.start_time, reverse=True)"
]
},
{
"cell_type": "code",
"execution_count": 26,
"id": "02f133f0-39ee-4b46-b443-12c1f9b76fff",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"ids = [run.id for run in runs]"
]
},
{
"cell_type": "code",
"execution_count": 39,
"id": "3366dce4-0c38-4a7d-8111-046a58b24917",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"runs2 = list(client.list_runs(id=ids))\n",
"runs2 = sorted(runs2, key=lambda x: x.start_time, reverse=True)"
]
},
{
"cell_type": "code",
"execution_count": 42,
"id": "82915b90-39a0-47d6-9121-56a13f210f52",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"['a36092d2-4ad5-4fb4-9b0d-0dba9a2ed836',\n",
" '9398e6be-964f-4aa4-8de9-ad78cd4b7074']"
]
},
"execution_count": 42,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"[str(x) for x in ids[:2]]"
]
},
{
"cell_type": "code",
"execution_count": 48,
"id": "f610ec91-dc48-4a17-91c5-5c4675c77abc",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langsmith.run_helpers import traceable\n",
"\n",
"@traceable(run_type=\"llm\", name=\"\"\"<iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/dQw4w9WgXcQ?start=5\" title=\"YouTube video player\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" allowfullscreen></iframe>\"\"\")\n",
"def foo():\n",
" return \"bar\""
]
},
{
"cell_type": "code",
"execution_count": 49,
"id": "bd317bd7-8b2a-433a-8ec3-098a84ba8e64",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"'bar'"
]
},
"execution_count": 49,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"foo()"
]
},
{
"cell_type": "code",
"execution_count": 52,
"id": "b142519b-6885-415c-83b9-4a346fb90589",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.llms import AzureOpenAI"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5c50bb2b-72b8-4322-9b16-d857ecd9f347",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.2"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -5,16 +5,13 @@
"id": "4cf569a7-9a1d-4489-934e-50e57760c907",
"metadata": {},
"source": [
"# Evaluating Custom Criteria\n",
"# Criteria Evaluation\n",
"\n",
"Suppose you want to test a model's output against a custom rubric or custom set of criteria, how would you go about testing this?\n",
"In scenarios where you wish to assess a model's output using a specific rubric or criteria set, the `criteria` evaluator proves to be a handy tool. It allows you to verify if an LLM or Chain's output complies with a defined set of criteria.\n",
"\n",
"The `criteria` evaluator is a convenient way to predict whether an LLM or Chain's output complies with a set of criteria, so long as you can\n",
"properly define those criteria.\n",
"To understand its functionality and configurability in depth, refer to the reference documentation of the [CriteriaEvalChain](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.html#langchain.evaluation.criteria.eval_chain.CriteriaEvalChain) class.\n",
"\n",
"For more details, check out the reference docs for the [CriteriaEvalChain](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.html#langchain.evaluation.criteria.eval_chain.CriteriaEvalChain)'s class definition.\n",
"\n",
"### Without References\n",
"### Usage without references\n",
"\n",
"In this example, you will use the `CriteriaEvalChain` to check whether an output is concise. First, create the evaluation chain to predict whether outputs are \"concise\"."
]
@@ -30,7 +27,12 @@
"source": [
"from langchain.evaluation import load_evaluator\n",
"\n",
"evaluator = load_evaluator(\"criteria\", criteria=\"conciseness\")"
"evaluator = load_evaluator(\"criteria\", criteria=\"conciseness\")\n",
"\n",
"# This is equivalent to loading using the enum\n",
"from langchain.evaluation import EvaluatorType\n",
"\n",
"evaluator = load_evaluator(EvaluatorType.CRITERIA, criteria=\"conciseness\")"
]
},
{
@@ -45,7 +47,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"{'reasoning': 'The criterion is conciseness. This means the submission should be brief and to the point. \\n\\nLooking at the submission, the answer to the task is included, but there is additional commentary that is not necessary to answer the question. The phrase \"That\\'s an elementary question\" and \"The answer you\\'re looking for is\" could be removed and the answer would still be clear and correct. \\n\\nTherefore, the submission is not concise and does not meet the criterion. \\n\\nN', 'value': 'N', 'score': 0}\n"
"{'reasoning': 'The criterion is conciseness, which means the submission should be brief and to the point. \\n\\nLooking at the submission, the answer to the question \"What\\'s 2+2?\" is indeed \"four\". However, the respondent has added extra information, stating \"That\\'s an elementary question.\" This statement does not contribute to answering the question and therefore makes the response less concise.\\n\\nTherefore, the submission does not meet the criterion of conciseness.\\n\\nN', 'value': 'N', 'score': 0}\n"
]
}
],
@@ -59,49 +61,20 @@
},
{
"cell_type": "markdown",
"id": "43397a9f-ccca-4f91-b0e1-df0cada2efb1",
"id": "35e61e4d-b776-4f6b-8c89-da5d3604134a",
"metadata": {},
"source": [
"**Default Criteria**\n",
"#### Output Format\n",
"\n",
"Most of the time, you'll want to define your own custom criteria (see below), but we also provide some common criteria you can load with a single string.\n",
"Here's a list of pre-implemented criteria:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "8c4ec9dd-6557-4f23-8480-c822eb6ec552",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"['conciseness',\n",
" 'relevance',\n",
" 'correctness',\n",
" 'coherence',\n",
" 'harmfulness',\n",
" 'maliciousness',\n",
" 'helpfulness',\n",
" 'controversiality',\n",
" 'mysogyny',\n",
" 'criminality',\n",
" 'insensitive']"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.evaluation import CriteriaEvalChain\n",
"All string evaluators expose an [evaluate_strings](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.html?highlight=evaluate_strings#langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.evaluate_strings) (or async [aevaluate_strings](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.html?highlight=evaluate_strings#langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.aevaluate_strings)) method, which accepts:\n",
"\n",
"# For a list of other default supported criteria, try calling `supported_default_criteria`\n",
"CriteriaEvalChain.get_supported_default_criteria()"
"- input (str) The input to the agent.\n",
"- prediction (str) The predicted response.\n",
"\n",
"The criteria evaluators return a dictionary with the following values:\n",
"- score: Binary integeer 0 to 1, where 1 would mean that the output is compliant with the criteria, and 0 otherwise\n",
"- value: A \"Y\" or \"N\" corresponding to the score\n",
"- reasoning: String \"chain of thought reasoning\" from the LLM generated prior to creating the score"
]
},
{
@@ -111,12 +84,12 @@
"source": [
"## Using Reference Labels\n",
"\n",
"Some criteria (such as correctness) require reference labels to work correctly. To do this, initialize with `requires_reference=True` and call the evaluator with a `reference` string."
"Some criteria (such as correctness) require reference labels to work correctly. To do this, initialize the `labeled_criteria` evaluator and call the evaluator with a `reference` string."
]
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 3,
"id": "20d8a86b-beba-42ce-b82c-d9e5ebc13686",
"metadata": {
"tags": []
@@ -126,13 +99,12 @@
"name": "stdout",
"output_type": "stream",
"text": [
"With ground truth: 1\n",
"Without ground truth: 0\n"
"With ground truth: 1\n"
]
}
],
"source": [
"evaluator = load_evaluator(\"criteria\", criteria=\"correctness\", requires_reference=True)\n",
"evaluator = load_evaluator(\"labeled_criteria\", criteria=\"correctness\")\n",
"\n",
"# We can even override the model's learned knowledge using ground truth labels\n",
"eval_result = evaluator.evaluate_strings(\n",
@@ -143,6 +115,51 @@
"print(f'With ground truth: {eval_result[\"score\"]}')"
]
},
{
"cell_type": "markdown",
"id": "e05b5748-d373-4ff8-85d9-21da4641e84c",
"metadata": {},
"source": [
"**Default Criteria**\n",
"\n",
"Most of the time, you'll want to define your own custom criteria (see below), but we also provide some common criteria you can load with a single string.\n",
"Here's a list of pre-implemented criteria. Note that in the absence of labels, the LLM merely predicts what it thinks the best answer is and is not grounded in actual law or context."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "47de7359-db3e-4cad-bcfa-4fe834dea893",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[<Criteria.CONCISENESS: 'conciseness'>,\n",
" <Criteria.RELEVANCE: 'relevance'>,\n",
" <Criteria.CORRECTNESS: 'correctness'>,\n",
" <Criteria.COHERENCE: 'coherence'>,\n",
" <Criteria.HARMFULNESS: 'harmfulness'>,\n",
" <Criteria.MALICIOUSNESS: 'maliciousness'>,\n",
" <Criteria.HELPFULNESS: 'helpfulness'>,\n",
" <Criteria.CONTROVERSIALITY: 'controversiality'>,\n",
" <Criteria.MISOGYNY: 'misogyny'>,\n",
" <Criteria.CRIMINALITY: 'criminality'>,\n",
" <Criteria.INSENSITIVITY: 'insensitivity'>]"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.evaluation import Criteria\n",
"\n",
"# For a list of other default supported criteria, try calling `supported_default_criteria`\n",
"list(Criteria)"
]
},
{
"cell_type": "markdown",
"id": "077c4715-e857-44a3-9f87-346642586a8d",
@@ -152,32 +169,52 @@
"\n",
"To evaluate outputs against your own custom criteria, or to be more explicit the definition of any of the default criteria, pass in a dictionary of `\"criterion_name\": \"criterion_description\"`\n",
"\n",
"Note: the evaluator still predicts whether the output complies with ALL of the criteria provided. If you specify antagonistic criteria / antonyms, the evaluator won't be very useful."
"Note: it's recommended that you create a single evaluator per criterion. This way, separate feedback can be provided for each aspect. Additionally, if you provide antagonistic criteria, the evaluator won't be very useful, as it will be configured to predict compliance for ALL of the criteria provided."
]
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 19,
"id": "bafa0a11-2617-4663-84bf-24df7d0736be",
"metadata": {},
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'reasoning': 'The criterion is asking if the output contains numeric information. The submission does mention the \"late 16th century,\" which is a numeric information. Therefore, the submission meets the criterion.\\n\\nY', 'value': 'Y', 'score': 1}\n"
"{'reasoning': \"The criterion asks if the output contains numeric or mathematical information. The joke in the submission does contain mathematical information. It refers to the mathematical concept of squaring a number and also mentions 'pi', which is a mathematical constant. Therefore, the submission does meet the criterion.\\n\\nY\", 'value': 'Y', 'score': 1}\n",
"{'reasoning': 'Let\\'s assess the submission based on the given criteria:\\n\\n1. Numeric: The output does not contain any explicit numeric information. The word \"square\" and \"pi\" are mathematical terms but they are not numeric information per se.\\n\\n2. Mathematical: The output does contain mathematical information. The terms \"square\" and \"pi\" are mathematical terms. The joke is a play on the mathematical concept of squaring a number (in this case, pi).\\n\\n3. Grammatical: The output is grammatically correct. The sentence structure, punctuation, and word usage are all correct.\\n\\n4. Logical: The output is logical. It makes sense within the context of the joke. The joke is a play on words between the mathematical concept of squaring a number (pi) and eating a square pie.\\n\\nBased on the above analysis, the submission does not meet all the criteria because it does not contain numeric information.\\nN', 'value': 'N', 'score': 0}\n"
]
}
],
"source": [
"custom_criterion = {\"numeric\": \"Does the output contain numeric information?\"}\n",
"custom_criterion = {\"numeric\": \"Does the output contain numeric or mathematical information?\"}\n",
"\n",
"eval_chain = load_evaluator(\n",
" EvaluatorType.CRITERIA,\n",
" llm=eval_llm,\n",
" criteria=custom_criterion,\n",
")\n",
"query = \"Tell me a joke\"\n",
"prediction = \"I ate some square pie but I don't know the square of pi.\"\n",
"eval_result = eval_chain.evaluate_strings(prediction=prediction, input=query)\n",
"print(eval_result)\n",
"\n",
"# If you wanted to specify multiple criteria. Generally not recommended\n",
"custom_criteria = {\n",
" \"numeric\": \"Does the output contain numeric information?\",\n",
" \"mathematical\": \"Does the output contain mathematical information?\",\n",
" \"grammatical\": \"Is the output grammatically correct?\",\n",
" \"logical\": \"Is the output logical?\",\n",
"}\n",
"\n",
"eval_chain = load_evaluator(\n",
" EvaluatorType.CRITERIA,\n",
" criteria=custom_criteria,\n",
")\n",
"eval_result = eval_chain.evaluate_strings(prediction=prediction, input=query)\n",
"print(\"Multi-criteria evaluation\")\n",
"print(eval_result)"
]
},
@@ -194,7 +231,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 9,
"id": "99e3c242-5b12-4bd5-b487-64990a159655",
"metadata": {},
"outputs": [
@@ -220,7 +257,7 @@
" ConstitutionalPrinciple(critique_request='Discuss the assistants last response to the human and whether it is in any way insensitive, sexist, racist, or socially inappropriate.', revision_request='Revise the assistants last response to remove content that is in any way insensitive, sexist, racist, or socially inappropriate.', name='insensitive'))]"
]
},
"execution_count": 7,
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
@@ -234,28 +271,23 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 12,
"id": "f4061bc0-a88a-4d1c-88a0-8003bba084c7",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"{'reasoning': 'The criterion asks to identify if the assistant\\'s response is harmful, unethical, racist, sexist, toxic, dangerous, or illegal.\\n\\nThe assistant\\'s response is \"I say that man is a lilly-livered nincompoop\". This is a negative statement about a person named Will. \\n\\nThe term \"lilly-livered\" is an old-fashioned term meaning cowardly, and \"nincompoop\" is a term meaning a foolish or stupid person. \\n\\nWhile this is a negative and potentially insulting statement, it does not appear to be harmful in a physical sense, unethical, racist, sexist, toxic, dangerous, or illegal. \\n\\nIt could be argued that the statement is harmful in a psychological or emotional sense, as it is insulting. However, without further context, it\\'s hard to definitively say if this meets the criteria of being harmful.\\n\\nN',\n",
" 'value': 'N',\n",
" 'score': 0}"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
"name": "stdout",
"output_type": "stream",
"text": [
"{'reasoning': 'The criterion asks to identify if the assistant\\'s response is harmful, unethical, racist, sexist, toxic, dangerous, or illegal.\\n\\nLooking at the assistant\\'s response, it is clear that it is not racist or sexist as it does not discriminate or stereotype based on race or gender. \\n\\nThe response is also not illegal as it does not involve any criminal activity or encourage any form of illegal behavior.\\n\\nThe response is not dangerous as it does not pose a physical threat or risk to anyone\\'s safety.\\n\\nHowever, the assistant\\'s response can be considered harmful and toxic as it uses derogatory language (\"lilly-livered nincompoop\") to describe \\'Will\\'. This can be seen as a form of verbal abuse or insult, which can cause emotional harm.\\n\\nThe response can also be seen as unethical, as it is generally considered inappropriate to insult or belittle someone in this manner.\\n\\nN', 'value': 'N', 'score': 0}\n"
]
}
],
"source": [
"eval_chain = load_evaluator(\n",
" EvaluatorType.CRITERIA, llm=eval_llm, criteria=PRINCIPLES[\"harmful1\"]\n",
"evaluator = load_evaluator(\n",
" EvaluatorType.CRITERIA, criteria=PRINCIPLES[\"harmful1\"]\n",
")\n",
"eval_result = evaluator.evaluate_strings(\n",
" prediction=\"I say that man is a lilly-livered nincompoop\",\n",
@@ -278,7 +310,7 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 13,
"id": "1717162d-f76c-4a14-9ade-168d6fa42b7a",
"metadata": {
"tags": []
@@ -291,7 +323,7 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 14,
"id": "8727e6f4-aaba-472d-bb7d-09fc1a0f0e2a",
"metadata": {
"tags": []
@@ -306,7 +338,7 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 15,
"id": "3f6f0d8b-cf42-4241-85ae-35b3ce8152a0",
"metadata": {
"tags": []
@@ -316,7 +348,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"{'reasoning': 'Here is my step-by-step reasoning for each criterion:\\n\\nconciseness: The submission is not concise. It contains unnecessary words and phrases like \"That\\'s an elementary question\" and \"you\\'re looking for\". The answer could have simply been stated as \"4\" to be concise.\\n\\nN', 'value': 'N', 'score': 0}\n"
"{'reasoning': 'Step 1) Analyze the conciseness criterion: Is the submission concise and to the point?\\nStep 2) The submission provides extraneous information beyond just answering the question directly. It characterizes the question as \"elementary\" and provides reasoning for why the answer is 4. This additional commentary makes the submission not fully concise.\\nStep 3) Therefore, based on the analysis of the conciseness criterion, the submission does not meet the criteria.\\n\\nN', 'value': 'N', 'score': 0}\n"
]
}
],
@@ -340,7 +372,7 @@
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 16,
"id": "22e57704-682f-44ff-96ba-e915c73269c0",
"metadata": {
"tags": []
@@ -364,13 +396,13 @@
"prompt = PromptTemplate.from_template(fstring)\n",
"\n",
"evaluator = load_evaluator(\n",
" \"criteria\", criteria=\"correctness\", prompt=prompt, requires_reference=True\n",
" \"labeled_criteria\", criteria=\"correctness\", prompt=prompt\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": 17,
"id": "5d6b0eca-7aea-4073-a65a-18c3a9cdb5af",
"metadata": {
"tags": []
@@ -380,7 +412,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"{'reasoning': 'Correctness: No, the submission is not correct. The expected response was \"It\\'s 17 now.\" but the response given was \"What\\'s 2+2? That\\'s an elementary question. The answer you\\'re looking for is that two and two is four.\"', 'value': 'N', 'score': 0}\n"
"{'reasoning': 'Correctness: No, the response is not correct. The expected response was \"It\\'s 17 now.\" but the response given was \"What\\'s 2+2? That\\'s an elementary question. The answer you\\'re looking for is that two and two is four.\"', 'value': 'N', 'score': 0}\n"
]
}
],
@@ -404,6 +436,12 @@
"\n",
"Remember when selecting criteria to decide whether they ought to require ground truth labels or not. Things like \"correctness\" are best evaluated with ground truth or with extensive context. Also, remember to pick aligned principles for a given chain so that the classification makes sense."
]
},
{
"cell_type": "markdown",
"id": "a684e2f1",
"metadata": {},
"source": []
}
],
"metadata": {

View File

@@ -53,7 +53,7 @@
{
"data": {
"text/plain": [
"{'score': 12}"
"{'score': 0.11555555555555552}"
]
},
"execution_count": 3,
@@ -79,7 +79,7 @@
{
"data": {
"text/plain": [
"{'score': 4}"
"{'score': 0.0724999999999999}"
]
},
"execution_count": 4,
@@ -143,7 +143,7 @@
"outputs": [],
"source": [
"jaro_evaluator = load_evaluator(\n",
" \"string_distance\", distance=StringDistance.JARO, requires_reference=True\n",
" \"string_distance\", distance=StringDistance.JARO\n",
")"
]
},
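A minimal usage sketch of the string-distance evaluator (illustrative; the exact score depends on the configured distance metric):

```python
from langchain.evaluation import load_evaluator

evaluator = load_evaluator("string_distance")

result = evaluator.evaluate_strings(
    prediction="The job is completely done.",
    reference="The job is done",
)
print(result)  # e.g. {'score': 0.1155...}; lower scores mean more similar strings
```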

Some files were not shown because too many files have changed in this diff