Compare commits


106 Commits

Author SHA1 Message Date
Harrison Chase
ee8203859c add kwargs for chat models 2023-06-08 00:52:38 -07:00
Harrison Chase
893d20f735 bump version to 194 (#5866) 2023-06-07 22:47:48 -07:00
Harrison Chase
35cfd25db3 Harrison/nebula graph (#5865)
Co-authored-by: Wey Gu <weyl.gu@gmail.com>
Co-authored-by: chenweisomebody <chenweisomebody@gmail.com>
2023-06-07 21:56:43 -07:00
Harrison Chase
658f8bdee7 Harrison/fauna loader (#5864)
Co-authored-by: Shadid12 <Shadid12@users.noreply.github.com>
2023-06-07 21:32:23 -07:00
Liang Zhang
5518f24ec3 Implement saving and loading of RetrievalQA chain (#5818)
Fixes #3983
Mimicking what we do for saving and loading the VectorDBQA chain, I added
the logic for the RetrievalQA chain.
Also added a unit test. I did not find how we test other chains for
their saving and loading functionality, so I just added a file with one
test case. Let me know if there are recommended ways to test it.
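
For context, a minimal sketch of the round trip this PR enables, assuming a local FAISS index and an OpenAI key are available (the file name is illustrative):

```python
from langchain.chains import RetrievalQA
from langchain.chains.loading import load_chain
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.vectorstores import FAISS

# Build a small retriever-backed QA chain.
vectorstore = FAISS.from_texts(["harrison worked at kensho"], OpenAIEmbeddings())
qa = RetrievalQA.from_chain_type(llm=OpenAI(), retriever=vectorstore.as_retriever())

qa.save("retrieval_qa.yaml")  # serializes the chain config, not the retriever
# The retriever is not serialized, so it is supplied again on load.
qa2 = load_chain("retrieval_qa.yaml", retriever=vectorstore.as_retriever())
```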


#### Who can review?

Tag maintainers/contributors who might be interested:
@dev2049

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-06-07 21:07:13 -07:00
Liang Zhang
b93638ef1e Refactor and update databricks integration page (#5575)
2023-06-07 20:45:47 -07:00
volodymyr-memsql
a1549901ce Added SingleStoreDB Vector Store (#5619)
- Added `SingleStoreDB` vector store, which is a wrapper over the
SingleStore DB database, that can be used as a vector storage and has an
efficient similarity search.
- Added integration tests for the vector store
- Added jupyter notebook with the example

@dev2049

---------

Co-authored-by: Volodymyr Tkachuk <vtkachuk-ua@singlestore.com>
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-06-07 20:45:33 -07:00
jjzhuo
78aa59c68b Fix serialization issue with W&B (#5693)
The chain input_documents are not displaying properly in W&B, due to
serialization issue:

<img width="1164" alt="Screenshot 2023-06-04 at 11 58 26 AM"
src="https://github.com/hwchase17/langchain/assets/134809928/f31f14f6-0935-4cca-9913-6760cd40eadf">

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-06-07 20:44:59 -07:00
Alec Flett
ec0dd6e34a propagate callbacks to ConversationalRetrievalChain (#5572)
# Allow callbacks to monitor ConversationalRetrievalChain


I ran into an issue where load_qa_chain was not passing the callbacks
down to the child LLM chains, and so made sure that callbacks are
propagated. There are probably more improvements to do here but this
seemed like a good place to stop.

Note that I saw a lot of references to callbacks_manager, which seems to
be deprecated. I left that code alone for now.
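
As a hedged illustration of the behavior being fixed (the chain instance `qa_chain` is assumed), callbacks passed at call time should now reach the child LLM chains:

```python
from langchain.callbacks import StdOutCallbackHandler

# With this fix, the handler also observes the child LLM chains,
# not just the top-level ConversationalRetrievalChain.
result = qa_chain(
    {"question": "What is X?", "chat_history": []},
    callbacks=[StdOutCallbackHandler()],
)
```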




## Who can review?

Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:
@agola11
2023-06-07 20:25:21 -07:00
Jeff Vestal
3294774148 Add knn and query search field options to ElasticKnnSearch (#5641)
In the `ElasticKnnSearch` class, added 2 arguments that were not exposed
properly.

`knn_search` added:
- `vector_query_field: Optional[str] = 'vector'`
-- vector_query_field: Field name to use in knn search if not default
'vector'

`knn_hybrid_search` added:
- `vector_query_field: Optional[str] = 'vector'`
-- vector_query_field: Field name to use in knn search if not default
'vector'
- `query_field: Optional[str] = 'text'`
-- query_field: Field name to use in search if not default 'text'



Fixes https://github.com/hwchase17/langchain/issues/5633


cc: @dev2049 @hwchase17
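
A hedged sketch of the newly exposed arguments, assuming an existing `ElasticKnnSearch` instance `knn_store` (the field names here are illustrative):

```python
# knn search against a non-default vector field
results = knn_store.knn_search(
    query="what is a vector?",
    vector_query_field="my_vector",  # newly exposed; default is 'vector'
)

# hybrid search can also target a non-default text field
hybrid = knn_store.knn_hybrid_search(
    query="what is a vector?",
    vector_query_field="my_vector",
    query_field="my_text",  # newly exposed; default is 'text'
)
```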

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-06-07 20:19:14 -07:00
Mark Marryatt
cef79ca579 Fix exporting GCP Vertex Matching Engine from vectorstores (#5793)
The Vertex Matching Engine docs include [the
line](b177a29d3f/docs/modules/indexes/vectorstores/examples/matchingengine.ipynb (L32))
`from langchain.vectorstores import MatchingEngine` which doesn't work
as it wasn't added to the vectorstores module exports.



  - @dev2049
2023-06-07 19:45:33 -07:00
Dave Ingram
106364a45c Update to Getting Started docs page for Memory (#5855)
Simply fixing a small typo in the memory page. 

Also removed an extra code block at the end of the file.

Along the way, the current outputs seem to have changed in a few places,
so I left those for posterity and updated the number of runs, which seems
harmless, though I can clean that up if preferred.
2023-06-07 19:45:21 -07:00
bnassivet
9355e3f5f5 qdrant vector store - search with relevancy scores (#5781)
Implementation of similarity_search_with_relevance_scores for the Qdrant
vector store.
As implemented, the method is also compatible with other capabilities such
as filtering.

Integration tests updated.
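
A minimal usage sketch, assuming an existing Qdrant vector store `qdrant`:

```python
# Returns (Document, relevance_score) pairs; relevance scores are
# normalized so that higher means more similar.
docs_and_scores = qdrant.similarity_search_with_relevance_scores("foo", k=3)
for doc, score in docs_and_scores:
    print(score, doc.page_content)
```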


#### Who can review?

Tag maintainers/contributors who might be interested:

  VectorStores / Retrievers / Memory
  - @dev2049
2023-06-07 19:26:40 -07:00
Ning Ren
f15763518a docs: add Shale Protocol integration guide (#5814)
This PR adds documentation for Shale Protocol's integration with
LangChain.

[Shale Protocol](https://shaleprotocol.com) provides forever-free
production-ready inference APIs to the open-source community. We have
global data centers and plan to support all major open LLMs (estimated
~1,000 by 2025).

The team consists of software and ML engineers, AI researchers,
designers, and operators across North America and Asia. Together, the
team has 50+ years of experience in machine learning, cloud
infrastructure, software engineering, and product development. Team
members have worked at places like Google and Microsoft.

#### Who can review?

Tag maintainers/contributors who might be interested:

  - @hwchase17
  - @agola11

---------

Co-authored-by: Karen Sheng <46656667+karensheng@users.noreply.github.com>
2023-06-07 19:25:59 -07:00
Duarte OC
137da7e4b6 Update microsoft loader example with docx2txt dependency (#5832)
@eyurtsev
2023-06-07 19:21:48 -07:00
Aidan Holland
9f4b720a63 Add additional VertexAI Params (#5837)
## Changes

- Added the `stop` param to the `_VertexAICommon` class so it can be set
at llm initialization

## Example Usage

```python
VertexAI(
    # ...
    temperature=0.15,
    max_output_tokens=128,
    top_p=1,
    top_k=40,
    stop=["\n```"],
)
```

## Possible Reviewers

- @hwchase17 
- @agola11
2023-06-07 19:20:37 -07:00
Eduard van Valkenburg
76fcd96dae Add logging in PBI tool (#5841)

Adds some logging to the Power BI tool so that you can see the queries
being sent to PBI and the attempts to correct them.
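
To surface those logs, standard logging configuration should be enough (a hedged sketch; that the tool emits via the stdlib `logging` module is an assumption):

```python
import logging

# Assuming the Power BI tool logs through the stdlib logging module,
# raising the level makes the queries and correction attempts visible.
logging.basicConfig(level=logging.DEBUG)
```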



#### Who can review?

Tag maintainers/contributors who might be interested: @vowelparrot 

2023-06-07 19:19:21 -07:00
Matt Robinson
11fec7d4d1 feat: Add UnstructuredCSVLoader for CSV files (#5844)
### Summary

Adds an `UnstructuredCSVLoader` for loading CSVs. One advantage of using
`UnstructuredCSVLoader` relative to the standard `CSVLoader` is that if
you use `UnstructuredCSVLoader` in `"elements"` mode, an HTML
representation of the table will be available in the metadata.
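
A hedged usage sketch (the file path is illustrative; the HTML rendering typically lands under the `text_as_html` metadata key):

```python
from langchain.document_loaders import UnstructuredCSVLoader

loader = UnstructuredCSVLoader("example.csv", mode="elements")
docs = loader.load()
# In "elements" mode an HTML rendering of the table is kept in metadata.
print(docs[0].metadata["text_as_html"])
```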

#### Who can review?

@hwchase17
 @eyurtsev
2023-06-07 19:18:01 -07:00
Soos3D
0b4a51930c Add how to use a custom scraping function with the sitemap loader. (#5847)
Hi! I just added an example of how to use a custom scraping function
with the sitemap loader. I recently used this feature and had to dig in
the source code to find it. I thought it might be useful to other devs
to have an example in the Jupyter Notebook directly.

I only added the example to the documentation page. 

@eyurtsev I was not able to run the lint. Please let me know if I have
to do anything else.

I know this is a very small contribution, but I hope it will be
valuable. My Twitter handle is @web3Dav3.
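
For reference, a hedged sketch of the documented feature: `SitemapLoader` accepts a `parsing_function` that receives each page's BeautifulSoup object (the URL and the filtering logic are illustrative):

```python
from langchain.document_loaders.sitemap import SitemapLoader

def strip_nav(content) -> str:
    # Drop navigation and footer tags before extracting the text.
    for tag in content.find_all(["nav", "footer"]):
        tag.decompose()
    return str(content.get_text())

loader = SitemapLoader(
    "https://example.com/sitemap.xml",
    parsing_function=strip_nav,
)
docs = loader.load()
```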

2023-06-07 19:16:51 -07:00
Yessen Kanapin
c66755b661 Add DeepInfra embeddings integration with tests and examples, better exception handling for Deep Infra LLM (#5854)
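A hedged usage sketch of the new embeddings integration (the model id shown is the library default; the API token is read from the `DEEPINFRA_API_TOKEN` environment variable):

```python
from langchain.embeddings import DeepInfraEmbeddings

embeddings = DeepInfraEmbeddings(model_id="sentence-transformers/clip-ViT-B-32")
query_vector = embeddings.embed_query("What is Deep Infra?")
doc_vectors = embeddings.embed_documents(["foo", "bar"])
```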
#### Who can review?

Tag maintainers/contributors who might be interested:
  @hwchase17 - project lead
  - @agola11

---------

Co-authored-by: Yessen Kanapin <yessen@deepinfra.com>
2023-06-07 19:14:30 -07:00
ugfly1210
4d8cda1c3b FIX: backslash escaped (#5815)

LatexTextSplitter separators such as "\n\\chapter" contain backslashes and
need to be escaped before being used as regex patterns; otherwise it
reports an error: (re.error: bad escape \c at position 1 (line 2, column 1))
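
An illustration of the failure mode and the escaping it calls for (not necessarily the PR's exact patch):

```python
import re

separator = "\n\\chapter"
# re.split(separator, text)  # raises re.error: bad escape \c
parts = re.split(re.escape(separator), "intro\n\\chapter one\n\\chapter two")
# parts == ['intro', ' one', ' two']
```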



#### Who can review?

@hwchase17  @dev2049 


Co-authored-by: Pang <ugfly@qq.com>
2023-06-07 16:01:07 -07:00
Zander Chase
3af36943e8 Rm extraneous args to the trace group helper (#5801)
These are being ignored
2023-06-07 13:09:29 -07:00
whysage
8ef7274ee6 feat: issue-5712 add sleep tool (#5715)
Fixes #5712: added a sleep tool.
2023-06-07 09:39:02 -07:00
Zander Chase
d9fcc45d05 Add in the async methods and link the run id (#5810) 2023-06-07 08:27:44 -07:00
Harrison Chase
ce7c11625f bump version to 193 (#5838) 2023-06-07 07:38:57 -07:00
warjiang
5a207cce8f fix: fulfill openai params when embedding (#5821)

Fixes #5822
I upgraded my langchain lib by executing `pip install -U langchain`; the
version is 0.0.192. But I found that `openai.api_base` was not working. I
use the Azure OpenAI service as the openai backend, and `openai.api_base`
is very important for me. I have compared tag/0.0.192 and tag/0.0.191 and
figured out that:

![image](https://github.com/hwchase17/langchain/assets/6478745/e183fdb2-8224-45c9-b3b4-26d62823999a)
the openai params were moved inside the `_invocation_params` function and
are used in some openai invocations:

![image](https://github.com/hwchase17/langchain/assets/6478745/5a55a048-5fa9-4bf4-aaef-3902226bec5e)

![image](https://github.com/hwchase17/langchain/assets/6478745/85b8cebc-eeb8-4538-a525-814719c8f8df)
but some cases are still not covered, such as:

![image](https://github.com/hwchase17/langchain/assets/6478745/e0297620-f2b2-4f4f-98bd-d0ed19022dac)


#### Who can review?

Tag maintainers/contributors who might be interested:
@hwchase17 


---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-06-07 07:32:57 -07:00
Harrison Chase
b3ae6bcd3f bump ver to 192 (#5812) 2023-06-06 22:23:11 -07:00
Harrison Chase
5468528748 rm docs mongo (#5811) 2023-06-06 22:22:44 -07:00
Andrew Switlyk
69f4ffb851 Update adding_memory.ipynb (#5806)
Just changes "to" to "too" so it matches the prompt above.

2023-06-06 22:10:53 -07:00
Sun bin
2be4fbb835 add doc about reusing MongoDBAtlasVectorSearch (#5805)

#### Who can review?

Anyone authorized.
2023-06-06 22:10:36 -07:00
bnassivet
062c3c00a2 fixed faiss integ tests (#5808)
Fixes #5807

Realigned tests with the implementation.
Also reinforced folder uniqueness for the test_faiss_local_save_load test
using a date-time suffix.

#### Before submitting

- Integration test updated
- formatting and linting ok (locally) 

#### Who can review?

Tag maintainers/contributors who might be interested:

  @hwchase17 - project lead
  VectorStores / Retrievers / Memory
  - @dev2049
2023-06-06 22:07:27 -07:00
SvMax
92b87c2fec added support for different types in ResponseSchema class (#5789)
I added support for specifying different types with ResponseSchema
objects:

## Before

`extracted_info = ResponseSchema(name="extracted_info", description="List of extracted information")`

generates the following doc: ```json\n{\n\t\"extracted_info\": string //
List of extracted information}```
This leads GPT to create a JSON with only one string in the specified
field, even if you requested a list in the description.

## Now

`extracted_info = ResponseSchema(name="extracted_info", type="List[string]", description="List of extracted information")`

generates the following doc: ```json\n{\n\t\"extracted_info\":
List[string] // List of extracted information}```
This way the model responds better to the prompt, generating an array of
strings.
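
A minimal sketch of the new field in use (the schema mirrors the example above):

```python
from langchain.output_parsers import ResponseSchema, StructuredOutputParser

schema = ResponseSchema(
    name="extracted_info",
    type="List[string]",  # new; the default remains "string"
    description="List of extracted information",
)
parser = StructuredOutputParser.from_response_schemas([schema])
print(parser.get_format_instructions())  # now advertises List[string]
```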

Tag maintainers/contributors who might be interested:
  Agents / Tools / Toolkits
  @vowelparrot

I don't know who might be interested; I suppose this is a tool, so I
tagged you, vowelparrot. Anyway, it's a minor change and shouldn't impact
any other part of the framework.
2023-06-06 22:00:48 -07:00
Harrison Chase
3954bcf396 WIP: openai settings (#5792)
- [ ] need to test more
- [ ] make sure they aren't saved when serializing
- [ ] do for embeddings
2023-06-06 21:57:58 -07:00
Alex Lee
b7999a9bc1 Add UTF-8 json output support while langchain.debug is set to True. (#5802)
Before:
<img width="984" alt="image"
src="https://github.com/hwchase17/langchain/assets/4317474/2b0807b4-a1d6-4df2-87cc-92b1c8e10534">

After:
<img width="992" alt="image"
src="https://github.com/hwchase17/langchain/assets/4317474/128c2c7d-2ed5-4c95-954d-b0964c83526a">


Thanks in advance.

 @agola11
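
For context, the debug output in question is enabled globally with a one-line toggle; the fix makes the dumped JSON render non-ASCII text directly instead of as escaped sequences:

```python
import langchain

# All chain/LLM inputs and outputs are now printed as readable UTF-8.
langchain.debug = True
```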
2023-06-06 21:56:33 -07:00
kourosh hakhamaneshi
a0d847f636 [Docs][Hotfix] Fix broken links (#5800)

Some links were broken from the previous merge. This PR fixes them.
Tested locally.


Signed-off-by: Kourosh Hakhamaneshi <kourosh@anyscale.com>
2023-06-06 17:17:16 -07:00
Zander Chase
217b5cc72d Base RunEvaluator Chain (#5750)
Clean up a bit and only implement the QA and reference free
implementations from https://github.com/hwchase17/langchain/pull/5618
2023-06-06 16:42:15 -07:00
Lance Martin
4092fd21dc YoutubeAudioLoader and updates to OpenAIWhisperParser (#5772)
This introduces the `YoutubeAudioLoader`, which will load blobs from a
YouTube url and write them. Blobs are then parsed by
`OpenAIWhisperParser()`, as shown in this
[PR](https://github.com/hwchase17/langchain/pull/5580), but we extend
the parser to split audio such that each chunk meets the 25MB OpenAI
size limit. As shown in the notebook, this enables a very simple UX:

```
# Transcribe the video to text
loader = GenericLoader(YoutubeAudioLoader([url],save_dir),OpenAIWhisperParser())
docs = loader.load()
``` 

Tested on full set of Karpathy lecture videos:

```
# Karpathy lecture videos
urls = ["https://youtu.be/VMj-3S1tku0",
        "https://youtu.be/PaCmpygFfXo",
        "https://youtu.be/TCH_1BHY58I",
        "https://youtu.be/P6sfmUTpUmc",
        "https://youtu.be/q8SA3rM6ckI",
        "https://youtu.be/t3YJ5hKiMQ0",
        "https://youtu.be/kCc8FmEb1nY"]

# Directory to save audio files 
save_dir = "~/Downloads/YouTube"
 
# Transcribe the videos to text
loader = GenericLoader(YoutubeAudioLoader(urls,save_dir),OpenAIWhisperParser())
docs = loader.load()
```
2023-06-06 15:15:08 -07:00
Gengliang Wang
2a4b32dee2 Revise DATABRICKS_API_TOKEN as DATABRICKS_TOKEN (#5796)

In the [Databricks
integration](https://python.langchain.com/en/latest/integrations/databricks.html)
and [Databricks
LLM](https://python.langchain.com/en/latest/modules/models/llms/integrations/databricks.html),
we suggested that users set the ENV variable `DATABRICKS_API_TOKEN`.
However, this is inconsistent with the other Databricks libraries. To make
it consistent, this PR changes the variable from `DATABRICKS_API_TOKEN`
to `DATABRICKS_TOKEN`

After the changes, there is no more `DATABRICKS_API_TOKEN` in the docs:
```
$ git grep DATABRICKS_API_TOKEN|wc -l
0

$ git grep DATABRICKS_TOKEN|wc -l
8
```
cc @hwchase17 @dev2049 @mengxr since you have reviewed the previous PRs.
2023-06-06 14:22:49 -07:00
Paul-Emile Brotons
daf3e99b96 fixing from_documents method of the MongoDB Atlas vector store (#5794)
Fixed a bug in the from_documents method: Collection objects do not
implement truth value testing or bool().
@dev2049
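
The underlying PyMongo pitfall, sketched for reference (a `pymongo` Database `db` is assumed):

```python
collection = db["documents"]
# `if not collection:` raises: Collection objects do not implement
# truth value testing or bool(). Compare against None explicitly.
if collection is None:
    raise ValueError("collection must be provided")
```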
2023-06-06 14:22:23 -07:00
Ankush Gola
b177a29d3f support returning run info for llms, chat models and chains (#5666)
returning the run id is important for accessing the run later on
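
A hedged sketch of retrieving it (assuming a chain `chain`; the run info sits under `RUN_KEY`, i.e. `"__run"`):

```python
from langchain.schema import RUN_KEY

outputs = chain("hello", include_run_info=True)
run_info = outputs[RUN_KEY]
print(run_info.run_id)  # usable to look the run up later
```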
2023-06-06 10:07:46 -07:00
Yoann Poupart
65111eb2b3 Attribute support for html tags (#5782)
# What does this PR do?

Change the HTML tags so that a tag with attributes can be found.

## Before submitting

- [x] Tests added
- [x] CI/CD validated

### Who can review?

Anyone in the community is free to review the PR once the tests have
passed. Feel free to tag
members/contributors who may be interested in your PR.
2023-06-06 09:27:37 -07:00
Zander Chase
0cfaa76e45 Set Falsey (#5783)
Seems natural to try to disable logging by setting `MY_VAR=false` rather
than unsetting (especially once you've already set it in the background)
2023-06-06 09:26:38 -07:00
Harrison Chase
2ae2d6cd1d fix ver 191 (#5784) 2023-06-06 09:17:23 -07:00
Zander Chase
204a73c1d9 Use client from LCP-SDK (#5695)
- Remove the client implementation (this breaks backwards compatibility
for existing testers. I could keep the stub in that file if we want, but
not many people are using it yet)
- Add SDK as dependency
- Update the 'run_on_dataset' method to be a function that optionally
accepts a client as an argument
- Remove the langchain plus server implementation (you get it for free
with the SDK now)

We could make the SDK optional for now, but the plan is to use w/in the
tracer so it would likely become a hard dependency at some point.
2023-06-06 06:51:05 -07:00
Harrison Chase
08e2352f7b bump ver 191 (#5766) 2023-06-05 20:54:08 -07:00
berkedilekoglu
f907b62526 Scores are explained in vectorstore docs (#5613)
# Scores in Vector Stores' Docs Are Explained

The following vector stores can return scores with similar documents by
using `similarity_search_with_score`:
- chroma
- docarray_hnsw
- docarray_in_memory
- faiss
- myscale
- qdrant
- supabase
- vectara
- weaviate

However, in the docs, these scores were either not explained at all or
explained in a way that could lead to misunderstandings (e.g., FAISS).
For instance, in the FAISS document, if we consider the score returned by
the function to be a similarity score, we would conclude that a document
returning a higher score is more similar to the source document. However,
since the scores returned by the function are distance scores, smaller
scores actually correspond to more similar documents.

For the libraries other than Vectara, I wrote the scores they use by
investigating from the source libraries. Since I couldn't be certain
about the score metric used by Vectara, I didn't make any changes in its
documentation. The links mentioned in Vectara's documentation became
broken due to updates, so I replaced them with working ones.

VectorStores / Retrievers / Memory
  - @dev2049

my twitter: [berkedilekoglu](https://twitter.com/berkedilekoglu)

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-06-05 20:39:49 -07:00
Adil Ansari
233b52735e feat: Support for Tigris Vector Database for vector search (#5703)
### Changes
- New vector store integration - [Tigris](https://tigrisdata.com)
- Adds [tigrisdb](https://pypi.org/project/tigrisdb/) optional
dependency
- Example notebook demonstrating usage

Fixes #5535 
Closes tigrisdata/tigris-client-python#40
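
A hedged usage sketch based on the PR description (the index name is illustrative; Tigris project credentials are assumed to come from the environment):

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Tigris

store = Tigris.from_texts(
    ["hello tigris"],
    OpenAIEmbeddings(),
    index_name="my_index",  # illustrative
)
docs = store.similarity_search("hello", k=1)
```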

#### Twitter handles
We'd love a shoutout on our
[@TigrisData](https://twitter.com/TigrisData) and
[@adilansari](https://twitter.com/adilansari) twitter handles

#### Who can review?
@dev2049

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-06-05 20:39:16 -07:00
Edrick Da Corte Henriquez
38dabdbb3a Update tutorials.md (#5761)
# Added an overview of LangChain modules

Aimed at introducing newcomers to LangChain's main modules :)

Twitter handle is @edrick_dch 

## Who can review?

@eyurtsev
2023-06-05 20:37:11 -07:00
Ankush Gola
84a46753ab Tracing Group (#5326)
Add context manager to group all runs under a virtual parent

---------

Co-authored-by: vowelparrot <130414180+vowelparrot@users.noreply.github.com>
2023-06-05 19:18:43 -07:00
Ilya
d5b1608216 fix markdown text splitter horizontal lines (#5625)
Fixes #5614 

#### Issue

The `***` combination produces an exception when used as a separator in
`re.split`. Instead, `\*\*\*` should be used for regex expressions.

#### Who can review?

@eyurtsev
2023-06-05 16:40:26 -07:00
Harrison Chase
25487fa5ee Harrison/youtube multi language (#5758)
Co-authored-by: rafly lesmana <raflylesmana111@gmail.com>
2023-06-05 16:38:07 -07:00
Shelby Jenkins
2dcda8a8ac Strips whitespace and \n from loc before filtering urls from sitemap (#5728)
Fixes #5699 



#### Who can review?

Tag maintainers/contributors who might be interested:

@woodworker @LeSphax @johannhartmann

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-06-05 16:33:55 -07:00
Harrison Chase
98dd6d068a cohere retries (#5757)

A minor update to retry Cohere API calls in case of errors using tenacity,
as is done for OpenAI LLMs.

#### Who can review?

@hwchase17, @agola11 




---------

Co-authored-by: Sagar Sapkota <22609549+sagar-spkt@users.noreply.github.com>
2023-06-05 16:28:58 -07:00
M Waleed Kadous
5124c1e0d9 Add aviary support (#5661)
Aviary is an open source toolkit for evaluating and deploying open
source LLMs. You can find out more about it at
[github.com/ray-project/aviary](http://github.com/ray-project/aviary).
You can try it out at [aviary.anyscale.com](http://aviary.anyscale.com).

This code adds support for Aviary in LangChain. To minimize
dependencies, it connects directly to the HTTP endpoint.

The current implementation is not accelerated and uses the default
implementation of `predict` and `generate`.

It includes a test and a simple example. 

@hwchase17 and @agola11 could you have a look at this?

---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-06-05 16:28:42 -07:00
felpigeon
a47c8618ec Add class attribute "return_generated_question" to class "BaseConversationalRetrievalChain" (#5749)

Adding a class attribute "return_generated_question" to class
"BaseConversationalRetrievalChain". If set to `True`, the chain's output
has a key "generated_question" with the question generated by the
sub-chain `question_generator` as the value. This way the generated
question can be logged.
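
A minimal sketch of the new flag (an `llm` and a `retriever` are assumed):

```python
from langchain.chains import ConversationalRetrievalChain

chain = ConversationalRetrievalChain.from_llm(
    llm,
    retriever=retriever,
    return_generated_question=True,  # the new class attribute
)
result = chain({"question": "What is X?", "chat_history": []})
print(result["generated_question"])  # output of the question_generator sub-chain
```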


#### Who can review?

@dev2049 @vowelparrot
2023-06-05 16:10:12 -07:00
Leonid Ganeline
87ad4fc4b2 docs: updated ecosystem/dependents (#5753)
updated `ecosystem/dependents` data (it was updated 2+ weeks ago)

#### Who can review?

@hwchase17 
@eyurtsev
@dev2049
2023-06-05 16:09:55 -07:00
Leonid Ganeline
92a5f00ffb docs: ecosystem/integrations update 5 (#5752)
- added a missed integration to `docs/ecosystem/integrations/`
- updated notebooks to a consistent format: changed titles and file names;
added descriptions

#### Who can review?
 @hwchase17 
 @dev2049
2023-06-05 16:08:55 -07:00
Lance Martin
aea090045b Create OpenAIWhisperParser for generating Documents from audio files (#5580)
# OpenAIWhisperParser

This PR creates a new parser, `OpenAIWhisperParser`, that uses the
[OpenAI Whisper
model](https://platform.openai.com/docs/guides/speech-to-text/quickstart)
to perform transcription of audio files to text (`Documents`). Please
see the notebook for usage.
2023-06-05 15:51:13 -07:00
Hao Chen
a4c9053d40 Integrate Clickhouse as Vector Store (#5650)

#### Description

This PR is mainly to integrate the open source version of ClickHouse as a
vector store, as it is easy for both local development and adoption of
LangChain by enterprises that already have large-scale ClickHouse
deployments.

ClickHouse is an open source real-time OLAP database with full SQL
support and a wide range of functions to assist users in writing
analytical queries. Some of these functions and data structures perform
distance operations between vectors, [enabling ClickHouse to be used as
a vector
database](https://clickhouse.com/blog/vector-search-clickhouse-p1).
Recently added ClickHouse capabilities like [Approximate Nearest
Neighbour (ANN)
indices](https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/annindexes)
support faster approximate matching of vectors and provide a promising
development aimed to further enhance the vector matching capabilities of
ClickHouse.

In LangChain, some ClickHouse-based commercial variant vector stores like
[Chroma](https://github.com/hwchase17/langchain/blob/master/langchain/vectorstores/chroma.py)
and
[MyScale](https://github.com/hwchase17/langchain/blob/master/langchain/vectorstores/myscale.py),
etc. are already integrated, but for some enterprises with large-scale
ClickHouse cluster deployments, it will be more straightforward to
upgrade their existing ClickHouse infra instead of moving to another
similar vector store solution, so we believe it's a valid requirement to
integrate the open source version of ClickHouse as a vector store.

As `clickhouse-connect` is already included by other integrations, this
PR won't include any new dependencies.
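
A hedged usage sketch based on the PR's notebook (connection defaults target a local ClickHouse; the table name is illustrative):

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Clickhouse, ClickhouseSettings

settings = ClickhouseSettings(table="langchain_demo")  # illustrative table
store = Clickhouse.from_texts(["foo", "bar"], OpenAIEmbeddings(), config=settings)
docs = store.similarity_search("foo", k=1)
```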


1. Added a test for the integration:
https://github.com/haoch/langchain/blob/clickhouse/tests/integration_tests/vectorstores/test_clickhouse.py
2. Added an example notebook and document showing its use: 
* Notebook:
https://github.com/haoch/langchain/blob/clickhouse/docs/modules/indexes/vectorstores/examples/clickhouse.ipynb
* Doc:
https://github.com/haoch/langchain/blob/clickhouse/docs/integrations/clickhouse.md


#### Who can review?

Tag maintainers/contributors who might be interested:

 
@hwchase17 @dev2049 Could you please help review?

---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-06-05 13:32:04 -07:00
Gustavo Brian
2f2d27fd82 Error in documentation: Chroma constructor (#5731)
`Chroma("langchain_store", embeddings.embed_query)` must be
`Chroma("langchain_store", embeddings)`
2023-06-05 13:30:58 -07:00
George Geddes
019eb13681 Fix a typo in the documentation for the Slack document loader (#5745)
Fixes a typo I noticed while reading the docs.
2023-06-05 13:30:24 -07:00
Andrew Grangaard
450eb91fe2 Removes unnecessary backslash escaping for backticks in python (#5751)
Fixed a Python deprecation warning:
    DeprecationWarning: invalid escape sequence '`'

Backticks (`) do not have special meaning in Python strings and should
not be escaped.

-- @spazm on twitter

### Who can review:

@nfcampos ported this change from javascript, @hwchase17 wrote the
original STRUCTURED_FORMAT_INSTRUCTIONS,
2023-06-05 13:30:11 -07:00
Daniel Chalef
0551bc90a5 Zep Hybrid Search (#5742)
Zep now supports persisting custom metadata with messages and hybrid
search across both message embeddings and structured metadata. This PR
implements custom metadata and enhancements to the
`ZepChatMessageHistory` and `ZepRetriever` classes to implement this
support.
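
A hedged sketch of metadata-filtered retrieval with the enhanced `ZepRetriever` (session id, server URL, and the filter shape are all illustrative):

```python
from langchain.retrievers import ZepRetriever

retriever = ZepRetriever(session_id="session-123", url="http://localhost:8000")
docs = retriever.get_relevant_documents(
    "travel plans",
    metadata={"where": {"jsonpath": '$[*] ? (@.foo == "bar")'}},  # illustrative filter
)
```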

Tag maintainers/contributors who might be interested:

  VectorStores / Retrievers / Memory
  - @dev2049

---------

Co-authored-by: Daniel Chalef <daniel.chalef@private.org>
2023-06-05 12:59:28 -07:00
Tomaz Bratanic
a0ea6f6b6b Cypher search: Check if generated Cypher is provided in backticks (#5541)
# Check if generated Cypher code is wrapped in backticks

Some LLMs like the VertexAI like to explain how they generated the
Cypher statement and wrap the actual code in three backticks:

![Screenshot from 2023-06-01
08-08-23](https://github.com/hwchase17/langchain/assets/19948365/1d8eecb3-d26c-4882-8f5b-6a9bc7e93690)


I have observed a similar pattern with OpenAI chat models in
conversational settings, where multiple user and assistant messages are
provided to the LLM to generate Cypher statements, and the LLM may want
to apologize for previous steps or explain its thoughts. Interestingly,
both OpenAI and VertexAI wrap the code in three backticks if they are
doing any explaining or apologizing. Checking whether the generated
Cypher is wrapped in backticks seems like low-hanging fruit for expanding
the Cypher search to other LLMs and conversational settings.
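
An illustration of the check described above (a simplified sketch, not necessarily the PR's exact code):

```python
import re

def extract_cypher(text: str) -> str:
    # If the LLM wrapped the statement in triple backticks, keep only
    # the fenced content; otherwise return the text unchanged.
    match = re.search(r"```(.*?)```", text, re.DOTALL)
    return match.group(1) if match else text
```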
2023-06-05 12:48:13 -07:00
Abhijeet Malamkar
1a9ac3b1f9 Adding support to save multiple memories at a time. Cuts save time by … (#5172)
# Adding support to save multiple memories at a time. Cuts save time by
more than half


## Who can review?

Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:

@dev2049
 @vowelparrot

---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-06-05 12:47:48 -07:00
kourosh hakhamaneshi
625717daa8 docs: Added Deploying LLMs into production + a new ecosystem (#4047)
Signed-off-by: Kourosh Hakhamaneshi <kourosh@anyscale.com>
Co-authored-by: Kamil Kaczmarek <kaczmarek.poczta@gmail.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-06-05 12:47:27 -07:00
Ralph Schlosser
74f8e603d9 Addresses GPT4All wrapper model_type attribute issues #5720. (#5743)
Fixes #5720.

A more in-depth discussion is in my comment here:
https://github.com/hwchase17/langchain/issues/5720#issuecomment-1577047018

In a nutshell, there has been a subtle change in the latest version of
GPT4All's Python bindings. The change I submitted yesterday is compatible
with that version; however, that version is as yet unreleased, and thus
the code change breaks LangChain's wrapper under the currently released
version of GPT4All.

This pull request proposes a backwards-compatible solution.
2023-06-05 12:45:29 -07:00
Harrison Chase
d0d89d39ef bump version to 190 (#5704) 2023-06-04 20:04:50 -07:00
mheguy-stingray
b64c39dfe7 top_k and top_p transposed in vertexai (#5673)
Fix transposed properties in vertexai model


Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-06-04 16:59:53 -07:00
Tobias Herbold
3fb0e4872a sqlalchemy MovedIn20Warning declarative_base DEPRECATION fix (#5676)
Fix for the deprecated sqlalchemy declarative_base import:

```
MovedIn20Warning: The ``declarative_base()`` function is now available as sqlalchemy.orm.declarative_base(). (deprecated since: 2.0) (Background on SQLAlchemy 2.0 at: https://sqlalche.me/e/b8d9)
  Base = declarative_base()  # type: Any
```

The import is wrapped in a try/except block to fall back to the old
import if needed.
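
The compatibility pattern in question:

```python
try:
    # SQLAlchemy 2.0 location
    from sqlalchemy.orm import declarative_base
except ImportError:
    # fallback for older SQLAlchemy versions
    from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()
```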

---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-06-04 16:52:52 -07:00
Jens Madsen
8d9e9e013c refactor: extract token text splitter function (#5179)
# Token text splitter for sentence transformers

The current TokenTextSplitter only works with OpenAI models via the
`tiktoken` package. This is not clear from the name `TokenTextSplitter`.
In this (first) PR, a token-based text splitter for sentence-transformer
models is added. In the future I think we should work towards injecting
a tokenizer into the TokenTextSplitter to make it more flexible.
Could perhaps be reviewed by @dev2049
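
A hedged usage sketch of the splitter added here (the model name defaults to a sentence-transformers model):

```python
from langchain.text_splitter import SentenceTransformersTokenTextSplitter

splitter = SentenceTransformersTokenTextSplitter(chunk_overlap=0)
chunks = splitter.split_text("Lorem ipsum dolor sit amet " * 100)
# Token counting uses the model's own tokenizer.
print(splitter.count_tokens(text="Lorem ipsum"))
```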

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-06-04 14:41:44 -07:00
Nathan Azrak
26ec845921 Raise an exception in MRKL and Chat Output Parsers if parsing text which contains both an action and a final answer (#5609)
Raises exception if OutputParsers receive a response with both a valid
action and a final answer

Currently, if an OutputParser receives a response which includes both an
action and a final answer, it returns a FinalAnswer object. This allows
the parser to accept responses which propose an action and hallucinate
an answer without the action being parsed or taken by the agent.

This PR changes the logic to:
1. store a variable checking whether a response contains the
`FINAL_ANSWER_ACTION` (this is the easier condition to check).
2. store a variable checking whether the response contains a valid
action
3. if both are present, raise a new exception stating that both are
present
4. if an action is present, return an AgentAction
5. if an answer is present, return an AgentAnswer
6. if neither is present, raise the relevant exception based around the
action format (these have been kept consistent with the prior exception
messages)

Disclaimer:
* Existing mock data included strings which did include an action and an
answer. This might indicate that prioritising returning AgentAnswer was
always correct, and I am patching out desired behaviour? @hwchase17 to
advise. Curious if there are allowed cases where this is not
hallucinating, and we do want the LLM to output an action which isn't
taken.
* I have not passed `send_to_llm` through this new exception

Fixes #5601 

## Who can review?

Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:

@hwchase17 - project lead
@vowelparrot
2023-06-04 14:40:49 -07:00
Lucas Rodrigues
c112d7334d Update MongoDBChatMessageHistory to create an index on SessionId (#5632)
All the queries to the database are done based on the SessionId
property; this will optimize how Mongo retrieves all messages from a
session.
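
The underlying optimization, sketched with `pymongo` directly (the database handle and collection name are illustrative):

```python
# An index on SessionId makes per-session message lookups fast.
collection = db["message_store"]
collection.create_index("SessionId")
```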

#### Who can review?

Tag maintainers/contributors who might be interested:
@dev2049
2023-06-04 14:39:56 -07:00
Jason Weill
6c11f94013 Retitles Bedrock doc to appear in correct alphabetical order in site nav (#5639)

Fixes #5638. Retitles "Amazon Bedrock" page to "Bedrock" so that the
Integrations section of the left nav is properly sorted in alphabetical
order.

2023-06-04 14:39:25 -07:00
Will Smith
6e25e65085 SQL agent : Improved prompt engineering prevents agent guessing database column names. (#5671)
@vowelparrot:

Minor change to the SQL agent:

Tells the agent to introspect the schema of the most relevant tables. I
found this dramatically decreases the chance that the agent wastes time
guessing column names.
2023-06-04 14:39:00 -07:00
Nuhman Pk
8f98592ac9 Added Dependencies Status, Open issues and releases badges in Readme.md (#5681)
[![Dependency
Status](https://img.shields.io/librariesio/github/hwchase17/langchain)](https://libraries.io/github/hwchase17/langchain)
[![Open
Issues](https://img.shields.io/github/issues-raw/hwchase17/langchain)](https://github.com/hwchase17/langchain/issues)
[![Release
Notes](https://img.shields.io/github/release/hwchase17/langchain)](https://github.com/hwchase17/langchain/releases)
2023-06-04 14:30:52 -07:00
Harrison Chase
b9040669a0 Harrison/pipeline prompt (#5540)
idea is to make prompts more composable
2023-06-04 14:29:37 -07:00
George Roberts
647210a4b9 Add args_schema to google_places tool (#5680)
Tiny change to actually add the args_schema to the tool.

@vowelparrot
2023-06-04 14:28:46 -07:00
Ralph Schlosser
8fea0529c1 This fixes issue #5651 - GPT4All wrapper loading issue (#5657)
Fixes #5651 

Small typo in wrapper code. Note the `model_type` parameter is currently
unused by GPT4All.

https://github.com/hwchase17/langchain/issues/5651

2023-06-04 07:21:16 -07:00
Jiayao Yu
6a3ceaa377 Support similarity_score_threshold retrieval with Chroma (#5655)
Fixes https://github.com/hwchase17/langchain/issues/5067

Verified the following code now works correctly:
```
db = Chroma(persist_directory=index_directory(index_name), embedding_function=embeddings)
retriever = db.as_retriever(search_type="similarity_score_threshold", search_kwargs={"score_threshold": 0.4})
docs = retriever.get_relevant_documents(query)
```
2023-06-03 16:57:00 -07:00
Hao Chen
3e45b83065 Improve Error Messaging for APOC Procedure Failure in Neo4jGraph (#5547)
## Improve Error Messaging for APOC Procedure Failure in Neo4jGraph

This commit revises the error message provided when the
'apoc.meta.data()' procedure fails. Previously, the message simply
instructed the user to install the APOC plugin in Neo4j. The new error
message is more specific.

Also removed an unnecessary newline in the Cypher statement variable:
`node_properties_query`.

Fixes #5545 

## Who can review?
  - @vowelparrot
  - @dev2049
2023-06-03 16:56:39 -07:00
Ricardo Reis
33ea606f45 Update youtube.py - Fix metadata validation error in YoutubeLoader (#5479)
This commit addresses a ValueError occurring when the YoutubeLoader
class tries to add datetime metadata from a YouTube video's publish
date. The error was happening because the ChromaDB metadata validation
only accepts str, int, or float data types.

In the `_get_video_info` method of the `YoutubeLoader` class, the
publish date retrieved from the YouTube video was of datetime type. This
commit fixes the issue by converting the datetime object to a string
before adding it to the metadata dictionary.

Additionally, this commit introduces error handling in the
`_get_video_info` method to ensure that all metadata fields have valid
values. If a metadata field is found to be None, a default value is
assigned. This prevents potential errors during metadata validation when
metadata fields are None.

The file modified in this commit is youtube.py.
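
The shape of the conversion described above, sketched for reference (the variable names are illustrative, not the loader's actual code):

```python
# ChromaDB metadata validation accepts only str, int, or float, so the
# datetime publish date is stringified and None fields get defaults.
publish_date = video_info.get("publish_date")
metadata["publish_date"] = str(publish_date) if publish_date is not None else ""
```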



---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-06-03 16:56:17 -07:00
Shuqian
5af2c51e78 refactor: BaseStringMessagePromptTemplate from_template method (#5332)
# refactor BaseStringMessagePromptTemplate from_template method 

Refactor the `from_template` method of the
`BaseStringMessagePromptTemplate` class to allow passing keyword
arguments to the `from_template` method of `PromptTemplate`.
Enable the usage of arguments like `template_format`.
In my scenario, I intend to utilize Jinja2 for formatting the human
message prompt in the chat template.
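
A hedged usage sketch, assuming the kwargs are forwarded to `PromptTemplate.from_template` as described:

```python
from langchain.prompts import HumanMessagePromptTemplate

# template_format="jinja2" is now forwarded to the underlying PromptTemplate
human_prompt = HumanMessagePromptTemplate.from_template(
    "Tell me a joke about {{ topic }}.", template_format="jinja2"
)
message = human_prompt.format(topic="chickens")
print(message.content)  # -> "Tell me a joke about chickens."
```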


---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-06-03 16:55:58 -07:00
mbchang
d3bdb8ea6d FileCallbackHandler (#5589)
# like
[StdoutCallbackHandler](https://github.com/hwchase17/langchain/blob/master/langchain/callbacks/stdout.py),
but writes to a file

When running experiments I have found myself wanting to log the outputs
of my chains in a more lightweight way than using WandB tracing. This PR
contributes a callback handler that writes to file what
`StdoutCallbackHandler` would print.
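
A minimal usage sketch (import path assumed from the description above):

```python
from langchain import LLMChain, OpenAI, PromptTemplate
from langchain.callbacks import FileCallbackHandler

# Everything StdoutCallbackHandler would print goes to the log file instead.
handler = FileCallbackHandler("chain_run.log")

prompt = PromptTemplate.from_template("1 + {number} = ")
chain = LLMChain(llm=OpenAI(), prompt=prompt, callbacks=[handler], verbose=True)
chain.run(number=2)
```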


## Example Notebook


See the included `filecallbackhandler.ipynb` notebook for usage. Would
it be better to include this notebook under `modules/callbacks` or under
`integrations/`?

![image](https://github.com/hwchase17/langchain/assets/6439365/c624de0e-343f-4eab-a55b-8808a887489f)


## Who can review?

Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:

@agola11

2023-06-03 16:48:48 -07:00
rajib
1c51d3db0f Created fix for 5475 (#5659)
Created fix for #5475.
Currently in PGVector we do not have any function that returns the instance of an existing store; `from_documents` always adds embeddings and then returns the store. This fix adds a function that returns the instance of an existing store.

Also changed the Jupyter example for PGVector to show the use of the new function (see the sketch below).
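
A hedged sketch of the new entry point; the name `from_existing_index` and its parameters are assumptions based on the description above:

```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores.pgvector import PGVector

CONNECTION_STRING = "postgresql+psycopg2://user:pass@localhost:5432/vectordb"  # placeholder

# Returns the existing store without re-adding embeddings, unlike from_documents.
store = PGVector.from_existing_index(
    embedding=OpenAIEmbeddings(),
    collection_name="state_of_the_union",
    connection_string=CONNECTION_STRING,
)
docs = store.similarity_search("What did the president say?", k=4)
```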


Fixes #5475


#### Who can review?
@dev2049
@hwchase17 


---------

Co-authored-by: rajib76 <rajib76@yahoo.com>
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-06-03 16:47:52 -07:00
Michael Landis
475007d63a fix: correct momento chat history notebook typo and title (#5646)
This PR corrects a minor typo in the Momento chat message history
notebook and also expands the title from "Momento" to "Momento Chat
History", inline with other chat history storage providers.



#### Who can review?

cc @dev2049 who reviewed the original integration
2023-06-03 16:39:27 -07:00
Paul-Emile Brotons
92f218207b removing client+namespace in favor of collection (#5610)
Removing `client`+`namespace` in favor of `collection`, for an easier
instantiation and to be consistent with the TypeScript library.
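
A hedged before/after sketch of the simplified instantiation (class and import path assumed):

```python
from pymongo import MongoClient
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import MongoDBAtlasVectorSearch

client = MongoClient("mongodb+srv://user:pass@cluster.example.net")  # placeholder URI
collection = client["langchain_db"]["documents"]

# Pass a pymongo Collection directly instead of a client plus a
# "db.collection" namespace string.
vectorstore = MongoDBAtlasVectorSearch(collection, OpenAIEmbeddings())
```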

@dev2049
2023-06-03 16:27:31 -07:00
Harrison Chase
ad09367a92 Harrison/pubmed integration (#5664)
Co-authored-by: younis basher <71520361+younis-ba@users.noreply.github.com>
Co-authored-by: Younis Bashir <younis@omicmd.com>
2023-06-03 16:25:28 -07:00
Harrison Chase
9921f8cc3a Harrison/update azure nb (#5665)
Co-authored-by: NEWTON MALLICK <38786893+N-E-W-T-O-N@users.noreply.github.com>
2023-06-03 16:25:08 -07:00
C.J. Jameson
4e71a1702b nit: pgvector python example notebook, fix variable reference (#5595)
# Your PR Title (What it does)

Fixes the pgvector python example notebook : one of the variables was
not referencing anything

## Before submitting

## Who can review?

Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:

VectorStores / Retrievers / Memory
  - @dev2049
2023-06-03 15:29:34 -07:00
Leonid Ganeline
b201cfaa0f docs ecosystem/integrations update 4 (#5590)
# docs `ecosystem/integrations` update 4

Added missed integrations. Fixed inconsistencies. 

## Who can review?

@hwchase17 
@dev2049
2023-06-03 15:29:03 -07:00
Davis Chase
ae3611730a handle single arg to and/or (#5637)
@ryderwishart @eyurtsev thoughts on handling this in the parser itself?
related to #5570
2023-06-03 15:18:46 -07:00
khallbobo
934319fc28 Add parameters to send_message() call for vertexai chat models (PaLM2) (#5566)
# Ensure parameters are used by vertexai chat models (PaLM2)

The current version of the google aiplatform contains a bug where
parameters for a chat model are not used as intended.

See https://github.com/googleapis/python-aiplatform/issues/2263

Params can be passed both to start_chat() and send_message(); however,
the parameters passed to start_chat() will not be used if send_message()
is called without the overrides. This is due to the defaults in
send_message() being global values rather than None (there is code in
send_message() which would use the params from start_chat() if the param
passed to send_message() evaluates to False, but that won't happen as
the defaults are global values).
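
A hedged sketch of the workaround (parameter names from the public Vertex AI chat API):

```python
from vertexai.preview.language_models import ChatModel

chat_model = ChatModel.from_pretrained("chat-bison@001")
chat = chat_model.start_chat()

# Pass the generation parameters to send_message() itself; with the library
# bug, values given only to start_chat() are silently ignored.
response = chat.send_message(
    "Hello!", temperature=0.2, max_output_tokens=256, top_k=40, top_p=0.95
)
```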

Fixes # 5531

@hwchase17
@agola11
2023-06-03 15:17:38 -07:00
UmerHA
44ad9628c9 QuickFix for FinalStreamingStdOutCallbackHandler: Ignore new lines & white spaces (#5497)
# Make FinalStreamingStdOutCallbackHandler more robust by ignoring new
lines & white spaces

`FinalStreamingStdOutCallbackHandler` doesn't work out of the box with
`ChatOpenAI`, as it tokenizes slightly differently than `OpenAI`. The
response of `OpenAI` contains the tokens `["\nFinal", " Answer", ":"]`
while `ChatOpenAI` contains `["Final", " Answer", ":"]`.

This PR makes `FinalStreamingStdOutCallbackHandler` more robust by
ignoring new lines & white spaces when determining whether the answer
prefix has been reached.
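
A minimal sketch of the whitespace-insensitive check (helper name is illustrative):

```python
def prefix_reached(last_tokens: list[str], prefix_tokens: list[str]) -> bool:
    """True once the stripped tail of recent tokens matches the stripped prefix."""
    strip = lambda toks: [t.strip() for t in toks if t.strip()]
    cleaned, target = strip(last_tokens), strip(prefix_tokens)
    return len(cleaned) >= len(target) and cleaned[-len(target):] == target

# OpenAI-style and ChatOpenAI-style tokenizations now both match:
assert prefix_reached(["\nFinal", " Answer", ":"], ["Final", " Answer", ":"])
assert prefix_reached(["Final", " Answer", ":"], ["Final", " Answer", ":"])
```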

Fixes #5433

## Who can review?
Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:
Tracing / Callbacks
- @agola11

Twitter: [@UmerHAdil](https://twitter.com/@UmerHAdil) | Discord:
RicChilligerDude#7589
2023-06-03 15:05:58 -07:00
Nathan Azrak
1f4abb265a Adds the option to pass the original prompt into the AgentExecutor for PlanAndExecute agents (#5401)
# Adds the option to pass the original prompt into the AgentExecutor for
PlanAndExecute agents

This PR allows the user to optionally specify that they wish for the
original prompt/objective to be passed into the Executor agent used by
the PlanAndExecute agent. This solves a potential problem where the plan
is formed referring to some context contained in the original prompt,
but which is not included in the current prompt.

Currently, the prompt format given to the Executor is:
```
System: Respond to the human as helpfully and accurately as possible. You have access to the following tools:

<Tool and Action Description>

<Output Format Description>

Begin! Reminder to ALWAYS respond with a valid json blob of a single action. Use tools if necessary. Respond directly if appropriate. Format is Action:```$JSON_BLOB```then Observation:.
Thought:
Human: <Previous steps>

<Current step>
```

This PR changes the final part after `Human:` to optionally insert the
objective:
```
Human: <objective>

<Previous steps>

<Current step>
```

I have given a specific example in #5400 where the context of a database
path is lost, since the plan refers to the "given path".

The PR has been linted and formatted. So that existing behaviour is not
changed, I have defaulted the argument to `False` and added it as the
last argument in the signature, so it does not cause issues for any
users passing args positionally as opposed to using keywords.
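
A hedged usage sketch; the keyword name `include_task_in_prompt` is an assumption based on this description:

```python
from langchain import OpenAI
from langchain.experimental.plan_and_execute import (
    PlanAndExecute,
    load_agent_executor,
    load_chat_planner,
)

llm = OpenAI(temperature=0)
tools = []  # placeholder tool list

planner = load_chat_planner(llm)
# Defaults to False, so existing behaviour is unchanged unless opted in.
executor = load_agent_executor(llm, tools, verbose=True, include_task_in_prompt=True)
agent = PlanAndExecute(planner=planner, executor=executor)
```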

Happy to take any feedback or make required changes! 

Fixes #5400

## Who can review?

Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:
@vowelparrot

---------

Co-authored-by: Nathan Azrak <nathan.azrak@gmail.com>
2023-06-03 14:59:09 -07:00
Felipe Ferreira
ae2cf1f598 Implements support for Personal Access Token Authentication in the ConfluenceLoader (#5385)
# Implements support for Personal Access Token Authentication in the
ConfluenceLoader

Fixes #5191

Implements a new optional parameter for the ConfluenceLoader: `token`.
This allows the use of personal access token authentication with the
on-prem server version of Confluence.
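
A hedged usage sketch of the new parameter:

```python
from langchain.document_loaders import ConfluenceLoader

# Authenticate to an on-prem Confluence server with a Personal Access Token
# via the new optional `token` parameter (values are placeholders).
loader = ConfluenceLoader(url="https://confluence.example.com", token="<MY_PAT>")
documents = loader.load(space_key="ENG", limit=50)
```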

## Who can review?

Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:
@eyurtsev @Jflick58 

Twitter Handle: felipe_yyc

---------

Co-authored-by: Felipe <feferreira@ea.com>
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-06-03 14:57:49 -07:00
Gardner Bickford
b81f98b8a6 Update confluence.py to return spaces between elements (#5383)
# Update confluence.py to return spaces between elements like headers
and links.

Please see
https://stackoverflow.com/questions/48913975/how-to-return-nicely-formatted-text-in-beautifulsoup4-when-html-text-is-across-m

Given:

```html
<address>
        183 Main St<br>East Copper<br>Massachusetts<br>U S A<br>
        MA 01516-113
    </address>
```

The document loader currently returns:

```
'183 Main StEast CopperMassachusettsU S A        MA 01516-113'
```

After this change, the document loader will return:

```
183 Main St East Copper Massachusetts U S A MA 01516-113
```
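
For reference, the behaviour can be reproduced with BeautifulSoup's separator argument (a minimal sketch, assuming that is the mechanism used):

```python
from bs4 import BeautifulSoup

html = "<address>183 Main St<br>East Copper<br>Massachusetts</address>"

# Without a separator, sibling text nodes are concatenated directly.
print(BeautifulSoup(html, "html.parser").get_text())
# 183 Main StEast CopperMassachusetts

# With a separator, a space is inserted between elements.
print(BeautifulSoup(html, "html.parser").get_text(" ", strip=True))
# 183 Main St East Copper Massachusetts
```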


@eyurtsev would you prefer this to be an option that can be passed in?
2023-06-03 14:57:25 -07:00
Zeeland
b72401b47b perf: reduce DB query error rate (#5339)
# Reduce DB query error rate

If you use the SQL agent of `SQLDatabaseToolkit` to query data, it is prone
to errors in the queried fields and often uses fields that do not exist in
the database tables. The existing prompt does not effectively make the
agent aware that the fields it queried are wrong. We need to improve the
prompt so that the agent realizes it has queried non-existent fields,
first uses `schema_sql_db` to look up the actual fields of the table in
the database, and only then uses `query_sql_db` to run the query.

Here is a demo from my project that shows the problem.

**Original Agent**

```python
def create_mysql_kit():
    db = SQLDatabase.from_uri("mysql+pymysql://xxxxxxx")
    llm = OpenAI(temperature=0)

    toolkit = SQLDatabaseToolkit(db=db, llm=llm)
    agent_executor = create_sql_agent(
        llm=OpenAI(temperature=0),
        toolkit=toolkit,
        verbose=True
    )
    agent_executor.run("Who are the users of sysuser in this system? Tell me the username of all users")


if __name__ == '__main__':
    create_mysql_kit()

```

**original output**

```
> Entering new AgentExecutor chain...
Action: list_tables_sql_db
Action Input: ""
Observation: app_sysrole_menus, app_bimfacemodel, app_project_users, app_measuringpointdata, auth_user, auth_user_groups, django_apscheduler_djangojobexecution, app_project, app_elementpoint, django_apscheduler_djangojob, django_content_type, app_sysrole, django_admin_log, app_bimfaceaccount, app_measuringpoint_warning_thresholds, app_measuringpoint, app_company, auth_group_permissions, app_sysuser, app_sysuser_companies, app_sysmenu, app_datawarningthreshold, auth_group, auth_permission, app_datawarningrecord, auth_user_user_permissions, app_bimfaceaccount_bimface_models, django_migrations, app_measuringitem, django_session
Thought: I should query the app_sysuser table to get the users in the system.
Action: query_sql_db
Action Input: SELECT username FROM app_sysuser LIMIT 10;
Observation: Error: (pymysql.err.OperationalError) (1054, "Unknown column 'username' in 'field list'")
[SQL: SELECT username FROM app_sysuser LIMIT 10;]
(Background on this error at: https://sqlalche.me/e/20/e3q8)
Thought: I should double check my query before executing it.
Action: query_checker_sql_db
Action Input: SELECT username FROM app_sysuser LIMIT 10;
Observation: 

SELECT username FROM app_sysuser LIMIT 10;
Thought: The query looks correct, so I should execute it.
Action: query_sql_db
Action Input: SELECT username FROM app_sysuser LIMIT 10;
Observation: Error: (pymysql.err.OperationalError) (1054, "Unknown column 'username' in 'field list'")
[SQL: SELECT username FROM app_sysuser LIMIT 10;]
(Background on this error at: https://sqlalche.me/e/20/e3q8)
Thought: I should double check my query before executing it.
Action: query_checker_sql_db
Action Input: SELECT name FROM app_sysuser LIMIT 10;
Observation: 

SELECT name FROM app_sysuser LIMIT 10;
Thought: The query looks correct, so I should execute it.
Action: query_sql_db
Action Input: SELECT name FROM app_sysuser LIMIT 10;
Observation: Error: (pymysql.err.OperationalError) (1054, "Unknown column 'name' in 'field list'")
[SQL: SELECT name FROM app_sysuser LIMIT 10;]
(Background on this error at: https://sqlalche.me/e/20/e3q8)
Thought: I should double check my query before executing it.
Action: query_checker_sql_db
Action Input: SELECT id, username FROM app_sysuser LIMIT 10;
Observation: 

SELECT id, username FROM app_sysuser LIMIT 10;
Thought: The query looks correct, so I should execute it.
Action: query_sql_db
Action Input: SELECT id, username FROM app_sysuser LIMIT 10;
Observation: Error: (pymysql.err.OperationalError) (1054, "Unknown column 'id' in 'field list'")
[SQL: SELECT id, username FROM app_sysuser LIMIT 10;]
(Background on this error at: https://sqlalche.me/e/20/e3q8)
Thought: I should double check my query before executing it.
Action: query_checker_sql_db
Action Input: SELECT user_id, username FROM app_sysuser LIMIT 10;
Observation: 

SELECT user_id, username FROM app_sysuser WHERE user_id IS NOT NULL AND user_id BETWEEN 0 AND 10 UNION ALL SELECT user_id, username FROM app_sysuser WHERE username IS NOT NULL ORDER BY user_id ASC LIMIT 10;
Thought: The query looks correct, so I should execute it.
Action: query_sql_db
Action Input: SELECT user_id, username FROM app_sysuser WHERE user_id IS NOT NULL AND user_id BETWEEN 0 AND 10 UNION ALL SELECT user_id, username FROM app_sysuser WHERE username IS NOT NULL ORDER BY user_id ASC LIMIT 10;
Observation: Error: (pymysql.err.OperationalError) (1054, "Unknown column 'username' in 'field list'")
[SQL: SELECT user_id, username FROM app_sysuser WHERE user_id IS NOT NULL AND user_id BETWEEN 0 AND 10 UNION ALL SELECT user_id, username FROM app_sysuser WHERE username IS NOT NULL ORDER BY user_id ASC LIMIT 10;]
(Background on this error at: https://sqlalche.me/e/20/e3q8)
Thought: I should double check my query before executing it.
Action: query_checker_sql_db
Action Input: SELECT user_id, user_name FROM app_sysuser LIMIT 10;
Observation: 

SELECT user_id, user_name FROM app_sysuser WHERE user_id NOT IN (NULL) AND user_name BETWEEN 'A' AND 'Z' UNION ALL SELECT user_id, user_name FROM app_sysuser WHERE user_id IS NOT NULL AND user_name LIKE '%a%' LIMIT 10;
Thought: The query looks correct, so I should execute it.
Action: query_sql_db
Action Input: SELECT user_id, user_name FROM app_sysuser WHERE user_id NOT IN (NULL) AND user_name BETWEEN 'A' AND 'Z' UNION ALL SELECT user_id, user_name FROM app_sysuser WHERE user_id IS NOT NULL AND user_name LIKE '%a%' LIMIT 10;
Observation: []
Thought: I now know the final answer.
Final Answer: There are no users in the sysuser table.

> Finished chain.

Process finished with exit code 0
```

As you can see, the agent keeps using incorrect SQL to query the data.
In fact, the username field of my `app_sysuser` table is `user_name`,
not the `username` that the agent assumes.

**Optimized agent output**

```
> Entering new AgentExecutor chain...
Action: list_tables_sql_db
Action Input: ""
Observation: app_project_users, django_content_type, app_measuringpoint_warning_thresholds, app_bimfaceaccount_bimface_models, django_session, app_sysuser, django_migrations, app_bimfacemodel, app_elementpoint, app_measuringpoint, django_apscheduler_djangojobexecution, auth_permission, app_sysuser_companies, app_measuringpointdata, app_measuringitem, app_sysrole_menus, django_admin_log, auth_group, django_apscheduler_djangojob, app_sysmenu, app_project, app_bimfaceaccount, app_datawarningthreshold, app_datawarningrecord, auth_user_groups, auth_user_user_permissions, auth_group_permissions, auth_user, app_company, app_sysrole
Thought: I should query the app_sysuser table to get the usernames of all the users.
Action: query_sql_db
Action Input: SELECT username FROM app_sysuser
Observation: Error: (pymysql.err.OperationalError) (1054, "Unknown column 'username' in 'field list'")
[SQL: SELECT username FROM app_sysuser]
(Background on this error at: https://sqlalche.me/e/20/e3q8)
Thought: I should use the schema_sql_db tool to get the correct table fields.
Action: schema_sql_db
Action Input: app_sysuser
Observation: 
CREATE TABLE app_sysuser (
	user_id INTEGER NOT NULL AUTO_INCREMENT, 
	user_account VARCHAR(50) COLLATE utf8mb4_general_ci NOT NULL, 
	user_password VARCHAR(255) COLLATE utf8mb4_general_ci NOT NULL, 
	user_name VARCHAR(50) COLLATE utf8mb4_general_ci NOT NULL, 
	user_avatar VARCHAR(255) COLLATE utf8mb4_general_ci, 
	user_email VARCHAR(254) COLLATE utf8mb4_general_ci, 
	user_phone VARCHAR(20) COLLATE utf8mb4_general_ci, 
	user_create_time DATETIME(6) NOT NULL, 
	user_update_time DATETIME(6) NOT NULL, 
	user_status SMALLINT, 
	role_id_id INTEGER, 
	PRIMARY KEY (user_id), 
	CONSTRAINT app_sysuser_role_id_id_51b50f39_fk_app_sysrole_role_id FOREIGN KEY(role_id_id) REFERENCES app_sysrole (role_id)
)COLLATE utf8mb4_general_ci DEFAULT CHARSET=utf8mb4 ENGINE=InnoDB

/*
3 rows from app_sysuser table:
user_id	user_account	user_password	user_name	user_avatar	user_email	user_phone	user_create_time	user_update_time	user_status	role_id_id
xxxxxxxxxxxxxx
*/
Thought: I should query the app_sysuser table to get the usernames of all the users.
Action: query_sql_db
Action Input: SELECT user_account FROM app_sysuser LIMIT 10
Observation: [('baiyun',), ('eatrice',), ('lisi',), ('pingxiang',), ('wangwu',), ('zeeland',), ('zsj',), ('zzw',)]
Thought: I now know the final answer
Final Answer: The usernames of the users in the sysuser table are baiyun, eatrice, lisi, pingxiang, wangwu, zeeland, zsj, and zzw.

> Finished chain.

Process finished with exit code 0

```

I have tested about 10 related prompts and they all work properly, with
a much lower error rate than before.


## Who can review?

@vowelparrot

---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-06-03 14:55:56 -07:00
mbchang
ce6dbe41a9 minor refactor GenerativeAgentMemory (#5315)
# minor refactor of GenerativeAgentMemory


- refactor `format_memories_detail` to be more reusable
- modified prompts for getting topics for reflection and for generating
insights
- update `characters.ipynb` to reflect changes


## Who can review?

@vowelparrot
@hwchase17
@dev2049
2023-06-03 14:53:14 -07:00
Leonid Ganeline
95c6ed0568 docs: modules pages simplified (#5116)
# docs: modules pages simplified

Fixed issue #5627.

Merged several repetitive sections in the `modules` pages. Some texts
that were hard to understand were also simplified.


## Who can review?

@hwchase17
@dev2049
2023-06-03 14:44:32 -07:00
Chandan Routray
bc875a9df1 Fixed multi input prompt for MapReduceChain (#4979)
# Fixed multi input prompt for MapReduceChain

Added `kwargs` support for the inner chains of `MapReduceChain` via the
`from_params` method.
Currently the `from_params` method of initialising `MapReduceChain`
doesn't work if the prompt has multiple inputs. This happens because it
uses `StuffDocumentsChain` and `MapReduceDocumentsChain` underneath, both
of which require specifying `document_variable_name` if the `prompt` of
their `llm_chain` has more than one input.

With this PR, I have added support for passing their respective `kwargs`
via the `from_params` method.
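
A hedged sketch; the `reduce_chain_kwargs`/`combine_chain_kwargs` names are assumptions based on this description:

```python
from langchain import OpenAI, PromptTemplate
from langchain.chains.mapreduce import MapReduceChain
from langchain.text_splitter import CharacterTextSplitter

# A prompt with more than one input, which previously broke from_params.
prompt = PromptTemplate(
    input_variables=["text", "style"],
    template="Summarize the following in a {style} style:\n\n{text}",
)
chain = MapReduceChain.from_params(
    llm=OpenAI(temperature=0),
    prompt=prompt,
    text_splitter=CharacterTextSplitter(),
    # Forwarded to the inner chains so they know which input holds the docs.
    reduce_chain_kwargs={"document_variable_name": "text"},
    combine_chain_kwargs={"document_variable_name": "text"},
)
```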

## Fixes https://github.com/hwchase17/langchain/issues/4752

## Who can review? 
@dev2049 @hwchase17 @agola11

---------

Co-authored-by: imeckr <chandanroutray2012@gmail.com>
2023-06-03 14:41:03 -07:00
Matt Robinson
a97e4252e3 feat: add UnstructuredExcelLoader for .xlsx and .xls files (#5617)
# Unstructured Excel Loader

Adds an `UnstructuredExcelLoader` class for `.xlsx` and `.xls` files.
Works with `unstructured>=0.6.7`. A plain text representation of the
Excel file will be available under the `page_content` attribute in the
doc. If you use the loader in `"elements"` mode, an HTML representation
of the Excel file will be available under the `text_as_html` metadata
key. Each sheet in the Excel document is its own document.

### Testing

```python
from langchain.document_loaders import UnstructuredExcelLoader

loader = UnstructuredExcelLoader(
    "example_data/stanley-cups.xlsx",
    mode="elements"
)
docs = loader.load()
```

## Who can review?

@hwchase17
@eyurtsev
2023-06-03 12:44:12 -07:00
Leonid Ganeline
9a7488a5ce fix import issue (#5636)
# fix for the import issue

Added document loader classes from [`figma`, `iugu`, `onedrive_file`] to
the `document_loaders/__init__.py` imports.
Also sorted `__all__`.

Fixed issue #5623.
2023-06-02 14:58:41 -07:00
Zander Chase
20ec1173f4 Update Tracer Auth / Reduce Num Calls (#5517)
Update the session creation and calls

---------

Co-authored-by: Ankush Gola <ankush.gola@gmail.com>
2023-06-02 12:13:56 -07:00
Sean Morgan
949729ff5c Fix bedrock llm boto3 client instantiation (#5629)
Same issue as https://github.com/hwchase17/langchain/pull/5574
2023-06-02 12:04:49 -07:00
Caleb Ellington
c5a7a85a4e fix chroma update_document to embed entire documents, fixes a character-wise embedding bug (#5584)
# Chroma update_document full document embeddings bugfix

Chroma `update_document` takes a single document, but treats the
`page_content` string of that document as a list when getting the new
document embedding.

This is a two-fold problem: the resulting embedding for the updated
document is incorrect (it's only an embedding of the first character of
the new `page_content`), and the embedding function is called once for
every character in the new `page_content` string, using many tokens in
the process.
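
The difference is easy to reproduce (a minimal sketch using a fake embedder):

```python
from langchain.embeddings.fake import FakeEmbeddings

embedding_function = FakeEmbeddings(size=4)
text = "stanley cups"

# Passing the raw string iterates it character by character: one embedding
# per character (12 here), and one embedding per character billed by a real
# provider.
per_char = embedding_function.embed_documents(text)
# Passing a one-element list embeds the whole document once, as intended.
whole_doc = embedding_function.embed_documents([text])
print(len(per_char), len(whole_doc))  # 12 1
```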

Fixes #5582


Co-authored-by: Caleb Ellington <calebellington@Calebs-MBP.hsd1.ca.comcast.net>
2023-06-02 11:12:48 -07:00
277 changed files with 12058 additions and 4292 deletions

View File

@@ -2,6 +2,7 @@
⚡ Building applications with LLMs through composability ⚡
[![Release Notes](https://img.shields.io/github/release/hwchase17/langchain)](https://github.com/hwchase17/langchain/releases)
[![lint](https://github.com/hwchase17/langchain/actions/workflows/lint.yml/badge.svg)](https://github.com/hwchase17/langchain/actions/workflows/lint.yml)
[![test](https://github.com/hwchase17/langchain/actions/workflows/test.yml/badge.svg)](https://github.com/hwchase17/langchain/actions/workflows/test.yml)
[![linkcheck](https://github.com/hwchase17/langchain/actions/workflows/linkcheck.yml/badge.svg)](https://github.com/hwchase17/langchain/actions/workflows/linkcheck.yml)
@@ -12,6 +13,8 @@
[![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/hwchase17/langchain)
[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/hwchase17/langchain)
[![GitHub star chart](https://img.shields.io/github/stars/hwchase17/langchain?style=social)](https://star-history.com/#hwchase17/langchain)
[![Dependency Status](https://img.shields.io/librariesio/github/hwchase17/langchain)](https://libraries.io/github/hwchase17/langchain)
[![Open Issues](https://img.shields.io/github/issues-raw/hwchase17/langchain)](https://github.com/hwchase17/langchain/issues)
Looking for the JS/TS version? Check out [LangChain.js](https://github.com/hwchase17/langchainjs).

View File

@@ -0,0 +1,137 @@
============================
Deploying LLMs in Production
============================
In today's fast-paced technological landscape, the use of Large Language Models (LLMs) is rapidly expanding. As a result, it's crucial for developers to understand how to effectively deploy these models in production environments. LLM interfaces typically fall into two categories:
- **Case 1: Utilizing External LLM Providers (OpenAI, Anthropic, etc.)**
In this scenario, most of the computational burden is handled by the LLM providers, while LangChain simplifies the implementation of business logic around these services. This approach includes features such as prompt templating, chat message generation, caching, vector embedding database creation, preprocessing, etc.
- **Case 2: Self-hosted Open-Source Models**
Alternatively, developers can opt to use smaller, yet comparably capable, self-hosted open-source LLM models. This approach can significantly decrease costs, latency, and privacy concerns associated with transferring data to external LLM providers.
Regardless of the framework that forms the backbone of your product, deploying LLM applications comes with its own set of challenges. It's vital to understand the trade-offs and key considerations when evaluating serving frameworks.
Outline
=======
This guide aims to provide a comprehensive overview of the requirements for deploying LLMs in a production setting, focusing on:
- `Designing a Robust LLM Application Service <#robust>`_
- `Maintaining Cost-Efficiency <#cost>`_
- `Ensuring Rapid Iteration <#iteration>`_
Understanding these components is crucial when assessing serving systems. LangChain integrates with several open-source projects designed to tackle these issues, providing a robust framework for productionizing your LLM applications. Some notable frameworks include:
- `Ray Serve <../integrations/ray_serve.html>`_
- `BentoML <https://github.com/ssheng/BentoChain>`_
- `Modal <../integrations/modal.html>`_
These links will provide further information on each ecosystem, assisting you in finding the best fit for your LLM deployment needs.
Designing a Robust LLM Application Service
===========================================
.. _robust:
When deploying an LLM service in production, it's imperative to provide a seamless user experience free from outages. Achieving 24/7 service availability involves creating and maintaining several sub-systems surrounding your application.
Monitoring
----------
Monitoring forms an integral part of any system running in a production environment. In the context of LLMs, it is essential to monitor both performance and quality metrics.
**Performance Metrics:** These metrics provide insights into the efficiency and capacity of your model. Here are some key examples:
- Queries per second (QPS): This measures the number of queries your model processes in a second, offering insights into its utilization.
- Latency: This metric quantifies the delay from when your client sends a request to when they receive a response.
- Tokens Per Second (TPS): This represents the number of tokens your model can generate in a second.
**Quality Metrics:** These metrics are typically customized according to the business use-case. For instance, how does the output of your system compare to a baseline, such as a previous version? Although these metrics can be calculated offline, you need to log the necessary data to use them later.
Fault tolerance
---------------
Your application may encounter errors such as exceptions in your model inference or business logic code, causing failures and disrupting traffic. Other potential issues could arise from the machine running your application, such as unexpected hardware breakdowns or loss of spot-instances during high-demand periods. One way to mitigate these risks is by increasing redundancy through replica scaling and implementing recovery mechanisms for failed replicas. However, model replicas aren't the only potential points of failure. It's essential to build resilience against various failures that could occur at any point in your stack.
Zero-downtime upgrades
----------------------
System upgrades are often necessary but can result in service disruptions if not handled correctly. One way to prevent downtime during upgrades is by implementing a smooth transition process from the old version to the new one. Ideally, the new version of your LLM service is deployed, and traffic gradually shifts from the old to the new version, maintaining a constant QPS throughout the process.
Load balancing
--------------
Load balancing, in simple terms, is a technique to distribute work evenly across multiple computers, servers, or other resources to optimize the utilization of the system, maximize throughput, minimize response time, and avoid overload of any single resource. Think of it as a traffic officer directing cars (requests) to different roads (servers) so that no single road becomes too congested.
There are several strategies for load balancing. For example, one common method is the *Round Robin* strategy, where each request is sent to the next server in line, cycling back to the first when all servers have received a request. This works well when all servers are equally capable. However, if some servers are more powerful than others, you might use a *Weighted Round Robin* or *Least Connections* strategy, where more requests are sent to the more powerful servers, or to those currently handling the fewest active requests.
Let's imagine you're running an LLM chain. If your application becomes popular, you could have hundreds or even thousands of users asking questions at the same time. If one server gets too busy (high load), the load balancer would direct new requests to another server that is less busy. This way, all your users get a timely response and the system remains stable.
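A round-robin balancer is only a few lines (a minimal sketch; the replica names are hypothetical)::

    import itertools

    # Dispatch each incoming request to the next replica in the cycle.
    replicas = itertools.cycle(["llm-replica-1", "llm-replica-2", "llm-replica-3"])

    def route(request: str) -> str:
        target = next(replicas)
        print(f"routing {request!r} to {target}")
        return target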
Maintaining Cost-Efficiency and Scalability
============================================
.. _cost:
Deploying LLM services can be costly, especially when you're handling a large volume of user interactions. Charges by LLM providers are usually based on tokens used, which can make inference for a chat system built on these models expensive. However, several strategies can help manage these costs without compromising the quality of the service.
Self-hosting models
-------------------
Several smaller and open-source LLMs are emerging to tackle the issue of reliance on LLM providers. Self-hosting allows you to maintain similar quality to LLM provider models while managing costs. The challenge lies in building a reliable, high-performing LLM serving system on your own machines.
Resource Management and Auto-Scaling
------------------------------------
Computational logic within your application requires precise resource allocation. For instance, if part of your traffic is served by an OpenAI endpoint and another part by a self-hosted model, it's crucial to allocate suitable resources for each. Auto-scaling—adjusting resource allocation based on traffic—can significantly impact the cost of running your application. This strategy requires a balance between cost and responsiveness, ensuring neither resource over-provisioning nor compromised application responsiveness.
Utilizing Spot Instances
------------------------
On platforms like AWS, spot instances offer substantial cost savings, typically priced at about a third of on-demand instances. The trade-off is a higher crash rate, necessitating a robust fault-tolerance mechanism for effective use.
Independent Scaling
-------------------
When self-hosting your models, you should consider independent scaling. For example, if you have two translation models, one fine-tuned for French and another for Spanish, incoming requests might necessitate different scaling requirements for each.
Batching requests
-----------------
In the context of Large Language Models, batching requests can enhance efficiency by better utilizing your GPU resources. GPUs are inherently parallel processors, designed to handle multiple tasks simultaneously. If you send individual requests to the model, the GPU might not be fully utilized as it's only working on a single task at a time. On the other hand, by batching requests together, you're allowing the GPU to work on multiple tasks at once, maximizing its utilization and improving inference speed. This not only leads to cost savings but can also improve the overall latency of your LLM service.
In summary, managing costs while scaling your LLM services requires a strategic approach. Utilizing self-hosting models, managing resources effectively, employing auto-scaling, using spot instances, independently scaling models, and batching requests are key strategies to consider. Open-source libraries such as Ray Serve and BentoML are designed to deal with these complexities.
Ensuring Rapid Iteration
========================
.. _iteration:
The LLM landscape is evolving at an unprecedented pace, with new libraries and model architectures being introduced constantly. Consequently, it's crucial to avoid tying yourself to a solution specific to one particular framework. This is especially relevant in serving, where changes to your infrastructure can be time-consuming, expensive, and risky. Strive for infrastructure that is not locked into any specific machine learning library or framework, but instead offers a general-purpose, scalable serving layer. Here are some aspects where flexibility plays a key role:
Model composition
-----------------
Deploying systems like LangChain demands the ability to piece together different models and connect them via logic. Take the example of building a natural language input SQL query engine. Querying an LLM and obtaining the SQL command is only part of the system. You need to extract metadata from the connected database, construct a prompt for the LLM, run the SQL query on an engine, collect and feed back the response to the LLM as the query runs, and present the results to the user. This demonstrates the need to seamlessly integrate various complex components built in Python into a dynamic chain of logical blocks that can be served together.
Cloud providers
---------------
Many hosted solutions are restricted to a single cloud provider, which can limit your options in today's multi-cloud world. Depending on where your other infrastructure components are built, you might prefer to stick with your chosen cloud provider.
Infrastructure as Code (IaC)
----------------------------
Rapid iteration also involves the ability to recreate your infrastructure quickly and reliably. This is where Infrastructure as Code (IaC) tools like Terraform, CloudFormation, or Kubernetes YAML files come into play. They allow you to define your infrastructure in code files, which can be version controlled and quickly deployed, enabling faster and more reliable iterations.
CI/CD
-----
In a fast-paced environment, implementing CI/CD pipelines can significantly speed up the iteration process. They help automate the testing and deployment of your LLM applications, reducing the risk of errors and enabling faster feedback and iteration.

View File

@@ -2,191 +2,230 @@
Dependents stats for `hwchase17/langchain`
[![](https://img.shields.io/static/v1?label=Used%20by&message=5152&color=informational&logo=slickpic)](https://github.com/hwchase17/langchain/network/dependents)
[![](https://img.shields.io/static/v1?label=Used%20by%20(public)&message=172&color=informational&logo=slickpic)](https://github.com/hwchase17/langchain/network/dependents)
[![](https://img.shields.io/static/v1?label=Used%20by%20(private)&message=4980&color=informational&logo=slickpic)](https://github.com/hwchase17/langchain/network/dependents)
[![](https://img.shields.io/static/v1?label=Used%20by%20(stars)&message=17239&color=informational&logo=slickpic)](https://github.com/hwchase17/langchain/network/dependents)
[![](https://img.shields.io/static/v1?label=Used%20by&message=7484&color=informational&logo=slickpic)](https://github.com/hwchase17/langchain/network/dependents)
[![](https://img.shields.io/static/v1?label=Used%20by%20(public)&message=212&color=informational&logo=slickpic)](https://github.com/hwchase17/langchain/network/dependents)
[![](https://img.shields.io/static/v1?label=Used%20by%20(private)&message=7272&color=informational&logo=slickpic)](https://github.com/hwchase17/langchain/network/dependents)
[![](https://img.shields.io/static/v1?label=Used%20by%20(stars)&message=19095&color=informational&logo=slickpic)](https://github.com/hwchase17/langchain/network/dependents)
[update: 2023-05-17; only dependent repositories with Stars > 100]
[update: 2023-06-05; only dependent repositories with Stars > 100]
| Repository | Stars |
| :-------- | -----: |
|[openai/openai-cookbook](https://github.com/openai/openai-cookbook) | 35401 |
|[LAION-AI/Open-Assistant](https://github.com/LAION-AI/Open-Assistant) | 32861 |
|[microsoft/TaskMatrix](https://github.com/microsoft/TaskMatrix) | 32766 |
|[hpcaitech/ColossalAI](https://github.com/hpcaitech/ColossalAI) | 29560 |
|[reworkd/AgentGPT](https://github.com/reworkd/AgentGPT) | 22315 |
|[imartinez/privateGPT](https://github.com/imartinez/privateGPT) | 17474 |
|[openai/chatgpt-retrieval-plugin](https://github.com/openai/chatgpt-retrieval-plugin) | 16923 |
|[mindsdb/mindsdb](https://github.com/mindsdb/mindsdb) | 16112 |
|[jerryjliu/llama_index](https://github.com/jerryjliu/llama_index) | 15407 |
|[mlflow/mlflow](https://github.com/mlflow/mlflow) | 14345 |
|[GaiZhenbiao/ChuanhuChatGPT](https://github.com/GaiZhenbiao/ChuanhuChatGPT) | 10372 |
|[databrickslabs/dolly](https://github.com/databrickslabs/dolly) | 9919 |
|[AIGC-Audio/AudioGPT](https://github.com/AIGC-Audio/AudioGPT) | 8177 |
|[logspace-ai/langflow](https://github.com/logspace-ai/langflow) | 6807 |
|[imClumsyPanda/langchain-ChatGLM](https://github.com/imClumsyPanda/langchain-ChatGLM) | 6087 |
|[arc53/DocsGPT](https://github.com/arc53/DocsGPT) | 5292 |
|[e2b-dev/e2b](https://github.com/e2b-dev/e2b) | 4622 |
|[nsarrazin/serge](https://github.com/nsarrazin/serge) | 4076 |
|[madawei2699/myGPTReader](https://github.com/madawei2699/myGPTReader) | 3952 |
|[zauberzeug/nicegui](https://github.com/zauberzeug/nicegui) | 3952 |
|[go-skynet/LocalAI](https://github.com/go-skynet/LocalAI) | 3762 |
|[GreyDGL/PentestGPT](https://github.com/GreyDGL/PentestGPT) | 3388 |
|[mmabrouk/chatgpt-wrapper](https://github.com/mmabrouk/chatgpt-wrapper) | 3243 |
|[zilliztech/GPTCache](https://github.com/zilliztech/GPTCache) | 3189 |
|[wenda-LLM/wenda](https://github.com/wenda-LLM/wenda) | 3050 |
|[marqo-ai/marqo](https://github.com/marqo-ai/marqo) | 2930 |
|[gkamradt/langchain-tutorials](https://github.com/gkamradt/langchain-tutorials) | 2710 |
|[PrefectHQ/marvin](https://github.com/PrefectHQ/marvin) | 2545 |
|[project-baize/baize-chatbot](https://github.com/project-baize/baize-chatbot) | 2479 |
|[whitead/paper-qa](https://github.com/whitead/paper-qa) | 2399 |
|[langgenius/dify](https://github.com/langgenius/dify) | 2344 |
|[GerevAI/gerev](https://github.com/GerevAI/gerev) | 2283 |
|[hwchase17/chat-langchain](https://github.com/hwchase17/chat-langchain) | 2266 |
|[guangzhengli/ChatFiles](https://github.com/guangzhengli/ChatFiles) | 1903 |
|[Azure-Samples/azure-search-openai-demo](https://github.com/Azure-Samples/azure-search-openai-demo) | 1884 |
|[OpenBMB/BMTools](https://github.com/OpenBMB/BMTools) | 1860 |
|[Farama-Foundation/PettingZoo](https://github.com/Farama-Foundation/PettingZoo) | 1813 |
|[OpenGVLab/Ask-Anything](https://github.com/OpenGVLab/Ask-Anything) | 1571 |
|[IntelligenzaArtificiale/Free-Auto-GPT](https://github.com/IntelligenzaArtificiale/Free-Auto-GPT) | 1480 |
|[hwchase17/notion-qa](https://github.com/hwchase17/notion-qa) | 1464 |
|[NVIDIA/NeMo-Guardrails](https://github.com/NVIDIA/NeMo-Guardrails) | 1419 |
|[Unstructured-IO/unstructured](https://github.com/Unstructured-IO/unstructured) | 1410 |
|[Kav-K/GPTDiscord](https://github.com/Kav-K/GPTDiscord) | 1363 |
|[paulpierre/RasaGPT](https://github.com/paulpierre/RasaGPT) | 1344 |
|[StanGirard/quivr](https://github.com/StanGirard/quivr) | 1330 |
|[lunasec-io/lunasec](https://github.com/lunasec-io/lunasec) | 1318 |
|[vocodedev/vocode-python](https://github.com/vocodedev/vocode-python) | 1286 |
|[agiresearch/OpenAGI](https://github.com/agiresearch/OpenAGI) | 1156 |
|[h2oai/h2ogpt](https://github.com/h2oai/h2ogpt) | 1141 |
|[jina-ai/thinkgpt](https://github.com/jina-ai/thinkgpt) | 1106 |
|[yanqiangmiffy/Chinese-LangChain](https://github.com/yanqiangmiffy/Chinese-LangChain) | 1072 |
|[ttengwang/Caption-Anything](https://github.com/ttengwang/Caption-Anything) | 1064 |
|[jina-ai/dev-gpt](https://github.com/jina-ai/dev-gpt) | 1057 |
|[juncongmoo/chatllama](https://github.com/juncongmoo/chatllama) | 1003 |
|[greshake/llm-security](https://github.com/greshake/llm-security) | 1002 |
|[visual-openllm/visual-openllm](https://github.com/visual-openllm/visual-openllm) | 957 |
|[richardyc/Chrome-GPT](https://github.com/richardyc/Chrome-GPT) | 918 |
|[irgolic/AutoPR](https://github.com/irgolic/AutoPR) | 886 |
|[mmz-001/knowledge_gpt](https://github.com/mmz-001/knowledge_gpt) | 867 |
|[thomas-yanxin/LangChain-ChatGLM-Webui](https://github.com/thomas-yanxin/LangChain-ChatGLM-Webui) | 850 |
|[microsoft/X-Decoder](https://github.com/microsoft/X-Decoder) | 837 |
|[peterw/Chat-with-Github-Repo](https://github.com/peterw/Chat-with-Github-Repo) | 826 |
|[cirediatpl/FigmaChain](https://github.com/cirediatpl/FigmaChain) | 782 |
|[hashintel/hash](https://github.com/hashintel/hash) | 778 |
|[seanpixel/Teenage-AGI](https://github.com/seanpixel/Teenage-AGI) | 773 |
|[jina-ai/langchain-serve](https://github.com/jina-ai/langchain-serve) | 738 |
|[corca-ai/EVAL](https://github.com/corca-ai/EVAL) | 737 |
|[ai-sidekick/sidekick](https://github.com/ai-sidekick/sidekick) | 717 |
|[rlancemartin/auto-evaluator](https://github.com/rlancemartin/auto-evaluator) | 703 |
|[poe-platform/api-bot-tutorial](https://github.com/poe-platform/api-bot-tutorial) | 689 |
|[SamurAIGPT/Camel-AutoGPT](https://github.com/SamurAIGPT/Camel-AutoGPT) | 666 |
|[eyurtsev/kor](https://github.com/eyurtsev/kor) | 608 |
|[run-llama/llama-lab](https://github.com/run-llama/llama-lab) | 559 |
|[namuan/dr-doc-search](https://github.com/namuan/dr-doc-search) | 544 |
|[pieroit/cheshire-cat](https://github.com/pieroit/cheshire-cat) | 520 |
|[griptape-ai/griptape](https://github.com/griptape-ai/griptape) | 514 |
|[getmetal/motorhead](https://github.com/getmetal/motorhead) | 481 |
|[hwchase17/chat-your-data](https://github.com/hwchase17/chat-your-data) | 462 |
|[langchain-ai/langchain-aiplugin](https://github.com/langchain-ai/langchain-aiplugin) | 452 |
|[jina-ai/agentchain](https://github.com/jina-ai/agentchain) | 439 |
|[SamurAIGPT/ChatGPT-Developer-Plugins](https://github.com/SamurAIGPT/ChatGPT-Developer-Plugins) | 437 |
|[alexanderatallah/window.ai](https://github.com/alexanderatallah/window.ai) | 433 |
|[michaelthwan/searchGPT](https://github.com/michaelthwan/searchGPT) | 427 |
|[mpaepper/content-chatbot](https://github.com/mpaepper/content-chatbot) | 425 |
|[mckaywrigley/repo-chat](https://github.com/mckaywrigley/repo-chat) | 422 |
|[whyiyhw/chatgpt-wechat](https://github.com/whyiyhw/chatgpt-wechat) | 421 |
|[freddyaboulton/gradio-tools](https://github.com/freddyaboulton/gradio-tools) | 407 |
|[jonra1993/fastapi-alembic-sqlmodel-async](https://github.com/jonra1993/fastapi-alembic-sqlmodel-async) | 395 |
|[yeagerai/yeagerai-agent](https://github.com/yeagerai/yeagerai-agent) | 383 |
|[akshata29/chatpdf](https://github.com/akshata29/chatpdf) | 374 |
|[OpenGVLab/InternGPT](https://github.com/OpenGVLab/InternGPT) | 368 |
|[ruoccofabrizio/azure-open-ai-embeddings-qna](https://github.com/ruoccofabrizio/azure-open-ai-embeddings-qna) | 358 |
|[101dotxyz/GPTeam](https://github.com/101dotxyz/GPTeam) | 357 |
|[mtenenholtz/chat-twitter](https://github.com/mtenenholtz/chat-twitter) | 354 |
|[amosjyng/langchain-visualizer](https://github.com/amosjyng/langchain-visualizer) | 343 |
|[msoedov/langcorn](https://github.com/msoedov/langcorn) | 334 |
|[showlab/VLog](https://github.com/showlab/VLog) | 330 |
|[continuum-llms/chatgpt-memory](https://github.com/continuum-llms/chatgpt-memory) | 324 |
|[steamship-core/steamship-langchain](https://github.com/steamship-core/steamship-langchain) | 323 |
|[daodao97/chatdoc](https://github.com/daodao97/chatdoc) | 320 |
|[xuwenhao/geektime-ai-course](https://github.com/xuwenhao/geektime-ai-course) | 308 |
|[StevenGrove/GPT4Tools](https://github.com/StevenGrove/GPT4Tools) | 301 |
|[logan-markewich/llama_index_starter_pack](https://github.com/logan-markewich/llama_index_starter_pack) | 300 |
|[andylokandy/gpt-4-search](https://github.com/andylokandy/gpt-4-search) | 299 |
|[Anil-matcha/ChatPDF](https://github.com/Anil-matcha/ChatPDF) | 287 |
|[itamargol/openai](https://github.com/itamargol/openai) | 273 |
|[BlackHC/llm-strategy](https://github.com/BlackHC/llm-strategy) | 267 |
|[momegas/megabots](https://github.com/momegas/megabots) | 259 |
|[bborn/howdoi.ai](https://github.com/bborn/howdoi.ai) | 238 |
|[Cheems-Seminar/grounded-segment-any-parts](https://github.com/Cheems-Seminar/grounded-segment-any-parts) | 232 |
|[ur-whitelab/exmol](https://github.com/ur-whitelab/exmol) | 227 |
|[sullivan-sean/chat-langchainjs](https://github.com/sullivan-sean/chat-langchainjs) | 227 |
|[explosion/spacy-llm](https://github.com/explosion/spacy-llm) | 226 |
|[recalign/RecAlign](https://github.com/recalign/RecAlign) | 218 |
|[jupyterlab/jupyter-ai](https://github.com/jupyterlab/jupyter-ai) | 218 |
|[alvarosevilla95/autolang](https://github.com/alvarosevilla95/autolang) | 215 |
|[conceptofmind/toolformer](https://github.com/conceptofmind/toolformer) | 213 |
|[MagnivOrg/prompt-layer-library](https://github.com/MagnivOrg/prompt-layer-library) | 209 |
|[JohnSnowLabs/nlptest](https://github.com/JohnSnowLabs/nlptest) | 208 |
|[airobotlab/KoChatGPT](https://github.com/airobotlab/KoChatGPT) | 197 |
|[langchain-ai/auto-evaluator](https://github.com/langchain-ai/auto-evaluator) | 195 |
|[yvann-hub/Robby-chatbot](https://github.com/yvann-hub/Robby-chatbot) | 195 |
|[alejandro-ao/langchain-ask-pdf](https://github.com/alejandro-ao/langchain-ask-pdf) | 192 |
|[daveebbelaar/langchain-experiments](https://github.com/daveebbelaar/langchain-experiments) | 189 |
|[NimbleBoxAI/ChainFury](https://github.com/NimbleBoxAI/ChainFury) | 187 |
|[kaleido-lab/dolphin](https://github.com/kaleido-lab/dolphin) | 184 |
|[Anil-matcha/Website-to-Chatbot](https://github.com/Anil-matcha/Website-to-Chatbot) | 183 |
|[plchld/InsightFlow](https://github.com/plchld/InsightFlow) | 180 |
|[OpenBMB/AgentVerse](https://github.com/OpenBMB/AgentVerse) | 166 |
|[benthecoder/ClassGPT](https://github.com/benthecoder/ClassGPT) | 166 |
|[jbrukh/gpt-jargon](https://github.com/jbrukh/gpt-jargon) | 161 |
|[hardbyte/qabot](https://github.com/hardbyte/qabot) | 160 |
|[shaman-ai/agent-actors](https://github.com/shaman-ai/agent-actors) | 153 |
|[radi-cho/datasetGPT](https://github.com/radi-cho/datasetGPT) | 153 |
|[poe-platform/poe-protocol](https://github.com/poe-platform/poe-protocol) | 152 |
|[paolorechia/learn-langchain](https://github.com/paolorechia/learn-langchain) | 149 |
|[ajndkr/lanarky](https://github.com/ajndkr/lanarky) | 149 |
|[fengyuli-dev/multimedia-gpt](https://github.com/fengyuli-dev/multimedia-gpt) | 147 |
|[yasyf/compress-gpt](https://github.com/yasyf/compress-gpt) | 144 |
|[homanp/superagent](https://github.com/homanp/superagent) | 143 |
|[realminchoi/babyagi-ui](https://github.com/realminchoi/babyagi-ui) | 141 |
|[ethanyanjiali/minChatGPT](https://github.com/ethanyanjiali/minChatGPT) | 141 |
|[ccurme/yolopandas](https://github.com/ccurme/yolopandas) | 139 |
|[hwchase17/langchain-streamlit-template](https://github.com/hwchase17/langchain-streamlit-template) | 138 |
|[Jaseci-Labs/jaseci](https://github.com/Jaseci-Labs/jaseci) | 136 |
|[openai/openai-cookbook](https://github.com/openai/openai-cookbook) | 38024 |
|[LAION-AI/Open-Assistant](https://github.com/LAION-AI/Open-Assistant) | 33609 |
|[microsoft/TaskMatrix](https://github.com/microsoft/TaskMatrix) | 33136 |
|[hpcaitech/ColossalAI](https://github.com/hpcaitech/ColossalAI) | 30032 |
|[imartinez/privateGPT](https://github.com/imartinez/privateGPT) | 28094 |
|[reworkd/AgentGPT](https://github.com/reworkd/AgentGPT) | 23430 |
|[openai/chatgpt-retrieval-plugin](https://github.com/openai/chatgpt-retrieval-plugin) | 17942 |
|[jerryjliu/llama_index](https://github.com/jerryjliu/llama_index) | 16697 |
|[mindsdb/mindsdb](https://github.com/mindsdb/mindsdb) | 16410 |
|[mlflow/mlflow](https://github.com/mlflow/mlflow) | 14517 |
|[GaiZhenbiao/ChuanhuChatGPT](https://github.com/GaiZhenbiao/ChuanhuChatGPT) | 10793 |
|[databrickslabs/dolly](https://github.com/databrickslabs/dolly) | 10155 |
|[openai/evals](https://github.com/openai/evals) | 10076 |
|[AIGC-Audio/AudioGPT](https://github.com/AIGC-Audio/AudioGPT) | 8619 |
|[logspace-ai/langflow](https://github.com/logspace-ai/langflow) | 8211 |
|[imClumsyPanda/langchain-ChatGLM](https://github.com/imClumsyPanda/langchain-ChatGLM) | 8154 |
|[PromtEngineer/localGPT](https://github.com/PromtEngineer/localGPT) | 6853 |
|[StanGirard/quivr](https://github.com/StanGirard/quivr) | 6830 |
|[PipedreamHQ/pipedream](https://github.com/PipedreamHQ/pipedream) | 6520 |
|[go-skynet/LocalAI](https://github.com/go-skynet/LocalAI) | 6018 |
|[arc53/DocsGPT](https://github.com/arc53/DocsGPT) | 5643 |
|[e2b-dev/e2b](https://github.com/e2b-dev/e2b) | 5075 |
|[langgenius/dify](https://github.com/langgenius/dify) | 4281 |
|[nsarrazin/serge](https://github.com/nsarrazin/serge) | 4228 |
|[zauberzeug/nicegui](https://github.com/zauberzeug/nicegui) | 4084 |
|[madawei2699/myGPTReader](https://github.com/madawei2699/myGPTReader) | 4039 |
|[wenda-LLM/wenda](https://github.com/wenda-LLM/wenda) | 3871 |
|[GreyDGL/PentestGPT](https://github.com/GreyDGL/PentestGPT) | 3837 |
|[zilliztech/GPTCache](https://github.com/zilliztech/GPTCache) | 3625 |
|[csunny/DB-GPT](https://github.com/csunny/DB-GPT) | 3545 |
|[gkamradt/langchain-tutorials](https://github.com/gkamradt/langchain-tutorials) | 3404 |
|[mmabrouk/chatgpt-wrapper](https://github.com/mmabrouk/chatgpt-wrapper) | 3303 |
|[postgresml/postgresml](https://github.com/postgresml/postgresml) | 3052 |
|[marqo-ai/marqo](https://github.com/marqo-ai/marqo) | 3014 |
|[MineDojo/Voyager](https://github.com/MineDojo/Voyager) | 2945 |
|[PrefectHQ/marvin](https://github.com/PrefectHQ/marvin) | 2761 |
|[project-baize/baize-chatbot](https://github.com/project-baize/baize-chatbot) | 2673 |
|[hwchase17/chat-langchain](https://github.com/hwchase17/chat-langchain) | 2589 |
|[whitead/paper-qa](https://github.com/whitead/paper-qa) | 2572 |
|[Azure-Samples/azure-search-openai-demo](https://github.com/Azure-Samples/azure-search-openai-demo) | 2366 |
|[GerevAI/gerev](https://github.com/GerevAI/gerev) | 2330 |
|[OpenGVLab/InternGPT](https://github.com/OpenGVLab/InternGPT) | 2289 |
|[ParisNeo/gpt4all-ui](https://github.com/ParisNeo/gpt4all-ui) | 2159 |
|[OpenBMB/BMTools](https://github.com/OpenBMB/BMTools) | 2158 |
|[guangzhengli/ChatFiles](https://github.com/guangzhengli/ChatFiles) | 2005 |
|[h2oai/h2ogpt](https://github.com/h2oai/h2ogpt) | 1939 |
|[Farama-Foundation/PettingZoo](https://github.com/Farama-Foundation/PettingZoo) | 1845 |
|[OpenGVLab/Ask-Anything](https://github.com/OpenGVLab/Ask-Anything) | 1749 |
|[IntelligenzaArtificiale/Free-Auto-GPT](https://github.com/IntelligenzaArtificiale/Free-Auto-GPT) | 1740 |
|[Unstructured-IO/unstructured](https://github.com/Unstructured-IO/unstructured) | 1628 |
|[hwchase17/notion-qa](https://github.com/hwchase17/notion-qa) | 1607 |
|[NVIDIA/NeMo-Guardrails](https://github.com/NVIDIA/NeMo-Guardrails) | 1544 |
|[SamurAIGPT/privateGPT](https://github.com/SamurAIGPT/privateGPT) | 1543 |
|[paulpierre/RasaGPT](https://github.com/paulpierre/RasaGPT) | 1526 |
|[yanqiangmiffy/Chinese-LangChain](https://github.com/yanqiangmiffy/Chinese-LangChain) | 1485 |
|[Kav-K/GPTDiscord](https://github.com/Kav-K/GPTDiscord) | 1402 |
|[vocodedev/vocode-python](https://github.com/vocodedev/vocode-python) | 1387 |
|[Chainlit/chainlit](https://github.com/Chainlit/chainlit) | 1336 |
|[lunasec-io/lunasec](https://github.com/lunasec-io/lunasec) | 1323 |
|[psychic-api/psychic](https://github.com/psychic-api/psychic) | 1248 |
|[agiresearch/OpenAGI](https://github.com/agiresearch/OpenAGI) | 1208 |
|[jina-ai/thinkgpt](https://github.com/jina-ai/thinkgpt) | 1193 |
|[thomas-yanxin/LangChain-ChatGLM-Webui](https://github.com/thomas-yanxin/LangChain-ChatGLM-Webui) | 1182 |
|[ttengwang/Caption-Anything](https://github.com/ttengwang/Caption-Anything) | 1137 |
|[jina-ai/dev-gpt](https://github.com/jina-ai/dev-gpt) | 1135 |
|[greshake/llm-security](https://github.com/greshake/llm-security) | 1086 |
|[keephq/keep](https://github.com/keephq/keep) | 1063 |
|[juncongmoo/chatllama](https://github.com/juncongmoo/chatllama) | 1037 |
|[richardyc/Chrome-GPT](https://github.com/richardyc/Chrome-GPT) | 1035 |
|[visual-openllm/visual-openllm](https://github.com/visual-openllm/visual-openllm) | 997 |
|[mmz-001/knowledge_gpt](https://github.com/mmz-001/knowledge_gpt) | 995 |
|[jina-ai/langchain-serve](https://github.com/jina-ai/langchain-serve) | 949 |
|[irgolic/AutoPR](https://github.com/irgolic/AutoPR) | 936 |
|[microsoft/X-Decoder](https://github.com/microsoft/X-Decoder) | 908 |
|[poe-platform/api-bot-tutorial](https://github.com/poe-platform/api-bot-tutorial) | 902 |
|[peterw/Chat-with-Github-Repo](https://github.com/peterw/Chat-with-Github-Repo) | 875 |
|[cirediatpl/FigmaChain](https://github.com/cirediatpl/FigmaChain) | 822 |
|[homanp/superagent](https://github.com/homanp/superagent) | 806 |
|[seanpixel/Teenage-AGI](https://github.com/seanpixel/Teenage-AGI) | 800 |
|[chatarena/chatarena](https://github.com/chatarena/chatarena) | 796 |
|[hashintel/hash](https://github.com/hashintel/hash) | 795 |
|[SamurAIGPT/Camel-AutoGPT](https://github.com/SamurAIGPT/Camel-AutoGPT) | 786 |
|[rlancemartin/auto-evaluator](https://github.com/rlancemartin/auto-evaluator) | 770 |
|[corca-ai/EVAL](https://github.com/corca-ai/EVAL) | 769 |
|[101dotxyz/GPTeam](https://github.com/101dotxyz/GPTeam) | 755 |
|[noahshinn024/reflexion](https://github.com/noahshinn024/reflexion) | 706 |
|[eyurtsev/kor](https://github.com/eyurtsev/kor) | 695 |
|[cheshire-cat-ai/core](https://github.com/cheshire-cat-ai/core) | 681 |
|[e-johnstonn/BriefGPT](https://github.com/e-johnstonn/BriefGPT) | 656 |
|[run-llama/llama-lab](https://github.com/run-llama/llama-lab) | 635 |
|[griptape-ai/griptape](https://github.com/griptape-ai/griptape) | 583 |
|[namuan/dr-doc-search](https://github.com/namuan/dr-doc-search) | 555 |
|[getmetal/motorhead](https://github.com/getmetal/motorhead) | 550 |
|[kreneskyp/ix](https://github.com/kreneskyp/ix) | 543 |
|[hwchase17/chat-your-data](https://github.com/hwchase17/chat-your-data) | 510 |
|[Anil-matcha/ChatPDF](https://github.com/Anil-matcha/ChatPDF) | 501 |
|[whyiyhw/chatgpt-wechat](https://github.com/whyiyhw/chatgpt-wechat) | 497 |
|[SamurAIGPT/ChatGPT-Developer-Plugins](https://github.com/SamurAIGPT/ChatGPT-Developer-Plugins) | 496 |
|[microsoft/PodcastCopilot](https://github.com/microsoft/PodcastCopilot) | 492 |
|[debanjum/khoj](https://github.com/debanjum/khoj) | 485 |
|[akshata29/chatpdf](https://github.com/akshata29/chatpdf) | 485 |
|[langchain-ai/langchain-aiplugin](https://github.com/langchain-ai/langchain-aiplugin) | 462 |
|[jina-ai/agentchain](https://github.com/jina-ai/agentchain) | 460 |
|[alexanderatallah/window.ai](https://github.com/alexanderatallah/window.ai) | 457 |
|[yeagerai/yeagerai-agent](https://github.com/yeagerai/yeagerai-agent) | 451 |
|[mckaywrigley/repo-chat](https://github.com/mckaywrigley/repo-chat) | 446 |
|[michaelthwan/searchGPT](https://github.com/michaelthwan/searchGPT) | 446 |
|[mpaepper/content-chatbot](https://github.com/mpaepper/content-chatbot) | 441 |
|[freddyaboulton/gradio-tools](https://github.com/freddyaboulton/gradio-tools) | 439 |
|[ruoccofabrizio/azure-open-ai-embeddings-qna](https://github.com/ruoccofabrizio/azure-open-ai-embeddings-qna) | 429 |
|[StevenGrove/GPT4Tools](https://github.com/StevenGrove/GPT4Tools) | 422 |
|[jonra1993/fastapi-alembic-sqlmodel-async](https://github.com/jonra1993/fastapi-alembic-sqlmodel-async) | 407 |
|[msoedov/langcorn](https://github.com/msoedov/langcorn) | 405 |
|[amosjyng/langchain-visualizer](https://github.com/amosjyng/langchain-visualizer) | 395 |
|[ajndkr/lanarky](https://github.com/ajndkr/lanarky) | 384 |
|[mtenenholtz/chat-twitter](https://github.com/mtenenholtz/chat-twitter) | 376 |
|[steamship-core/steamship-langchain](https://github.com/steamship-core/steamship-langchain) | 371 |
|[langchain-ai/auto-evaluator](https://github.com/langchain-ai/auto-evaluator) | 365 |
|[xuwenhao/geektime-ai-course](https://github.com/xuwenhao/geektime-ai-course) | 358 |
|[continuum-llms/chatgpt-memory](https://github.com/continuum-llms/chatgpt-memory) | 357 |
|[opentensor/bittensor](https://github.com/opentensor/bittensor) | 347 |
|[showlab/VLog](https://github.com/showlab/VLog) | 345 |
|[daodao97/chatdoc](https://github.com/daodao97/chatdoc) | 345 |
|[logan-markewich/llama_index_starter_pack](https://github.com/logan-markewich/llama_index_starter_pack) | 332 |
|[poe-platform/poe-protocol](https://github.com/poe-platform/poe-protocol) | 320 |
|[explosion/spacy-llm](https://github.com/explosion/spacy-llm) | 312 |
|[andylokandy/gpt-4-search](https://github.com/andylokandy/gpt-4-search) | 311 |
|[alejandro-ao/langchain-ask-pdf](https://github.com/alejandro-ao/langchain-ask-pdf) | 310 |
|[jupyterlab/jupyter-ai](https://github.com/jupyterlab/jupyter-ai) | 294 |
|[BlackHC/llm-strategy](https://github.com/BlackHC/llm-strategy) | 283 |
|[itamargol/openai](https://github.com/itamargol/openai) | 281 |
|[momegas/megabots](https://github.com/momegas/megabots) | 279 |
|[personoids/personoids-lite](https://github.com/personoids/personoids-lite) | 277 |
|[yvann-hub/Robby-chatbot](https://github.com/yvann-hub/Robby-chatbot) | 267 |
|[Anil-matcha/Website-to-Chatbot](https://github.com/Anil-matcha/Website-to-Chatbot) | 266 |
|[Cheems-Seminar/grounded-segment-any-parts](https://github.com/Cheems-Seminar/grounded-segment-any-parts) | 260 |
|[sullivan-sean/chat-langchainjs](https://github.com/sullivan-sean/chat-langchainjs) | 248 |
|[bborn/howdoi.ai](https://github.com/bborn/howdoi.ai) | 245 |
|[daveebbelaar/langchain-experiments](https://github.com/daveebbelaar/langchain-experiments) | 240 |
|[MagnivOrg/prompt-layer-library](https://github.com/MagnivOrg/prompt-layer-library) | 237 |
|[ur-whitelab/exmol](https://github.com/ur-whitelab/exmol) | 234 |
|[conceptofmind/toolformer](https://github.com/conceptofmind/toolformer) | 234 |
|[recalign/RecAlign](https://github.com/recalign/RecAlign) | 226 |
|[OpenBMB/AgentVerse](https://github.com/OpenBMB/AgentVerse) | 220 |
|[alvarosevilla95/autolang](https://github.com/alvarosevilla95/autolang) | 219 |
|[JohnSnowLabs/nlptest](https://github.com/JohnSnowLabs/nlptest) | 216 |
|[kaleido-lab/dolphin](https://github.com/kaleido-lab/dolphin) | 215 |
|[truera/trulens](https://github.com/truera/trulens) | 208 |
|[NimbleBoxAI/ChainFury](https://github.com/NimbleBoxAI/ChainFury) | 208 |
|[airobotlab/KoChatGPT](https://github.com/airobotlab/KoChatGPT) | 207 |
|[monarch-initiative/ontogpt](https://github.com/monarch-initiative/ontogpt) | 200 |
|[paolorechia/learn-langchain](https://github.com/paolorechia/learn-langchain) | 195 |
|[shaman-ai/agent-actors](https://github.com/shaman-ai/agent-actors) | 185 |
|[Haste171/langchain-chatbot](https://github.com/Haste171/langchain-chatbot) | 184 |
|[plchld/InsightFlow](https://github.com/plchld/InsightFlow) | 182 |
|[su77ungr/CASALIOY](https://github.com/su77ungr/CASALIOY) | 180 |
|[jbrukh/gpt-jargon](https://github.com/jbrukh/gpt-jargon) | 177 |
|[benthecoder/ClassGPT](https://github.com/benthecoder/ClassGPT) | 174 |
|[billxbf/ReWOO](https://github.com/billxbf/ReWOO) | 170 |
|[filip-michalsky/SalesGPT](https://github.com/filip-michalsky/SalesGPT) | 168 |
|[hwchase17/langchain-streamlit-template](https://github.com/hwchase17/langchain-streamlit-template) | 168 |
|[radi-cho/datasetGPT](https://github.com/radi-cho/datasetGPT) | 164 |
|[hardbyte/qabot](https://github.com/hardbyte/qabot) | 164 |
|[gia-guar/JARVIS-ChatGPT](https://github.com/gia-guar/JARVIS-ChatGPT) | 158 |
|[plastic-labs/tutor-gpt](https://github.com/plastic-labs/tutor-gpt) | 154 |
|[yasyf/compress-gpt](https://github.com/yasyf/compress-gpt) | 154 |
|[fengyuli-dev/multimedia-gpt](https://github.com/fengyuli-dev/multimedia-gpt) | 154 |
|[ethanyanjiali/minChatGPT](https://github.com/ethanyanjiali/minChatGPT) | 153 |
|[hwchase17/chroma-langchain](https://github.com/hwchase17/chroma-langchain) | 153 |
|[edreisMD/plugnplai](https://github.com/edreisMD/plugnplai) | 148 |
|[chakkaradeep/pyCodeAGI](https://github.com/chakkaradeep/pyCodeAGI) | 145 |
|[ccurme/yolopandas](https://github.com/ccurme/yolopandas) | 145 |
|[shamspias/customizable-gpt-chatbot](https://github.com/shamspias/customizable-gpt-chatbot) | 144 |
|[realminchoi/babyagi-ui](https://github.com/realminchoi/babyagi-ui) | 143 |
|[PradipNichite/Youtube-Tutorials](https://github.com/PradipNichite/Youtube-Tutorials) | 140 |
|[gustavz/DataChad](https://github.com/gustavz/DataChad) | 140 |
|[Klingefjord/chatgpt-telegram](https://github.com/Klingefjord/chatgpt-telegram) | 140 |
|[Jaseci-Labs/jaseci](https://github.com/Jaseci-Labs/jaseci) | 139 |
|[handrew/browserpilot](https://github.com/handrew/browserpilot) | 137 |
|[jmpaz/promptlib](https://github.com/jmpaz/promptlib) | 137 |
|[SamPink/dev-gpt](https://github.com/SamPink/dev-gpt) | 135 |
|[menloparklab/langchain-cohere-qdrant-doc-retrieval](https://github.com/menloparklab/langchain-cohere-qdrant-doc-retrieval) | 135 |
|[hirokidaichi/wanna](https://github.com/hirokidaichi/wanna) | 135 |
|[Haste171/langchain-chatbot](https://github.com/Haste171/langchain-chatbot) | 134 |
|[jmpaz/promptlib](https://github.com/jmpaz/promptlib) | 130 |
|[Klingefjord/chatgpt-telegram](https://github.com/Klingefjord/chatgpt-telegram) | 130 |
|[filip-michalsky/SalesGPT](https://github.com/filip-michalsky/SalesGPT) | 128 |
|[handrew/browserpilot](https://github.com/handrew/browserpilot) | 128 |
|[shauryr/S2QA](https://github.com/shauryr/S2QA) | 127 |
|[steamship-core/vercel-examples](https://github.com/steamship-core/vercel-examples) | 127 |
|[yasyf/summ](https://github.com/yasyf/summ) | 127 |
|[gia-guar/JARVIS-ChatGPT](https://github.com/gia-guar/JARVIS-ChatGPT) | 126 |
|[jerlendds/osintbuddy](https://github.com/jerlendds/osintbuddy) | 125 |
|[ibiscp/LLM-IMDB](https://github.com/ibiscp/LLM-IMDB) | 124 |
|[Teahouse-Studios/akari-bot](https://github.com/Teahouse-Studios/akari-bot) | 124 |
|[hwchase17/chroma-langchain](https://github.com/hwchase17/chroma-langchain) | 124 |
|[menloparklab/langchain-cohere-qdrant-doc-retrieval](https://github.com/menloparklab/langchain-cohere-qdrant-doc-retrieval) | 123 |
|[peterw/StoryStorm](https://github.com/peterw/StoryStorm) | 123 |
|[chakkaradeep/pyCodeAGI](https://github.com/chakkaradeep/pyCodeAGI) | 123 |
|[petehunt/langchain-github-bot](https://github.com/petehunt/langchain-github-bot) | 115 |
|[su77ungr/CASALIOY](https://github.com/su77ungr/CASALIOY) | 113 |
|[eunomia-bpf/GPTtrace](https://github.com/eunomia-bpf/GPTtrace) | 113 |
|[steamship-core/vercel-examples](https://github.com/steamship-core/vercel-examples) | 134 |
|[pablomarin/GPT-Azure-Search-Engine](https://github.com/pablomarin/GPT-Azure-Search-Engine) | 133 |
|[ibiscp/LLM-IMDB](https://github.com/ibiscp/LLM-IMDB) | 133 |
|[shauryr/S2QA](https://github.com/shauryr/S2QA) | 133 |
|[jerlendds/osintbuddy](https://github.com/jerlendds/osintbuddy) | 132 |
|[yuanjie-ai/ChatLLM](https://github.com/yuanjie-ai/ChatLLM) | 132 |
|[yasyf/summ](https://github.com/yasyf/summ) | 132 |
|[WongSaang/chatgpt-ui-server](https://github.com/WongSaang/chatgpt-ui-server) | 130 |
|[peterw/StoryStorm](https://github.com/peterw/StoryStorm) | 127 |
|[Teahouse-Studios/akari-bot](https://github.com/Teahouse-Studios/akari-bot) | 126 |
|[vaibkumr/prompt-optimizer](https://github.com/vaibkumr/prompt-optimizer) | 125 |
|[preset-io/promptimize](https://github.com/preset-io/promptimize) | 124 |
|[homanp/vercel-langchain](https://github.com/homanp/vercel-langchain) | 124 |
|[petehunt/langchain-github-bot](https://github.com/petehunt/langchain-github-bot) | 123 |
|[eunomia-bpf/GPTtrace](https://github.com/eunomia-bpf/GPTtrace) | 118 |
|[nicknochnack/LangchainDocuments](https://github.com/nicknochnack/LangchainDocuments) | 116 |
|[jiran214/GPT-vup](https://github.com/jiran214/GPT-vup) | 112 |
|[rsaryev/talk-codebase](https://github.com/rsaryev/talk-codebase) | 112 |
|[zenml-io/zenml-projects](https://github.com/zenml-io/zenml-projects) | 112 |
|[pablomarin/GPT-Azure-Search-Engine](https://github.com/pablomarin/GPT-Azure-Search-Engine) | 111 |
|[shamspias/customizable-gpt-chatbot](https://github.com/shamspias/customizable-gpt-chatbot) | 109 |
|[WongSaang/chatgpt-ui-server](https://github.com/WongSaang/chatgpt-ui-server) | 108 |
|[davila7/file-gpt](https://github.com/davila7/file-gpt) | 104 |
|[enhancedocs/enhancedocs](https://github.com/enhancedocs/enhancedocs) | 102 |
|[aurelio-labs/arxiv-bot](https://github.com/aurelio-labs/arxiv-bot) | 101 |
|[microsoft/azure-openai-in-a-day-workshop](https://github.com/microsoft/azure-openai-in-a-day-workshop) | 112 |
|[davila7/file-gpt](https://github.com/davila7/file-gpt) | 112 |
|[prof-frink-lab/slangchain](https://github.com/prof-frink-lab/slangchain) | 111 |
|[aurelio-labs/arxiv-bot](https://github.com/aurelio-labs/arxiv-bot) | 110 |
|[fixie-ai/fixie-examples](https://github.com/fixie-ai/fixie-examples) | 108 |
|[miaoshouai/miaoshouai-assistant](https://github.com/miaoshouai/miaoshouai-assistant) | 105 |
|[flurb18/AgentOoba](https://github.com/flurb18/AgentOoba) | 103 |
|[solana-labs/chatgpt-plugin](https://github.com/solana-labs/chatgpt-plugin) | 102 |
|[Significant-Gravitas/Auto-GPT-Benchmarks](https://github.com/Significant-Gravitas/Auto-GPT-Benchmarks) | 102 |
|[kaarthik108/snowChat](https://github.com/kaarthik108/snowChat) | 100 |
_Generated by [github-dependents-info](https://github.com/nvuillam/github-dependents-info)_
`github-dependents-info --repo hwchase17/langchain --markdownfile dependents.md --minstars 100 --sort stars`

View File

@@ -6,6 +6,11 @@ This section covers several options for that. Note that these options are meant
What follows is a list of template GitHub repositories designed to be easily forked and modified to use your chain. This list is far from exhaustive, and we are EXTREMELY open to contributions here.
## [Anyscale](https://www.anyscale.com/model-serving)
Anyscale is a unified compute platform that makes it easy to develop, deploy, and manage scalable LLM applications in production using Ray.
With Anyscale you can scale the most challenging LLM-based workloads and both develop and deploy LLM-based apps on a single compute platform.
## [Streamlit](https://github.com/hwchase17/langchain-streamlit-template)
This repo serves as a template for how to deploy a LangChain app with Streamlit.

View File

@@ -10,7 +10,8 @@
### Tutorials
[LangChain Tutorials](https://www.youtube.com/watch?v=FuqdVNB_8c0&list=PL9V0lbeJ69brU-ojMpU1Y7Ic58Tap0Cw6) by [Edrick](https://www.youtube.com/@edrickdch):
- ⛓ [LangChain, Chroma DB, OpenAI Beginner Guide | ChatGPT with your PDF](https://youtu.be/FuqdVNB_8c0)
- ⛓ [LangChain 101: The Complete Beginner's Guide](https://youtu.be/P3MAbZ2eMUI)
[LangChain Crash Course: Build an AutoGPT app in 25 minutes](https://youtu.be/MlK6SIjcjE8) by [Nicholas Renotte](https://www.youtube.com/@NicholasRenotte)

View File

@@ -176,6 +176,8 @@ Additional Resources
- `Gallery <https://github.com/kyrolabs/awesome-langchain>`_: A collection of great projects that use Langchain, compiled by the folks at `Kyrolabs <https://kyrolabs.com>`_. Useful for finding inspiration and example implementations.
- `Deploying LLMs in Production <./additional_resources/deploy_llms.html>`_: A collection of best practices and tutorials for deploying LLMs in production.
- `Tracing <./additional_resources/tracing.html>`_: A guide on using tracing in LangChain to visualize the execution of chains and agents.
- `Model Laboratory <./additional_resources/model_laboratory.html>`_: Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so.
@@ -194,6 +196,8 @@ Additional Resources
:hidden:
LangChainHub <https://github.com/hwchase17/langchain-hub>
./additional_resources/deployments.md
./additional_resources/deploy_llms.rst
Gallery <https://github.com/kyrolabs/awesome-langchain>
./additional_resources/tracing.md
./additional_resources/model_laboratory.ipynb

View File

@@ -18,7 +18,7 @@ from langchain import Bedrock
## Text Embedding Models
See a [usage example](../modules/models/text_embedding/examples/bedrock.ipynb).
See a [usage example](../modules/models/text_embedding/examples/amazon_bedrock.ipynb).
```python
from langchain.embeddings import BedrockEmbeddings
```

View File

@@ -0,0 +1,18 @@
# Annoy
> [Annoy](https://github.com/spotify/annoy) (`Approximate Nearest Neighbors Oh Yeah`) is a C++ library with Python bindings to search for points in space that are close to a given query point. It also creates large read-only file-based data structures that are mmapped into memory so that many processes may share the same data.
## Installation and Setup
```bash
pip install annoy
```
## Vectorstore
See a [usage example](../modules/indexes/vectorstores/examples/annoy.ipynb).
```python
from langchain.vectorstores import Annoy
```
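For orientation, here is a minimal sketch of building and querying an in-memory Annoy index from raw texts. It assumes `OPENAI_API_KEY` is set for the embeddings; any embedding model works.
```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Annoy

# Build an in-memory Annoy index from a few example texts (embedding model is interchangeable).
embeddings = OpenAIEmbeddings()
db = Annoy.from_texts(["pizza is great", "I love salad"], embeddings)
db.similarity_search("food", k=1)
```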

View File

@@ -0,0 +1,26 @@
# Anthropic
>[Anthropic](https://en.wikipedia.org/wiki/Anthropic) is an American artificial intelligence (AI) startup and
> public-benefit corporation, founded by former members of OpenAI. `Anthropic` specializes in developing general AI
> systems and language models, with a company ethos of responsible AI usage.
> `Anthropic` develops a chatbot, named `Claude`. Similar to `ChatGPT`, `Claude` uses a messaging
> interface where users can submit questions or requests and receive highly detailed and relevant responses.
## Installation and Setup
```bash
pip install anthropic
```
See the [setup documentation](https://console.anthropic.com/docs/access).
## Chat Models
See a [usage example](../modules/models/chat/integrations/anthropic.ipynb)
```python
from langchain.chat_models import ChatAnthropic
```
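As a quick illustration, a hedged sketch of calling `Claude` through the chat model wrapper; it assumes `ANTHROPIC_API_KEY` is set in the environment.
```python
from langchain.chat_models import ChatAnthropic
from langchain.schema import HumanMessage

# The wrapper reads ANTHROPIC_API_KEY from the environment.
chat = ChatAnthropic()
# Send a single human message and receive the model's reply as an AIMessage.
chat([HumanMessage(content="Hello, Claude!")])
```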

View File

@@ -26,3 +26,11 @@ See a [usage example](../modules/indexes/document_loaders/examples/arxiv.ipynb).
```python
from langchain.document_loaders import ArxivLoader
```
## Retriever
See a [usage example](../modules/indexes/retrievers/examples/arxiv.ipynb).
```python
from langchain.retrievers import ArxivRetriever
```

View File

@@ -0,0 +1,24 @@
# Azure Cognitive Search
>[Azure Cognitive Search](https://learn.microsoft.com/en-us/azure/search/search-what-is-azure-search) (formerly known as `Azure Search`) is a cloud search service that gives developers infrastructure, APIs, and tools for building a rich search experience over private, heterogeneous content in web, mobile, and enterprise applications.
>Search is foundational to any app that surfaces text to users, where common scenarios include catalog or document search, online retail apps, or data exploration over proprietary content. When you create a search service, you'll work with the following capabilities:
>- A search engine for full text search over a search index containing user-owned content
>- Rich indexing, with lexical analysis and optional AI enrichment for content extraction and transformation
>- Rich query syntax for text search, fuzzy search, autocomplete, geo-search and more
>- Programmability through REST APIs and client libraries in Azure SDKs
>- Azure integration at the data layer, machine learning layer, and AI (Cognitive Services)
## Installation and Setup
See [set up instructions](https://learn.microsoft.com/en-us/azure/search/search-create-service-portal).
## Retriever
See a [usage example](../modules/indexes/retrievers/examples/azure_cognitive_search.ipynb).
```python
from langchain.retrievers import AzureCognitiveSearchRetriever
```
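A minimal sketch of wiring up the retriever. The environment variable names and the `content_key` value follow the conventions of the linked notebook; treat them as assumptions to verify against your index schema.
```python
import os

# Assumed environment variable names for the search service, index, and key.
os.environ["AZURE_COGNITIVE_SEARCH_SERVICE_NAME"] = "<your-service-name>"
os.environ["AZURE_COGNITIVE_SEARCH_INDEX_NAME"] = "<your-index-name>"
os.environ["AZURE_COGNITIVE_SEARCH_API_KEY"] = "<your-api-key>"

from langchain.retrievers import AzureCognitiveSearchRetriever

# content_key names the index field that holds the document text (an assumption here).
retriever = AzureCognitiveSearchRetriever(content_key="content")
retriever.get_relevant_documents("what is langchain?")
```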

View File

@@ -1,7 +1,8 @@
# Beam
This page covers how to use Beam within LangChain.
It is broken into two parts: installation and setup, and then references to specific Beam wrappers.
>[Beam](https://docs.beam.cloud/introduction) makes it easy to run code on GPUs, deploy scalable web APIs,
> schedule cron jobs, and run massively parallel workloads — without managing any infrastructure.
## Installation and Setup
@@ -9,19 +10,19 @@ It is broken into two parts: installation and setup, and then references to spec
- Install the Beam CLI with `curl https://raw.githubusercontent.com/slai-labs/get-beam/main/get-beam.sh -sSfL | sh`
- Register API keys with `beam configure`
- Set environment variables (`BEAM_CLIENT_ID`) and (`BEAM_CLIENT_SECRET`)
- Install the Beam SDK `pip install beam-sdk`
- Install the Beam SDK:
```bash
pip install beam-sdk
```
## Wrappers
## LLM
### LLM
There exists a Beam LLM wrapper, which you can access with
```python
from langchain.llms.beam import Beam
```
## Define your Beam app.
### Example of the Beam app
This is the environment you'll be developing against once you start the app.
It's also used to define the maximum response length from the model.
@@ -44,7 +45,7 @@ llm = Beam(model_name="gpt2",
verbose=False)
```
## Deploy your Beam app
### Deploy the Beam app
Once defined, you can deploy your Beam app by calling your model's `_deploy()` method.
@@ -52,9 +53,9 @@ Once defined, you can deploy your Beam app by calling your model's `_deploy()` m
llm._deploy()
```
## Call your Beam app
### Call the Beam app
Once a beam model is deployed, it can be called by callying your model's `_call()` method.
Once a beam model is deployed, it can be called by calling your model's `_call()` method.
This returns the GPT2 text response to your prompt.
```python

View File

@@ -0,0 +1,23 @@
# Cassandra
>[Cassandra](https://en.wikipedia.org/wiki/Apache_Cassandra) is a free and open-source, distributed, wide-column
> store, NoSQL database management system designed to handle large amounts of data across many commodity servers,
> providing high availability with no single point of failure. `Cassandra` offers support for clusters spanning
> multiple datacenters, with asynchronous masterless replication allowing low latency operations for all clients.
> `Cassandra` was designed to implement a combination of `Amazon's Dynamo` distributed storage and replication
> techniques combined with `Google's Bigtable` data and storage engine model.
## Installation and Setup
```bash
pip install cassandra-driver
```
## Memory
See a [usage example](../modules/memory/examples/cassandra_chat_message_history.ipynb).
```python
from langchain.memory import CassandraChatMessageHistory
```

View File

@@ -1,20 +1,29 @@
# Chroma
This page covers how to use the Chroma ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Chroma wrappers.
>[Chroma](https://docs.trychroma.com/getting-started) is a database for building AI applications with embeddings.
## Installation and Setup
- Install the Python package with `pip install chromadb`
## Wrappers
### VectorStore
```bash
pip install chromadb
```
## VectorStore
There exists a wrapper around Chroma vector databases, allowing you to use it as a vectorstore,
whether for semantic search or example selection.
To import this vectorstore:
```python
from langchain.vectorstores import Chroma
```
For a more detailed walkthrough of the Chroma wrapper, see [this notebook](../modules/indexes/vectorstores/getting_started.ipynb)
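For example, a minimal sketch of building an ephemeral, in-memory Chroma collection from raw texts (assumes `OPENAI_API_KEY` is set; any embedding model works):
```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma

# Build an in-memory collection from a few texts and run a similarity query.
db = Chroma.from_texts(["harrison worked at kensho"], OpenAIEmbeddings())
db.similarity_search("Where did harrison work?", k=1)
```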
## Retriever
See a [usage example](../modules/indexes/retrievers/examples/chroma_self_query.ipynb).
```python
from langchain.retrievers import SelfQueryRetriever
```

View File

@@ -0,0 +1,52 @@
# ClickHouse
This page covers how to use ClickHouse Vector Search within LangChain.
[ClickHouse](https://clickhouse.com) is an open source real-time OLAP database with full SQL support and a wide range of functions to assist users in writing analytical queries. Some of these functions and data structures perform distance operations between vectors, enabling ClickHouse to be used as a vector database.
Due to the fully parallelized query pipeline, ClickHouse can process vector search operations very quickly, especially when performing exact matching through a linear scan over all rows, delivering processing speed comparable to dedicated vector databases.
High compression levels, tunable through custom compression codecs, enable very large datasets to be stored and queried. ClickHouse is not memory-bound, allowing multi-TB datasets containing embeddings to be queried.
The capabilities for computing the distance between two vectors are just another SQL function and can be effectively combined with more traditional SQL filtering and aggregation capabilities. This allows vectors to be stored and queried alongside metadata, and even rich text, enabling a broad array of use cases and applications.
Finally, experimental ClickHouse capabilities like [Approximate Nearest Neighbour (ANN) indices](https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/annindexes) support faster approximate matching of vectors and promise to further enhance the vector matching capabilities of ClickHouse.
## Installation
- Install clickhouse server by [binary](https://clickhouse.com/docs/en/install) or [docker image](https://hub.docker.com/r/clickhouse/clickhouse-server/)
- Install the Python SDK with `pip install clickhouse-connect`
### Configure clickhouse vector index
Customize `ClickhouseSettings` object with parameters
```python
from langchain.vectorstores import ClickHouse, ClickhouseSettings
config = ClickhouseSettings(host="<clickhouse-server-host>", port=8123, ...)
index = Clickhouse(embedding_function, config)
index.add_documents(...)
```
## Wrappers
Supported functions:
- `add_texts`
- `add_documents`
- `from_texts`
- `from_documents`
- `similarity_search`
- `asimilarity_search`
- `similarity_search_by_vector`
- `asimilarity_search_by_vector`
- `similarity_search_with_relevance_scores`
### VectorStore
There exists a wrapper around the open source ClickHouse database, allowing you to use it as a vectorstore,
whether for semantic search or similar example retrieval.
To import this vectorstore:
```python
from langchain.vectorstores import Clickhouse
```
For a more detailed walkthrough of the ClickHouse wrapper, see [this notebook](../modules/indexes/vectorstores/examples/clickhouse.ipynb)

View File

@@ -1,25 +1,38 @@
# Cohere
This page covers how to use the Cohere ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Cohere wrappers.
>[Cohere](https://cohere.ai/about) is a Canadian startup that provides natural language processing models
> that help companies improve human-machine interactions.
## Installation and Setup
- Install the Python SDK with `pip install cohere`
- Get an Cohere api key and set it as an environment variable (`COHERE_API_KEY`)
- Install the Python SDK :
```bash
pip install cohere
```
## Wrappers
Get a [Cohere api key](https://dashboard.cohere.ai/) and set it as an environment variable (`COHERE_API_KEY`)
### LLM
## LLM
There exists an Cohere LLM wrapper, which you can access with
See a [usage example](../modules/models/llms/integrations/cohere.ipynb).
```python
from langchain.llms import Cohere
```
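A short usage sketch; it assumes `COHERE_API_KEY` is set in the environment.
```python
from langchain.llms import Cohere

# The wrapper reads COHERE_API_KEY from the environment.
llm = Cohere(temperature=0)
llm("Suggest a name for a coffee shop run by cats.")
```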
### Embeddings
## Text Embedding Model
There exists an Cohere Embeddings wrapper, which you can access with
There exists a Cohere Embedding model, which you can access with
```python
from langchain.embeddings import CohereEmbeddings
```
For a more detailed walkthrough of this, see [this notebook](../modules/models/text_embedding/examples/cohere.ipynb)
## Retriever
See a [usage example](../modules/indexes/retrievers/examples/cohere-reranker.ipynb).
```python
from langchain.retrievers.document_compressors import CohereRerank
```

View File

@@ -1,25 +1,17 @@
# Databerry
This page covers how to use the [Databerry](https://databerry.ai) within LangChain.
>[Databerry](https://databerry.ai) is an [open source](https://github.com/gmpetrov/databerry) document retrieval platform that helps to connect your personal data with Large Language Models.
## What is Databerry?
Databerry is an [open source](https://github.com/gmpetrov/databerry) document retrievial platform that helps to connect your personal data with Large Language Models.
## Installation and Setup
![Databerry](../_static/DataberryDashboard.png)
We need to sign up for Databerry, create a datastore, add some data, and get the datastore API endpoint URL.
We need the [API Key](https://docs.databerry.ai/api-reference/authentication).
## Quick start
## Retriever
Retrieving documents stored in Databerry from LangChain is very easy!
See a [usage example](../modules/indexes/retrievers/examples/databerry.ipynb).
```python
from langchain.retrievers import DataberryRetriever
retriever = DataberryRetriever(
datastore_url="https://api.databerry.ai/query/clg1xg2h80000l708dymr0fxc",
# api_key="DATABERRY_API_KEY", # optional if datastore is public
# top_k=10 # optional
)
docs = retriever.get_relevant_documents("What's Databerry?")
```

View File

@@ -0,0 +1,36 @@
Databricks
==========
The [Databricks](https://www.databricks.com/) Lakehouse Platform unifies data, analytics, and AI on one platform.
Databricks embraces the LangChain ecosystem in various ways:
1. Databricks connector for the SQLDatabase Chain: SQLDatabase.from_databricks() provides an easy way to query your data on Databricks through LangChain
2. Databricks-managed MLflow integrates with LangChain: Tracking and serving LangChain applications with fewer steps
3. Databricks as an LLM provider: Deploy your fine-tuned LLMs on Databricks via serving endpoints or cluster driver proxy apps, and query it as langchain.llms.Databricks
4. Databricks Dolly: Databricks open-sourced Dolly which allows for commercial use, and can be accessed through the HuggingFace Hub
Databricks connector for the SQLDatabase Chain
----------------------------------------------
You can connect to [Databricks runtimes](https://docs.databricks.com/runtime/index.html) and [Databricks SQL](https://www.databricks.com/product/databricks-sql) using the SQLDatabase wrapper of LangChain. See the notebook [Connect to Databricks](./databricks/databricks.html) for details.
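For context, a hedged sketch of the connector in use. The `catalog` and `schema` values are placeholders; inside a Databricks notebook, the optional connection parameters described below (host, API token) are picked up automatically.
```python
from langchain import OpenAI, SQLDatabase, SQLDatabaseChain

# Placeholder catalog/schema names; inside a Databricks notebook, host and
# api_token default to the current workspace and a temporary token.
db = SQLDatabase.from_databricks(catalog="samples", schema="nyctaxi")
chain = SQLDatabaseChain.from_llm(OpenAI(temperature=0), db, verbose=True)
chain.run("What is the average trip distance?")
```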
Databricks-managed MLflow integrates with LangChain
---------------------------------------------------
MLflow is an open source platform to manage the ML lifecycle, including experimentation, reproducibility, deployment, and a central model registry. See the notebook [MLflow Callback Handler](./mlflow_tracking.ipynb) for details about MLflow's integration with LangChain.
Databricks provides a fully managed and hosted version of MLflow integrated with enterprise security features, high availability, and other Databricks workspace features such as experiment and run management and notebook revision capture. MLflow on Databricks offers an integrated experience for tracking and securing machine learning model training runs and running machine learning projects. See [MLflow guide](https://docs.databricks.com/mlflow/index.html) for more details.
Databricks-managed MLflow makes it more convenient to develop LangChain applications on Databricks. For MLflow tracking, you don't need to set the tracking uri. For MLflow Model Serving, you can save LangChain Chains in the MLflow langchain flavor, and then register and serve the Chain with a few clicks on Databricks, with credentials securely managed by MLflow Model Serving.
Databricks as an LLM provider
-----------------------------
The notebook [Wrap Databricks endpoints as LLMs](../modules/models/llms/integrations/databricks.html) illustrates the method to wrap Databricks endpoints as LLMs in LangChain. It supports two types of endpoints: the serving endpoint, which is recommended for both production and development, and the cluster driver proxy app, which is recommended for interactive development.
Databricks endpoints support Dolly, but are also great for hosting models like MPT-7B or any other models from the HuggingFace ecosystem. Databricks endpoints can also be used with proprietary models like OpenAI to provide a governance layer for enterprises.
Databricks Dolly
----------------
Databricks Dolly is an instruction-following large language model trained on the Databricks machine learning platform that is licensed for commercial use. The model is available on Hugging Face Hub as databricks/dolly-v2-12b. See the notebook [HuggingFace Hub](../modules/models/llms/integrations/huggingface_hub.html) for instructions to access it through the HuggingFace Hub integration with LangChain.

View File

@@ -58,7 +58,7 @@
"### Optional Parameters\n",
"There following parameters are optional. When executing the method in a Databricks notebook, you don't need to provide them in most of the cases.\n",
"* `host`: The Databricks workspace hostname, excluding 'https://' part. Defaults to 'DATABRICKS_HOST' environment variable or current workspace if in a Databricks notebook.\n",
"* `api_token`: The Databricks personal access token for accessing the Databricks SQL warehouse or the cluster. Defaults to 'DATABRICKS_API_TOKEN' environment variable or a temporary one is generated if in a Databricks notebook.\n",
"* `api_token`: The Databricks personal access token for accessing the Databricks SQL warehouse or the cluster. Defaults to 'DATABRICKS_TOKEN' environment variable or a temporary one is generated if in a Databricks notebook.\n",
"* `warehouse_id`: The warehouse ID in the Databricks SQL.\n",
"* `cluster_id`: The cluster ID in the Databricks Runtime. If running in a Databricks notebook and both 'warehouse_id' and 'cluster_id' are None, it uses the ID of the cluster the notebook is attached to.\n",
"* `engine_args`: The arguments to be used when connecting Databricks.\n",

View File

@@ -0,0 +1,24 @@
# Elasticsearch
>[Elasticsearch](https://www.elastic.co/elasticsearch/) is a distributed, RESTful search and analytics engine.
> It provides a distributed, multi-tenant-capable full-text search engine with an HTTP web interface and schema-free
> JSON documents.
## Installation and Setup
```bash
pip install elasticsearch
```
## Retriever
>In information retrieval, [Okapi BM25](https://en.wikipedia.org/wiki/Okapi_BM25) (BM is an abbreviation of best matching) is a ranking function used by search engines to estimate the relevance of documents to a given search query. It is based on the probabilistic retrieval framework developed in the 1970s and 1980s by Stephen E. Robertson, Karen Spärck Jones, and others.
>The name of the actual ranking function is BM25. The fuller name, Okapi BM25, includes the name of the first system to use it, which was the Okapi information retrieval system, implemented at London's City University in the 1980s and 1990s. BM25 and its newer variants, e.g. BM25F (a version of BM25 that can take document structure and anchor text into account), represent TF-IDF-like retrieval functions used in document retrieval.
See a [usage example](../modules/indexes/retrievers/examples/elastic_search_bm25.ipynb).
```python
from langchain.retrievers import ElasticSearchBM25Retriever
```
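A minimal sketch, assuming a local Elasticsearch instance at the default port; the index name is a throwaway placeholder.
```python
from langchain.retrievers import ElasticSearchBM25Retriever

# Create a fresh index on a locally running Elasticsearch instance (assumed URL).
retriever = ElasticSearchBM25Retriever.create("http://localhost:9200", "langchain-demo-index")
retriever.add_texts(["foo", "foo bar", "hello world"])
retriever.get_relevant_documents("foo")
```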

View File

@@ -0,0 +1,24 @@
# Google Vertex AI
>[Vertex AI](https://cloud.google.com/vertex-ai/docs/start/introduction-unified-platform) is a machine learning (ML)
> platform that lets you train and deploy ML models and AI applications.
> `Vertex AI` combines data engineering, data science, and ML engineering workflows, enabling your teams to
> collaborate using a common toolset.
## Installation and Setup
```bash
pip install google-cloud-aiplatform
```
See the [setup instructions](../modules/models/chat/integrations/google_vertex_ai_palm.ipynb)
## Chat Models
See a [usage example](../modules/models/chat/integrations/google_vertex_ai_palm.ipynb)
```python
from langchain.chat_models import ChatVertexAI
```
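A short usage sketch; it assumes Google Cloud application-default credentials are already configured.
```python
from langchain.chat_models import ChatVertexAI
from langchain.schema import HumanMessage

# Assumes `gcloud auth application-default login` (or equivalent) has been run.
chat = ChatVertexAI()
chat([HumanMessage(content="Translate 'good morning' to French.")])
```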

View File

@@ -47,7 +47,7 @@ To use a the wrapper for a model hosted on Hugging Face Hub:
```python
from langchain.embeddings import HuggingFaceHubEmbeddings
```
For a more detailed walkthrough of this, see [this notebook](../modules/models/text_embedding/examples/huggingfacehub.ipynb)
For a more detailed walkthrough of this, see [this notebook](../modules/models/text_embedding/examples/huggingface_hub.ipynb)
### Tokenizer

View File

@@ -1,20 +1,21 @@
# Momento
>[Momento Cache](https://docs.momentohq.com/) is the world's first truly serverless caching service. It provides instant elasticity, scale-to-zero
> capability, and blazing-fast performance.
> With Momento Cache, you grab the SDK, you get an endpoint, input a few lines into your code, and you're off and running.
This page covers how to use the [Momento](https://gomomento.com) ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Momento wrappers.
## Installation and Setup
- Sign up for a free account [here](https://docs.momentohq.com/getting-started) and get an auth token
- Install the Momento Python SDK with `pip install momento`
## Wrappers
### Cache
## Cache
The Cache wrapper allows for [Momento](https://gomomento.com) to be used as a serverless, distributed, low-latency cache for LLM prompts and responses.
#### Standard Cache
The standard cache is the go-to use case for [Momento](https://gomomento.com) users in any environment.
@@ -44,10 +45,10 @@ cache_name = "langchain"
langchain.llm_cache = MomentoCache(cache_client, cache_name)
```
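For context, a hedged sketch of constructing the `cache_client` referenced above, following the Momento Python SDK; the configuration preset and TTL are example choices, and `MOMENTO_AUTH_TOKEN` is assumed to be set.
```python
from datetime import timedelta

import langchain
from momento import CacheClient, Configurations, CredentialProvider
from langchain.cache import MomentoCache

# Example choices: the Laptop configuration preset and a one-day default TTL.
cache_client = CacheClient(
    Configurations.Laptop.v1(),
    CredentialProvider.from_environment_variable("MOMENTO_AUTH_TOKEN"),
    default_ttl=timedelta(days=1),
)
langchain.llm_cache = MomentoCache(cache_client, "langchain")
```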
### Memory
## Memory
Momento can be used as a distributed memory store for LLMs.
#### Chat Message History Memory
### Chat Message History Memory
See [this notebook](../modules/memory/examples/momento_chat_message_history.ipynb) for a walkthrough of how to use Momento as a memory store for chat message history.

View File

@@ -35,7 +35,6 @@ from langchain.llms import AzureOpenAI
For a more detailed walkthrough of the `Azure` wrapper, see [this notebook](../modules/models/llms/integrations/azure_openai_example.ipynb)
## Text Embedding Model
```python
@@ -44,6 +43,14 @@ from langchain.embeddings import OpenAIEmbeddings
For a more detailed walkthrough of this, see [this notebook](../modules/models/text_embedding/examples/openai.ipynb)
## Chat Model
```python
from langchain.chat_models import ChatOpenAI
```
For a more detailed walkthrough of this, see [this notebook](../modules/models/chat/integrations/openai.ipynb)
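For instance, a minimal sketch (assumes `OPENAI_API_KEY` is set):
```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

chat = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
chat([HumanMessage(content="Hello!")])
```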
## Tokenizer
There are several places you can use the `tiktoken` tokenizer. By default, it is used to count tokens
@@ -71,3 +78,11 @@ See a [usage example](../modules/indexes/document_loaders/examples/chatgpt_loade
```python
from langchain.document_loaders.chatgpt import ChatGPTLoader
```
## Retriever
See a [usage example](../modules/indexes/retrievers/examples/chatgpt-plugin.ipynb).
```python
from langchain.retrievers import ChatGPTPluginRetriever
```

View File

@@ -4,17 +4,19 @@ This page covers how to use the Pinecone ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Pinecone wrappers.
## Installation and Setup
- Install the Python SDK with `pip install pinecone-client`
## Wrappers
Install the Python SDK:
```bash
pip install pinecone-client
```
### VectorStore
## Vectorstore
There exists a wrapper around Pinecone indexes, allowing you to use it as a vectorstore,
whether for semantic search or example selection.
To import this vectorstore:
```python
from langchain.vectorstores import Pinecone
```
For a more detailed walkthrough of the Pinecone wrapper, see [this notebook](../modules/indexes/vectorstores/examples/pinecone.ipynb)
For a more detailed walkthrough of the Pinecone vectorstore, see [this notebook](../modules/indexes/vectorstores/examples/pinecone.ipynb)
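A hedged sketch of populating a vectorstore backed by an existing Pinecone index; the API key, environment, and index name are placeholders.
```python
import pinecone
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Pinecone

# Placeholder credentials; assumes the named index already exists in your project.
pinecone.init(api_key="<PINECONE_API_KEY>", environment="<PINECONE_ENVIRONMENT>")
docsearch = Pinecone.from_texts(
    ["hello pinecone"], OpenAIEmbeddings(), index_name="langchain-demo"
)
docsearch.similarity_search("hello")
```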

View File

@@ -1,19 +1,23 @@
# Prediction Guard
This page covers how to use the Prediction Guard ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Prediction Guard wrappers.
>[Prediction Guard](https://docs.predictionguard.com/) gives quick and easy access to state-of-the-art open and closed access LLMs, without needing to spend days and weeks figuring out all of the implementation details, managing a bunch of different API specs, and setting up the infrastructure for model deployments.
## Installation and Setup
- Install the Python SDK with `pip install predictionguard`
- Install the Python SDK:
```bash
pip install predictionguard
```
- Get a Prediction Guard access token (as described [here](https://docs.predictionguard.com/)) and set it as an environment variable (`PREDICTIONGUARD_TOKEN`)
## LLM Wrapper
## LLM
There exists a Prediction Guard LLM wrapper, which you can access with
```python
from langchain.llms import PredictionGuard
```
### Example
You can provide the name of the Prediction Guard model as an argument when initializing the LLM:
```python
pgllm = PredictionGuard(model="MPT-7B-Instruct")
@@ -24,14 +28,12 @@ You can also provide your access token directly as an argument:
pgllm = PredictionGuard(model="MPT-7B-Instruct", token="<your access token>")
```
Finally, you can provide an "output" argument that is used to structure/ control the output of the LLM:
Also, you can provide an "output" argument that is used to structure/control the output of the LLM:
```python
pgllm = PredictionGuard(model="MPT-7B-Instruct", output={"type": "boolean"})
```
## Example usage
Basic usage of the controlled or guarded LLM wrapper:
#### Basic usage of the controlled or guarded LLM:
```python
import os
@@ -72,7 +74,7 @@ pgllm = PredictionGuard(model="MPT-7B-Instruct",
pgllm(prompt.format(query="What kind of post is this?"))
```
Basic LLM Chaining with the Prediction Guard wrapper:
#### Basic LLM Chaining with the Prediction Guard:
```python
import os

View File

@@ -1,31 +1,35 @@
# PromptLayer
This page covers how to use [PromptLayer](https://www.promptlayer.com) within LangChain.
It is broken into two parts: installation and setup, and then references to specific PromptLayer wrappers.
>[PromptLayer](https://docs.promptlayer.com/what-is-promptlayer/wxpF9EZkUwvdkwvVE9XEvC/how-promptlayer-works/dvgGSxNe6nB1jj8mUVbG8r)
> is a devtool that allows you to track, manage, and share your GPT prompt engineering.
> It acts as a middleware between your code and OpenAI's python library, recording all your API requests
> and saving relevant metadata for easy exploration and search in the [PromptLayer](https://www.promptlayer.com) dashboard.
## Installation and Setup
If you want to work with PromptLayer:
- Install the promptlayer python library `pip install promptlayer`
- Install the `promptlayer` python library
```bash
pip install promptlayer
```
- Create a PromptLayer account
- Create an api token and set it as an environment variable (`PROMPTLAYER_API_KEY`)
## Wrappers
### LLM
## LLM
There exists an PromptLayer OpenAI LLM wrapper, which you can access with
```python
from langchain.llms import PromptLayerOpenAI
```
To tag your requests, use the argument `pl_tags` when instanializing the LLM
### Example
To tag your requests, use the argument `pl_tags` when instantiating the LLM
```python
from langchain.llms import PromptLayerOpenAI
llm = PromptLayerOpenAI(pl_tags=["langchain-requests", "chatbot"])
```
To get the PromptLayer request id, use the argument `return_pl_id` when instanializing the LLM
To get the PromptLayer request id, use the argument `return_pl_id` when instantiating the LLM
```python
from langchain.llms import PromptLayerOpenAI
llm = PromptLayerOpenAI(return_pl_id=True)
@@ -42,8 +46,14 @@ You can use the PromptLayer request ID to add a prompt, score, or other metadata
This LLM is identical to the [OpenAI LLM](./openai.md), except that
- all your requests will be logged to your PromptLayer account
- you can add `pl_tags` when instantializing to tag your requests on PromptLayer
- you can add `return_pl_id` when instantializing to return a PromptLayer request id to use [while tracking requests](https://magniv.notion.site/Track-4deee1b1f7a34c1680d085f82567dab9).
- you can add `pl_tags` when instantiating to tag your requests on PromptLayer
- you can add `return_pl_id` when instantiating to return a PromptLayer request id to use [while tracking requests](https://magniv.notion.site/Track-4deee1b1f7a34c1680d085f82567dab9).
## Chat Model
```python
from langchain.chat_models import PromptLayerChatOpenAI
```
See a [usage example](../modules/models/chat/integrations/promptlayer_chatopenai.ipynb).
PromptLayer also provides native wrappers for [`PromptLayerChatOpenAI`](../modules/models/chat/integrations/promptlayer_chatopenai.ipynb) and `PromptLayerOpenAIChat`.
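A brief sketch; it assumes both `PROMPTLAYER_API_KEY` and `OPENAI_API_KEY` are set in the environment.
```python
from langchain.chat_models import PromptLayerChatOpenAI
from langchain.schema import HumanMessage

# Requests are logged to your PromptLayer account under the given tags.
chat = PromptLayerChatOpenAI(pl_tags=["langchain-docs"])
chat([HumanMessage(content="Hi there")])
```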

View File

@@ -0,0 +1,233 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Ray Serve\n",
"\n",
"[Ray Serve](https://docs.ray.io/en/latest/serve/index.html) is a scalable model serving library for building online inference APIs. Serve is particularly well suited for system composition, enabling you to build a complex inference service consisting of multiple chains and business logic all in Python code. "
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Goal of this notebook\n",
"This notebook shows a simple example of how to deploy an OpenAI chain into production. You can extend it to deploy your own self-hosted models where you can easily define amount of hardware resources (GPUs and CPUs) needed to run your model in production efficiently. Read more about available options including autoscaling in the Ray Serve [documentation](https://docs.ray.io/en/latest/serve/getting_started.html).\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup Ray Serve\n",
"Install ray with `pip install ray[serve]`. "
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## General Skeleton"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"The general skeleton for deploying a service is the following:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# 0: Import ray serve and request from starlette\n",
"from ray import serve\n",
"from starlette.requests import Request\n",
"\n",
"# 1: Define a Ray Serve deployment.\n",
"@serve.deployment\n",
"class LLMServe:\n",
"\n",
" def __init__(self) -> None:\n",
" # All the initialization code goes here\n",
" pass\n",
"\n",
" async def __call__(self, request: Request) -> str:\n",
" # You can parse the request here\n",
" # and return a response\n",
" return \"Hello World\"\n",
"\n",
"# 2: Bind the model to deployment\n",
"deployment = LLMServe.bind()\n",
"\n",
"# 3: Run the deployment\n",
"serve.api.run(deployment)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Shutdown the deployment\n",
"serve.api.shutdown()"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Example of deploying and OpenAI chain with custom prompts"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Get an OpenAI API key from [here](https://platform.openai.com/account/api-keys). By running the following code, you will be asked to provide your API key."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.llms import OpenAI\n",
"from langchain import PromptTemplate, LLMChain"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from getpass import getpass\n",
"OPENAI_API_KEY = getpass()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"@serve.deployment\n",
"class DeployLLM:\n",
"\n",
" def __init__(self):\n",
" # We initialize the LLM, template and the chain here\n",
" llm = OpenAI(openai_api_key=OPENAI_API_KEY)\n",
" template = \"Question: {question}\\n\\nAnswer: Let's think step by step.\"\n",
" prompt = PromptTemplate(template=template, input_variables=[\"question\"])\n",
" self.chain = LLMChain(llm=llm, prompt=prompt)\n",
"\n",
" def _run_chain(self, text: str):\n",
" return self.chain(text)\n",
"\n",
" async def __call__(self, request: Request):\n",
" # 1. Parse the request\n",
" text = request.query_params[\"text\"]\n",
" # 2. Run the chain\n",
" resp = self._run_chain(text)\n",
" # 3. Return the response\n",
" return resp[\"text\"]"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we can bind the deployment."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Bind the model to deployment\n",
"deployment = DeployLLM.bind()"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"We can assign the port number and host when we want to run the deployment. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Example port number\n",
"PORT_NUMBER = 8282\n",
"# Run the deployment\n",
"serve.api.run(deployment, port=PORT_NUMBER)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Now that service is deployed on port `localhost:8282` we can send a post request to get the results back."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import requests\n",
"\n",
"text = \"What NFL team won the Super Bowl in the year Justin Beiber was born?\"\n",
"response = requests.post(f'http://localhost:{PORT_NUMBER}/?text={text}')\n",
"print(response.content.decode())"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "ray",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}

17
docs/integrations/roam.md Normal file
View File

@@ -0,0 +1,17 @@
# Roam
>[ROAM](https://roamresearch.com/) is a note-taking tool for networked thought, designed to create a personal knowledge base.
## Installation and Setup
There isn't any special setup for it.
## Document Loader
See a [usage example](../modules/indexes/document_loaders/examples/roam.ipynb).
```python
from langchain.document_loaders import RoamLoader
```

View File

@@ -0,0 +1,43 @@
# Shale Protocol
[Shale Protocol](https://shaleprotocol.com) provides production-ready inference APIs for open LLMs. It's a Plug & Play API as it's hosted on a highly scalable GPU cloud infrastructure.
Our free tier supports up to 1K daily requests per key as we want to eliminate the barrier for anyone to start building genAI apps with LLMs.
With Shale Protocol, developers/researchers can create apps and explore the capabilities of open LLMs at no cost.
This page covers how Shale-Serve API can be incorporated with LangChain.
As of June 2023, the API supports Vicuna-13B by default. We are going to support more LLMs such as Falcon-40B in future releases.
## How to
### 1. Find the link to our Discord on https://shaleprotocol.com. Generate an API key through the "Shale Bot" on our Discord. No credit card is required and there are no free trials. It's a forever-free tier with a limit of 1K requests per day per API key.
### 2. Use https://shale.live/v1 as an OpenAI API drop-in replacement
For example
```python
from langchain.llms import OpenAI
from langchain import PromptTemplate, LLMChain
import os
os.environ['OPENAI_API_BASE'] = "https://shale.live/v1"
os.environ['OPENAI_API_KEY'] = "ENTER YOUR API KEY"
llm = OpenAI()
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
```

View File

@@ -0,0 +1,17 @@
# Slack
>[Slack](https://slack.com/) is an instant messaging program.
## Installation and Setup
There isn't any special setup for it.
## Document Loader
See a [usage example](../modules/indexes/document_loaders/examples/slack.ipynb).
```python
from langchain.document_loaders import SlackDirectoryLoader
```
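A minimal sketch; the zip path is a placeholder for a Slack workspace export archive.
```python
from langchain.document_loaders import SlackDirectoryLoader

# Placeholder path to a Slack export archive.
loader = SlackDirectoryLoader("slack_export.zip")
docs = loader.load()
```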

View File

@@ -0,0 +1,20 @@
# spaCy
>[spaCy](https://spacy.io/) is an open-source software library for advanced natural language processing, written in the programming languages Python and Cython.
## Installation and Setup
```bash
pip install spacy
```
## Text Splitter
See a [usage example](../modules/indexes/text_splitters/examples/spacy.ipynb).
```python
from langchain.text_splitter import SpacyTextSplitter
```
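A short usage sketch:
```python
from langchain.text_splitter import SpacyTextSplitter

# Splits on sentence boundaries detected by spaCy, up to the given chunk size.
splitter = SpacyTextSplitter(chunk_size=1000)
chunks = splitter.split_text("This is one sentence. This is another one.")
```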

View File

@@ -0,0 +1,15 @@
# Spreedly
>[Spreedly](https://docs.spreedly.com/) is a service that allows you to securely store credit cards and use them to transact against any number of payment gateways and third party APIs. It does this by simultaneously providing a card tokenization/vault service as well as a gateway and receiver integration service. Payment methods tokenized by Spreedly are stored at `Spreedly`, allowing you to independently store a card and then pass that card to different end points based on your business requirements.
## Installation and Setup
See [setup instructions](../modules/indexes/document_loaders/examples/spreedly.ipynb).
## Document Loader
See a [usage example](../modules/indexes/document_loaders/examples/spreedly.ipynb).
```python
from langchain.document_loaders import SpreedlyLoader
```

View File

@@ -0,0 +1,16 @@
# Stripe
>[Stripe](https://stripe.com/en-ca) is an Irish-American financial services and software as a service (SaaS) company. It offers payment-processing software and application programming interfaces for e-commerce websites and mobile applications.
## Installation and Setup
See [setup instructions](../modules/indexes/document_loaders/examples/stripe.ipynb).
## Document Loader
See a [usage example](../modules/indexes/document_loaders/examples/stripe.ipynb).
```python
from langchain.document_loaders import StripeLoader
```

View File

@@ -0,0 +1,17 @@
# Telegram
>[Telegram Messenger](https://web.telegram.org/a/) is a globally accessible freemium, cross-platform, encrypted, cloud-based and centralized instant messaging service. The application also provides optional end-to-end encrypted chats and video calling, VoIP, file sharing and several other features.
## Installation and Setup
See [setup instructions](../modules/indexes/document_loaders/examples/telegram.ipynb).
## Document Loader
See a [usage example](../modules/indexes/document_loaders/examples/telegram.ipynb).
```python
from langchain.document_loaders import TelegramChatFileLoader
from langchain.document_loaders import TelegramChatApiLoader
```

View File

@@ -0,0 +1,22 @@
# TensorFlow Hub
>[TensorFlow Hub](https://www.tensorflow.org/hub) is a repository of trained machine learning models ready for fine-tuning and deployable anywhere.
>[TensorFlow Hub](https://tfhub.dev/) lets you search and discover hundreds of trained, ready-to-deploy machine learning models in one place.
## Installation and Setup
```bash
pip install tensorflow-hub
pip install tensorflow_text
```
## Text Embedding Models
See a [usage example](../modules/models/text_embedding/examples/tensorflowhub.ipynb)
```python
from langchain.embeddings import TensorflowHubEmbeddings
```
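A brief usage sketch with the default hub model:
```python
from langchain.embeddings import TensorflowHubEmbeddings

# Downloads the default TensorFlow Hub embedding model on first use.
embeddings = TensorflowHubEmbeddings()
vector = embeddings.embed_query("hello world")
```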

View File

@@ -0,0 +1,16 @@
# 2Markdown
>The [2markdown](https://2markdown.com/) service transforms website content into structured markdown files.
## Installation and Setup
We need the `API key`. See [instructions how to get it](https://2markdown.com/login).
## Document Loader
See a [usage example](../modules/indexes/document_loaders/examples/tomarkdown.ipynb).
```python
from langchain.document_loaders import ToMarkdownLoader
```

View File

@@ -0,0 +1,22 @@
# Trello
>[Trello](https://www.atlassian.com/software/trello) is a web-based project management and collaboration tool that allows individuals and teams to organize and track their tasks and projects. It provides a visual interface known as a "board" where users can create lists and cards to represent their tasks and activities.
>The TrelloLoader allows us to load cards from a `Trello` board.
## Installation and Setup
```bash
pip install py-trello beautifulsoup4
```
See [setup instructions](../modules/indexes/document_loaders/examples/trello.ipynb).
## Document Loader
See a [usage example](../modules/indexes/document_loaders/examples/trello.ipynb).
```python
from langchain.document_loaders import TrelloLoader
```
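A hedged sketch; the board name is a placeholder, and credentials are assumed to be available as `TRELLO_API_KEY`/`TRELLO_TOKEN` environment variables.
```python
from langchain.document_loaders import TrelloLoader

# Placeholder board name; card_filter limits loading to open cards.
loader = TrelloLoader.from_credentials("My Board", card_filter="open")
docs = loader.load()
```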

View File

@@ -0,0 +1,21 @@
# Twitter
>[Twitter](https://twitter.com/) is an online social media and social networking service.
## Installation and Setup
```bash
pip install tweepy
```
We must initialize the loader with the `Twitter API` token and the Twitter `username`.
## Document Loader
See a [usage example](../modules/indexes/document_loaders/examples/twitter.ipynb).
```python
from langchain.document_loaders import TwitterTweetLoader
```
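A hedged sketch; the bearer token and user handle are placeholders.
```python
from langchain.document_loaders import TwitterTweetLoader

# Placeholder bearer token and user handle.
loader = TwitterTweetLoader.from_bearer_token(
    oauth2_bearer_token="<BEARER_TOKEN>",
    twitter_users=["elonmusk"],
    number_tweets=10,
)
docs = loader.load()
```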

View File

@@ -0,0 +1,21 @@
# Vespa
>[Vespa](https://vespa.ai/) is a fully featured search engine and vector database.
> It supports vector search (ANN), lexical search, and search in structured data, all in the same query.
## Installation and Setup
```bash
pip install pyvespa
```
## Retriever
See a [usage example](../modules/indexes/retrievers/examples/vespa.ipynb).
```python
from langchain.retrievers import VespaRetriever
```
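A minimal sketch, assuming a running `Vespa` application reachable via `pyvespa` and a query body matching your schema (the endpoint, field names, and ranking profile below are illustrative; see the linked notebook for a complete setup):
```python
from vespa.application import Vespa
from langchain.retrievers import VespaRetriever

# Hypothetical endpoint and query; adapt to your Vespa application schema
app = Vespa(url="https://doc-search.vespa.oath.cloud")
body = {
    "yql": "select content from paragraph where userQuery()",
    "hits": 5,
}
retriever = VespaRetriever(app, body, "content")  # "content" is the assumed content field
docs = retriever.get_relevant_documents("what is vespa?")
```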


@@ -0,0 +1,21 @@
# Weather
>[OpenWeatherMap](https://openweathermap.org/) is an open source weather service provider.
## Installation and Setup
```bash
pip install pyowm
```
We must set up the `OpenWeatherMap API token`.
## Document Loader
See a [usage example](../modules/indexes/document_loaders/examples/weather.ipynb).
```python
from langchain.document_loaders import WeatherDataLoader
```
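A minimal sketch, assuming a list of place names and an OpenWeatherMap API key (values are illustrative):
```python
from langchain.document_loaders import WeatherDataLoader

# Hypothetical places and API key
loader = WeatherDataLoader.from_params(
    ["chennai", "vellore"], openweathermap_api_key="<api_key>"
)
docs = loader.load()
```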


@@ -0,0 +1,18 @@
# WhatsApp
>[WhatsApp](https://www.whatsapp.com/) (also called `WhatsApp Messenger`) is a freeware, cross-platform, centralized instant messaging (IM) and voice-over-IP (VoIP) service. It allows users to send text and voice messages, make voice and video calls, and share images, documents, user locations, and other content.
## Installation and Setup
There isn't any special setup for it.
## Document Loader
See a [usage example](../modules/indexes/document_loaders/examples/whatsapp_chat.ipynb).
```python
from langchain.document_loaders import WhatsAppChatLoader
```
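A minimal sketch, assuming an exported chat file at a hypothetical path:
```python
from langchain.document_loaders import WhatsAppChatLoader

# Hypothetical path to a WhatsApp chat export
loader = WhatsAppChatLoader("example_data/whatsapp_chat.txt")
docs = loader.load()
```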


@@ -0,0 +1,28 @@
# Wikipedia
>[Wikipedia](https://wikipedia.org/) is a multilingual free online encyclopedia written and maintained by a community of volunteers, known as Wikipedians, through open collaboration and using a wiki-based editing system called MediaWiki. `Wikipedia` is the largest and most-read reference work in history.
## Installation and Setup
```bash
pip install wikipedia
```
## Document Loader
See a [usage example](../modules/indexes/document_loaders/examples/wikipedia.ipynb).
```python
from langchain.document_loaders import WikipediaLoader
```
## Retriever
See a [usage example](../modules/indexes/retrievers/examples/wikipedia.ipynb).
```python
from langchain.retrievers import WikipediaRetriever
```
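A minimal sketch of the retriever (the query string is illustrative):
```python
from langchain.retrievers import WikipediaRetriever

retriever = WikipediaRetriever()
docs = retriever.get_relevant_documents(query="large language model")
```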


@@ -0,0 +1,22 @@
# YouTube
>[YouTube](https://www.youtube.com/) is an online video sharing and social media platform created by Google.
> We download the `YouTube` transcripts and video information.
## Installation and Setup
```bash
pip install youtube-transcript-api
pip install pytube
```
See a [usage example](../modules/indexes/document_loaders/examples/youtube_transcript.ipynb).
## Document Loader
See a [usage example](../modules/indexes/document_loaders/examples/youtube_transcript.ipynb).
```python
from langchain.document_loaders import YoutubeLoader
from langchain.document_loaders import GoogleApiYoutubeLoader
```
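A minimal sketch, loading the transcript of a single video (the URL is illustrative):
```python
from langchain.document_loaders import YoutubeLoader

# Hypothetical video URL
loader = YoutubeLoader.from_youtube_url(
    "https://www.youtube.com/watch?v=QsYGlZkevEg", add_video_info=True
)
docs = loader.load()
```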

docs/integrations/zep.md

@@ -0,0 +1,28 @@
# Zep
>[Zep](https://docs.getzep.com/) - A long-term memory store for LLM applications.
>`Zep` stores, summarizes, embeds, indexes, and enriches conversational AI chat histories, and exposes them via simple, low-latency APIs.
>- Long-term memory persistence, with access to historical messages irrespective of your summarization strategy.
>- Auto-summarization of memory messages based on a configurable message window. A series of summaries are stored, providing flexibility for future summarization strategies.
>- Vector search over memories, with messages automatically embedded on creation.
>- Auto-token counting of memories and summaries, allowing finer-grained control over prompt assembly.
>- Python and JavaScript SDKs.
See the `Zep` [project](https://github.com/getzep/zep) on GitHub.
## Installation and Setup
```bash
pip install zep_python
```
## Retriever
See a [usage example](../modules/indexes/retrievers/examples/zep_memorystore.ipynb).
```python
from langchain.retrievers import ZepRetriever
```
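A minimal sketch, assuming a running Zep server and an existing chat session (the URL and session id are illustrative):
```python
from langchain.retrievers import ZepRetriever

# Hypothetical server URL and session id
retriever = ZepRetriever(
    session_id="<session_id>",
    url="http://localhost:8000",
    top_k=5,
)
docs = retriever.get_relevant_documents("deep learning")
```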


@@ -1,19 +1,20 @@
# Zilliz
This page covers how to use the Zilliz Cloud ecosystem within LangChain.
Zilliz uses the Milvus integration.
It is broken into two parts: installation and setup, and then references to specific Milvus wrappers.
>[Zilliz Cloud](https://zilliz.com/doc/quick_start) is a fully managed cloud service for `LF AI Milvus®`.
## Installation and Setup
- Install the Python SDK with `pip install pymilvus`
## Wrappers
### VectorStore
Install the Python SDK:
```bash
pip install pymilvus
```
There exists a wrapper around Zilliz indexes, allowing you to use it as a vectorstore,
## Vectorstore
A wrapper around Zilliz indexes allows you to use it as a vectorstore,
whether for semantic search or example selection.
To import this vectorstore:
```python
from langchain.vectorstores import Milvus
```
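A minimal sketch of building the vectorstore against Zilliz Cloud, assuming documents and an embedding model are prepared first (the connection values are illustrative):
```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.schema import Document
from langchain.vectorstores import Milvus

docs = [Document(page_content="Zilliz makes Milvus easy to run in the cloud.")]

# Hypothetical Zilliz Cloud connection values
vector_db = Milvus.from_documents(
    docs,
    OpenAIEmbeddings(),
    connection_args={
        "uri": "<zilliz_cloud_uri>",
        "user": "<user>",
        "password": "<password>",
        "secure": True,
    },
)
```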


@@ -5,108 +5,101 @@ Agents
`Conceptual Guide <https://docs.langchain.com/docs/components/agents>`_
Some applications will require not just a predetermined chain of calls to LLMs/other tools,
Some applications require not just a predetermined chain of calls to LLMs/other tools,
but potentially an unknown chain that depends on the user's input.
In these types of chains, there is aagent which has access to a suite of tools.
In these types of chains, there is an **agent** which has access to a suite of **tools**.
Depending on the user input, the agent can then decide which, if any, of these tools to call.
At the moment, there are two main types of agents:
1. "Action Agents": these agents decide an action to take and take that action one step at a time
2. "Plan-and-Execute Agents": these agents first decide a plan of actions to take, and then execute those actions one at a time.
1. **Action Agents**: these agents decide the actions to take and execute those actions one at a time.
2. **Plan-and-Execute Agents**: these agents first decide a plan of actions to take, and then execute those actions one at a time.
When should you use each one? Action Agents are more conventional, and good for small tasks.
For more complex or long running tasks, the initial planning step helps to maintain long term objectives and focus. However, that comes at the expense of generally more calls and higher latency.
These two agents are also not mutually exclusive - in fact, it is often best to have an Action Agent be in charge of the execution for the Plan and Execute agent.
For more complex or long running tasks, the initial planning step helps to maintain long term objectives and focus.
However, that comes at the expense of generally more calls and higher latency.
These two agents are also not mutually exclusive - in fact, it is often best to have an Action Agent be in charge
of the execution for the Plan and Execute agent.
Action Agents
-------------
High level pseudocode of agents looks something like:
High level pseudocode of the Action Agents:
- Some user input is received
- The `agent` decides which `tool` - if any - to use, and what the input to that tool should be
- That `tool` is then called with that `tool input`, and an `observation` is recorded (this is just the output of calling that tool with that tool input)
- That history of `tool`, `tool input`, and `observation` is passed back into the `agent`, and it decides what step to take next
- This is repeated until the `agent` decides it no longer needs to use a `tool`, and then it responds directly to the user.
- The **user input** is received
- The **agent** decides which **tool** - if any - to use, and what the **tool input** should be
- That **tool** is then called with the **tool input**, and an **observation** is recorded (the output of that call)
- That history of **tool**, **tool input**, and **observation** is passed back into the **agent**, and it decides the next step
- This is repeated until the **agent** decides it no longer needs to use a **tool**, and then it responds directly to the user.
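In rough Python pseudocode (using hypothetical ``agent``, ``tools``, and ``AgentFinish`` objects to illustrate the loop, not the exact LangChain API), this looks like:

.. code-block:: python

    steps = []  # history of (action, observation) pairs
    while True:
        decision = agent.plan(user_input, steps)  # returns an AgentAction or AgentFinish
        if isinstance(decision, AgentFinish):
            break  # the agent responds directly to the user
        observation = tools[decision.tool].run(decision.tool_input)  # call the chosen tool
        steps.append((decision, observation))  # feed the history back on the next iteration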
The different abstractions involved in agents are as follows:
- Agent: this is where the logic of the application lives. Agents expose an interface that takes in user input along with a list of previous steps the agent has taken, and returns either an `AgentAction` or `AgentFinish`
- `AgentAction` corresponds to the tool to use and the input to that tool
- `AgentFinish` means the agent is done, and has information around what to return to the user
- Tools: these are the actions an agent can take. What tools you give an agent highly depend on what you want the agent to do
- Toolkits: these are groups of tools designed for a specific use case. For example, in order for an agent to interact with a SQL database in the best way it may need access to one tool to execute queries and another tool to inspect tables.
- Agent Executor: this wraps an agent and a list of tools. This is responsible for the loop of running the agent iteratively until the stopping criteria is met.
The different abstractions involved in agents are:
The most important abstraction of the four above to understand is that of the agent.
Although an agent can be defined in whatever way one chooses, the typical way to construct an agent is with:
- **Agent**: this is where the logic of the application lives. Agents expose an interface that takes in user input
along with a list of previous steps the agent has taken, and returns either an **AgentAction** or **AgentFinish**
- PromptTemplate: this is responsible for taking the user input and previous steps and constructing a prompt to send to the language model
- Language Model: this takes the prompt constructed by the PromptTemplate and returns some output
- Output Parser: this takes the output of the Language Model and parses it into an `AgentAction` or `AgentFinish` object.
- **AgentAction** corresponds to the tool to use and the input to that tool
- **AgentFinish** means the agent is done, and has information around what to return to the user
- **Tools**: these are the actions an agent can take. What tools you give an agent highly depend on what you want the agent to do
- **Toolkits**: these are groups of tools designed for a specific use case. For example, in order for an agent to
interact with a SQL database in the best way it may need access to one tool to execute queries and another tool to inspect tables.
- **Agent Executor**: this wraps an agent and a list of tools. This is responsible for the loop of running the agent
iteratively until the stopping criteria is met.
|
- `Getting Started <./agents/getting_started.html>`_: An overview of agents. It covers how to use all things related to agents in an end-to-end manner.
|
**Agent Construction:**
Although an agent can be constructed in many ways, the typical way to construct an agent is with:
- **PromptTemplate**: this is responsible for taking the user input and previous steps and constructing a prompt
to send to the language model
- **Language Model**: this takes the prompt constructed by the PromptTemplate and returns some output
- **Output Parser**: this takes the output of the Language Model and parses it into an **AgentAction** or **AgentFinish** object.
|
**Additional Documentation:**
- `Tools <./agents/tools.html>`_: Different types of **tools** LangChain supports natively. We also cover how to add your own tools.
- `Agents <./agents/agents.html>`_: Different types of **agents** LangChain supports natively. We also cover how to
modify and create your own agents.
- `Toolkits <./agents/toolkits.html>`_: Various **toolkits** that LangChain supports out of the box, and how to
create an agent from them.
- `Agent Executor <./agents/agent_executors.html>`_: The **Agent Executor** class, which is responsible for calling
the agent and tools in a loop. We go over different ways to customize this, and options you can use for more control.
Plan-and-Execute Agents
-----------------------
High level pseudocode of the **Plan-and-Execute Agents**:
- The **user input** is received
- The **planner** lists out the steps to take
- The **executor** goes through the list of steps, executing them
The most typical implementation is to have the planner be a language model, and the executor be an action agent.
|
- `Plan-and-Execute Agents <./agents/plan_and_execute.html>`_
In this section of documentation, we first start with a Getting Started notebook to cover how to use all things related to agents in an end-to-end manner.
.. toctree::
:maxdepth: 1
:hidden:
./agents/getting_started.ipynb
We then split the documentation into the following sections:
**Tools**
In this section we cover the different types of tools LangChain supports natively.
We then cover how to add your own tools.
**Agents**
In this section we cover the different types of agents LangChain supports natively.
We then cover how to modify and create your own agents.
**Toolkits**
In this section we go over the various toolkits that LangChain supports out of the box,
and how to create an agent from them.
**Agent Executor**
In this section we go over the Agent Executor class, which is responsible for calling
the agent and tools in a loop. We go over different ways to customize this, and options you
can use for more control.
**Go Deeper**
.. toctree::
:maxdepth: 1
./agents/tools.rst
./agents/agents.rst
./agents/toolkits.rst
./agents/agent_executors.rst
Plan-and-Execute Agents
-----------------------
High level pseudocode of agents looks something like:
- Some user input is received
- The planner lists out the steps to take
- The executor goes through the list of steps, executing them
The most typical implementation is to have the planner be a language model,
and the executor be an action agent.
**Go Deeper**
.. toctree::
:maxdepth: 1
./agents/plan_and_execute.ipynb


@@ -1,6 +1,7 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "23234b50-e6c6-4c87-9f97-259c15f36894",
"metadata": {
@@ -11,6 +12,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "29dd6333-307c-43df-b848-65001c01733b",
"metadata": {},
@@ -36,6 +38,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "19a813f7",
"metadata": {},
@@ -84,6 +87,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "53a743b8",
"metadata": {},
@@ -92,11 +96,12 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "23602c62",
"metadata": {},
"source": [
"By default, we assume that the token sequence ``\"\\nFinal\", \" Answer\", \":\"`` indicates that the agent has reached an answers. We can, however, also pass a custom sequence to use as answer prefix."
"By default, we assume that the token sequence ``\"Final\", \"Answer\", \":\"`` indicates that the agent has reached an answers. We can, however, also pass a custom sequence to use as answer prefix."
]
},
{
@@ -108,26 +113,75 @@
"source": [
"llm = OpenAI(\n",
" streaming=True,\n",
" callbacks=[FinalStreamingStdOutCallbackHandler(answer_prefix_tokens=[\"\\nThe\", \" answer\", \":\"])],\n",
" callbacks=[FinalStreamingStdOutCallbackHandler(answer_prefix_tokens=[\"The\", \"answer\", \":\"])],\n",
" temperature=0\n",
")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "b1a96cc0",
"metadata": {},
"source": [
"Be aware you likely need to include whitespaces and new line characters in your token. "
"For convenience, the callback automatically strips whitespaces and new line characters when comparing to `answer_prefix_tokens`. I.e., if `answer_prefix_tokens = [\"The\", \" answer\", \":\"]` then both `[\"\\nThe\", \" answer\", \":\"]` and `[\"The\", \" answer\", \":\"]` would be recognized a the answer prefix."
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "9278b522",
"metadata": {},
"source": [
"If you don't know the tokenized version of your answer prefix, you can determine it with the following code:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9278b522",
"id": "2f8f0640",
"metadata": {},
"outputs": [],
"source": []
"source": [
"from langchain.callbacks.base import BaseCallbackHandler\n",
"\n",
"class MyCallbackHandler(BaseCallbackHandler):\n",
" def on_llm_new_token(self, token, **kwargs) -> None:\n",
" # print every token on a new line\n",
" print(f\"#{token}#\")\n",
"\n",
"llm = OpenAI(streaming=True, callbacks=[MyCallbackHandler()])\n",
"tools = load_tools([\"wikipedia\", \"llm-math\"], llm=llm)\n",
"agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=False)\n",
"agent.run(\"It's 2023 now. How many years ago did Konrad Adenauer become Chancellor of Germany.\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "61190e58",
"metadata": {},
"source": [
"### Also streaming the answer prefixes"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "1255776f",
"metadata": {},
"source": [
"When the parameter `stream_prefix = True` is set, the answer prefix itself will also be streamed. This can be useful when the answer prefix itself is part of the answer. For example, when your answer is a JSON like\n",
"\n",
"`\n",
"{\n",
" \"action\": \"Final answer\",\n",
" \"action_input\": \"Konrad Adenauer became Chancellor 74 years ago.\"\n",
"}\n",
"`\n",
"\n",
"and you don't only want the action_input to be streamed, but the entire JSON."
]
}
],
"metadata": {


@@ -0,0 +1,86 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "64f20f38",
"metadata": {},
"source": [
"# PubMed Tool\n",
"\n",
"This notebook goes over how to use PubMed as a tool\n",
"\n",
"PubMed® comprises more than 35 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full text content from PubMed Central and publisher web sites."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "c80b9273",
"metadata": {},
"outputs": [],
"source": [
"from langchain.tools import PubmedQueryRun"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "f203c965",
"metadata": {},
"outputs": [],
"source": [
"tool = PubmedQueryRun()"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "baee7a2a",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Published: <Year>2023</Year><Month>May</Month><Day>31</Day>\\nTitle: Dermatology in the wake of an AI revolution: who gets a say?\\nSummary: \\n\\nPublished: <Year>2023</Year><Month>May</Month><Day>30</Day>\\nTitle: What is ChatGPT and what do we do with it? Implications of the age of AI for nursing and midwifery practice and education: An editorial.\\nSummary: \\n\\nPublished: <Year>2023</Year><Month>Jun</Month><Day>02</Day>\\nTitle: The Impact of ChatGPT on the Nursing Profession: Revolutionizing Patient Care and Education.\\nSummary: The nursing field has undergone notable changes over time and is projected to undergo further modifications in the future, owing to the advent of sophisticated technologies and growing healthcare needs. The advent of ChatGPT, an AI-powered language model, is expected to exert a significant influence on the nursing profession, specifically in the domains of patient care and instruction. The present article delves into the ramifications of ChatGPT within the nursing domain and accentuates its capacity and constraints to transform the discipline.'"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tool.run(\"chatgpt\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "965903ba",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -160,3 +160,9 @@ Below is a list of all supported tools and relevant information:
- Notes: A connection to the OpenWeatherMap API (https://api.openweathermap.org), specifically the `/data/2.5/weather` endpoint.
- Requires LLM: No
- Extra Parameters: `openweathermap_api_key` (your API key to access this endpoint)
**sleep**
- Tool Name: Sleep
- Tool Description: Make agent sleep for some time.
- Requires LLM: No
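For example, assuming the tool is registered under the name `"sleep"` (an assumption based on the entry above; check `load_tools` for the exact registry name), it can be loaded like any other tool:
```python
from langchain.agents import load_tools

# Hypothetical usage: load the Sleep tool by its assumed registry name
tools = load_tools(["sleep"])
```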


@@ -0,0 +1,175 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "63b87b91",
"metadata": {},
"source": [
"# Logging to file\n",
"This example shows how to print logs to file. It shows how to use the `FileCallbackHandler`, which does the same thing as [`StdOutCallbackHandler`](https://python.langchain.com/en/latest/modules/callbacks/getting_started.html#using-an-existing-handler), but instead writes the output to file. It also uses the `loguru` library to log other outputs that are not captured by the handler."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "6cb156cc",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new LLMChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3m1 + 2 = \u001b[0m\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\u001b[32m2023-06-01 18:36:38.929\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36m__main__\u001b[0m:\u001b[36m<module>\u001b[0m:\u001b[36m20\u001b[0m - \u001b[1m\n",
"\n",
"3\u001b[0m\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
}
],
"source": [
"from loguru import logger\n",
"\n",
"from langchain.callbacks import FileCallbackHandler\n",
"from langchain.chains import LLMChain\n",
"from langchain.llms import OpenAI\n",
"from langchain.prompts import PromptTemplate\n",
"\n",
"logfile = 'output.log'\n",
"\n",
"logger.add(logfile, colorize=True, enqueue=True)\n",
"handler = FileCallbackHandler(logfile)\n",
"\n",
"llm = OpenAI()\n",
"prompt = PromptTemplate.from_template(\"1 + {number} = \")\n",
"\n",
"# this chain will both print to stdout (because verbose=True) and write to 'output.log'\n",
"# if verbose=False, the FileCallbackHandler will still write to 'output.log'\n",
"chain = LLMChain(llm=llm, prompt=prompt, callbacks=[handler], verbose=True)\n",
"answer = chain.run(number=2)\n",
"logger.info(answer)"
]
},
{
"cell_type": "markdown",
"id": "9c50d54f",
"metadata": {},
"source": [
"Now we can open the file `output.log` to see that the output has been captured."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "aa32dc0a",
"metadata": {},
"outputs": [],
"source": [
"!pip install ansi2html > /dev/null"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "4af00719",
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.01 Transitional//EN\" \"http://www.w3.org/TR/html4/loose.dtd\">\n",
"<html>\n",
"<head>\n",
"<meta http-equiv=\"Content-Type\" content=\"text/html; charset=utf-8\">\n",
"<title></title>\n",
"<style type=\"text/css\">\n",
".ansi2html-content { display: inline; white-space: pre-wrap; word-wrap: break-word; }\n",
".body_foreground { color: #AAAAAA; }\n",
".body_background { background-color: #000000; }\n",
".inv_foreground { color: #000000; }\n",
".inv_background { background-color: #AAAAAA; }\n",
".ansi1 { font-weight: bold; }\n",
".ansi3 { font-style: italic; }\n",
".ansi32 { color: #00aa00; }\n",
".ansi36 { color: #00aaaa; }\n",
"</style>\n",
"</head>\n",
"<body class=\"body_foreground body_background\" style=\"font-size: normal;\" >\n",
"<pre class=\"ansi2html-content\">\n",
"\n",
"\n",
"<span class=\"ansi1\">&gt; Entering new LLMChain chain...</span>\n",
"Prompt after formatting:\n",
"<span class=\"ansi1 ansi32\"></span><span class=\"ansi1 ansi3 ansi32\">1 + 2 = </span>\n",
"\n",
"<span class=\"ansi1\">&gt; Finished chain.</span>\n",
"<span class=\"ansi32\">2023-06-01 18:36:38.929</span> | <span class=\"ansi1\">INFO </span> | <span class=\"ansi36\">__main__</span>:<span class=\"ansi36\">&lt;module&gt;</span>:<span class=\"ansi36\">20</span> - <span class=\"ansi1\">\n",
"\n",
"3</span>\n",
"\n",
"</pre>\n",
"</body>\n",
"\n",
"</html>\n"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"from IPython.display import display, HTML\n",
"from ansi2html import Ansi2HTMLConverter\n",
"\n",
"with open('output.log', 'r') as f:\n",
" content = f.read()\n",
"\n",
"conv = Ansi2HTMLConverter()\n",
"html = conv.convert(content, full=True)\n",
"\n",
"display(HTML(html))"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -6,14 +6,13 @@ Chains
Using an LLM in isolation is fine for some simple applications,
but many more complex ones require chaining LLMs - either with each other or with other experts.
LangChain provides a standard interface for Chains, as well as some common implementations of chains for ease of use.
but more complex applications require chaining LLMs - either with each other or with other experts.
LangChain provides a standard interface for **Chains**, as well as several common implementations of chains.
The following sections of documentation are provided:
|
- `Getting Started <./chains/getting_started.html>`_: An overview of chains.
- `Getting Started <./chains/getting_started.html>`_: A getting started guide for chains, to get you up and running quickly.
- `How-To Guides <./chains/how_to_guides.html>`_: A collection of how-to guides. These highlight how to use various types of chains.
- `How-To Guides <./chains/how_to_guides.html>`_: How-to guides about various types of chains.
- `Reference <../reference/modules/chains.html>`_: API reference documentation for all Chain classes.


@@ -0,0 +1,270 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "c94240f5",
"metadata": {},
"source": [
"# NebulaGraphQAChain\n",
"\n",
"This notebook shows how to use LLMs to provide a natural language interface to NebulaGraph database."
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "dbc0ee68",
"metadata": {},
"source": [
"You will need to have a running NebulaGraph cluster, for which you can run a containerized cluster by running the following script:\n",
"\n",
"```bash\n",
"curl -fsSL nebula-up.siwei.io/install.sh | bash\n",
"```\n",
"\n",
"Other options are:\n",
"- Install as a [Docker Desktop Extension](https://www.docker.com/blog/distributed-cloud-native-graph-database-nebulagraph-docker-extension/). See [here](https://docs.nebula-graph.io/3.5.0/2.quick-start/1.quick-start-workflow/)\n",
"- NebulaGraph Cloud Service. See [here](https://www.nebula-graph.io/cloud)\n",
"- Deploy from package, source code, or via Kubernetes. See [here](https://docs.nebula-graph.io/)\n",
"\n",
"Once the cluster is running, we could create the SPACE and SCHEMA for the database."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c82f4141",
"metadata": {},
"outputs": [],
"source": [
"%pip install ipython-ngql\n",
"%load_ext ngql\n",
"\n",
"# connect ngql jupyter extension to nebulagraph\n",
"%ngql --address 127.0.0.1 --port 9669 --user root --password nebula\n",
"# create a new space\n",
"%ngql CREATE SPACE IF NOT EXISTS langchain(partition_num=1, replica_factor=1, vid_type=fixed_string(128));\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "eda0809a",
"metadata": {},
"outputs": [],
"source": [
"# Wait for a few seconds for the space to be created.\n",
"%ngql USE langchain;"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "119fe35c",
"metadata": {},
"source": [
"Create the schema, for full dataset, refer [here](https://www.siwei.io/en/nebulagraph-etl-dbt/)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5aa796ee",
"metadata": {},
"outputs": [],
"source": [
"%%ngql\n",
"CREATE TAG IF NOT EXISTS movie(name string);\n",
"CREATE TAG IF NOT EXISTS person(name string, birthdate string);\n",
"CREATE EDGE IF NOT EXISTS acted_in();\n",
"CREATE TAG INDEX IF NOT EXISTS person_index ON person(name(128));\n",
"CREATE TAG INDEX IF NOT EXISTS movie_index ON movie(name(128));"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "66e4799a",
"metadata": {},
"source": [
"Wait for schema creation to complete, then we can insert some data."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "d8eea530",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"UsageError: Cell magic `%%ngql` not found.\n"
]
}
],
"source": [
"%%ngql\n",
"INSERT VERTEX person(name, birthdate) VALUES \"Al Pacino\":(\"Al Pacino\", \"1940-04-25\");\n",
"INSERT VERTEX movie(name) VALUES \"The Godfather II\":(\"The Godfather II\");\n",
"INSERT VERTEX movie(name) VALUES \"The Godfather Coda: The Death of Michael Corleone\":(\"The Godfather Coda: The Death of Michael Corleone\");\n",
"INSERT EDGE acted_in() VALUES \"Al Pacino\"->\"The Godfather II\":();\n",
"INSERT EDGE acted_in() VALUES \"Al Pacino\"->\"The Godfather Coda: The Death of Michael Corleone\":();"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "62812aad",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.chains import NebulaGraphQAChain\n",
"from langchain.graphs import NebulaGraph"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "0928915d",
"metadata": {},
"outputs": [],
"source": [
"graph = NebulaGraph(\n",
" space=\"langchain\",\n",
" username=\"root\",\n",
" password=\"nebula\",\n",
" address=\"127.0.0.1\",\n",
" port=9669,\n",
" session_pool_size=30,\n",
")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "58c1a8ea",
"metadata": {},
"source": [
"## Refresh graph schema information\n",
"\n",
"If the schema of database changes, you can refresh the schema information needed to generate nGQL statements."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4e3de44f",
"metadata": {},
"outputs": [],
"source": [
"# graph.refresh_schema()"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "1fe76ccd",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Node properties: [{'tag': 'movie', 'properties': [('name', 'string')]}, {'tag': 'person', 'properties': [('name', 'string'), ('birthdate', 'string')]}]\n",
"Edge properties: [{'edge': 'acted_in', 'properties': []}]\n",
"Relationships: ['(:person)-[:acted_in]->(:movie)']\n",
"\n"
]
}
],
"source": [
"print(graph.get_schema)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "68a3c677",
"metadata": {},
"source": [
"## Querying the graph\n",
"\n",
"We can now use the graph cypher QA chain to ask question of the graph"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "7476ce98",
"metadata": {},
"outputs": [],
"source": [
"chain = NebulaGraphQAChain.from_llm(\n",
" ChatOpenAI(temperature=0), graph=graph, verbose=True\n",
")\n"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "ef8ee27b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new NebulaGraphQAChain chain...\u001b[0m\n",
"Generated nGQL:\n",
"\u001b[32;1m\u001b[1;3mMATCH (p:`person`)-[:acted_in]->(m:`movie`) WHERE m.`movie`.`name` == 'The Godfather II'\n",
"RETURN p.`person`.`name`\u001b[0m\n",
"Full Context:\n",
"\u001b[32;1m\u001b[1;3m{'p.person.name': ['Al Pacino']}\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'Al Pacino played in The Godfather II.'"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.run(\"Who played in The Godfather II?\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -21,7 +21,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 2,
"id": "e9db25f3",
"metadata": {},
"outputs": [],
@@ -318,6 +318,141 @@
"chain({\"input_documents\": docs}, return_only_outputs=True)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "b882e209",
"metadata": {},
"source": [
"## The custom `MapReduceChain`\n",
"\n",
"**Multi input prompt**\n",
"\n",
"You can also use prompt with multi input. In this example, we will use a MapReduce chain to answer specifc question about our code."
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "f7ad9ee2",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains.combine_documents.map_reduce import MapReduceDocumentsChain\n",
"from langchain.chains.combine_documents.stuff import StuffDocumentsChain\n",
"\n",
"map_template_string = \"\"\"Give the following python code information, generate a description that explains what the code does and also mention the time complexity.\n",
"Code:\n",
"{code}\n",
"\n",
"Return the the description in the following format:\n",
"name of the function: description of the function\n",
"\"\"\"\n",
"\n",
"\n",
"reduce_template_string = \"\"\"Give the following following python fuctions name and their descritpion, answer the following question\n",
"{code_description}\n",
"Question: {question}\n",
"Answer:\n",
"\"\"\"\n",
"\n",
"MAP_PROMPT = PromptTemplate(input_variables=[\"code\"], template=map_template_string)\n",
"REDUCE_PROMPT = PromptTemplate(input_variables=[\"code_description\", \"question\"], template=reduce_template_string)\n",
"\n",
"llm = OpenAI()\n",
"\n",
"map_llm_chain = LLMChain(llm=llm, prompt=MAP_PROMPT)\n",
"reduce_llm_chain = LLMChain(llm=llm, prompt=REDUCE_PROMPT)\n",
"\n",
"generative_result_reduce_chain = StuffDocumentsChain(\n",
" llm_chain=reduce_llm_chain,\n",
" document_variable_name=\"code_description\",\n",
")\n",
"\n",
"combine_documents = MapReduceDocumentsChain(\n",
" llm_chain=map_llm_chain,\n",
" combine_document_chain=generative_result_reduce_chain,\n",
" document_variable_name=\"code\",\n",
")\n",
"\n",
"map_reduce = MapReduceChain(\n",
" combine_documents_chain=combine_documents,\n",
" text_splitter=CharacterTextSplitter(separator=\"\\n##\\n\", chunk_size=100, chunk_overlap=0),\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "0d4caccb",
"metadata": {},
"outputs": [],
"source": [
"code = \"\"\"\n",
"def bubblesort(list):\n",
" for iter_num in range(len(list)-1,0,-1):\n",
" for idx in range(iter_num):\n",
" if list[idx]>list[idx+1]:\n",
" temp = list[idx]\n",
" list[idx] = list[idx+1]\n",
" list[idx+1] = temp\n",
" return list\n",
"##\n",
"def insertion_sort(InputList):\n",
" for i in range(1, len(InputList)):\n",
" j = i-1\n",
" nxt_element = InputList[i]\n",
" while (InputList[j] > nxt_element) and (j >= 0):\n",
" InputList[j+1] = InputList[j]\n",
" j=j-1\n",
" InputList[j+1] = nxt_element\n",
" return InputList\n",
"##\n",
"def shellSort(input_list):\n",
" gap = len(input_list) // 2\n",
" while gap > 0:\n",
" for i in range(gap, len(input_list)):\n",
" temp = input_list[i]\n",
" j = i\n",
" while j >= gap and input_list[j - gap] > temp:\n",
" input_list[j] = input_list[j - gap]\n",
" j = j-gap\n",
" input_list[j] = temp\n",
" gap = gap//2\n",
" return input_list\n",
"\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "d5a9a35b",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Created a chunk of size 247, which is longer than the specified 100\n",
"Created a chunk of size 267, which is longer than the specified 100\n"
]
},
{
"data": {
"text/plain": [
"'shellSort has a better time complexity than both bubblesort and insertion_sort, as it has a time complexity of O(n^2), while the other two have a time complexity of O(n^2).'"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"map_reduce.run(input_text=code, question=\"Which function has a better time complexity?\")"
]
},
{
"cell_type": "markdown",
"id": "f61350f9",
@@ -470,7 +605,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.8.16"
},
"vscode": {
"interpreter": {


@@ -5,53 +5,41 @@ Indexes
`Conceptual Guide <https://docs.langchain.com/docs/components/indexing>`_
Indexes refer to ways to structure documents so that LLMs can best interact with them.
This module contains utility functions for working with documents, different types of indexes, and then examples for using those indexes in chains.
**Indexes** refer to ways to structure documents so that LLMs can best interact with them.
The most common way that indexes are used in chains is in a "retrieval" step.
This step refers to taking a user's query and returning the most relevant documents.
We draw this distinction because (1) an index can be used for other things besides retrieval, and (2) retrieval can use other logic besides an index to find relevant documents.
We therefore have a concept of a "Retriever" interface - this is the interface that most chains work with.
We draw this distinction because (1) an index can be used for other things besides retrieval, and
(2) retrieval can use other logic besides an index to find relevant documents.
We therefore have a concept of a **Retriever** interface - this is the interface that most chains work with.
Most of the time when we talk about indexes and retrieval we are talking about indexing and retrieving
unstructured data (like text documents).
For interacting with structured data (SQL tables, etc) or APIs, please see the corresponding use case
sections for links to relevant functionality.
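A minimal sketch of the **Retriever** interface mentioned above (assuming the ``BaseRetriever`` base class; see the reference documentation for the exact signatures):

.. code-block:: python

    from typing import List

    from langchain.schema import BaseRetriever, Document

    class MyRetriever(BaseRetriever):
        def get_relevant_documents(self, query: str) -> List[Document]:
            # return the documents most relevant to the query
            ...

        async def aget_relevant_documents(self, query: str) -> List[Document]:
            # async variant of the same lookup
            ...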
|
- `Getting Started <./indexes/getting_started.html>`_: An overview of the indexes.
Index Types
---------------------
- `Document Loaders <./indexes/document_loaders.html>`_: How to load documents from a variety of sources.
- `Text Splitters <./indexes/text_splitters.html>`_: An overview and different types of the **Text Splitters**.
- `VectorStores <./indexes/vectorstores.html>`_: An overview and different types of the **Vector Stores**.
- `Retrievers <./indexes/retrievers.html>`_: An overview and different types of the **Retrievers**.
Most of the time when we talk about indexes and retrieval we are talking about indexing and retrieving unstructured data (like text documents).
For interacting with structured data (SQL tables, etc) or APIs, please see the corresponding use case sections for links to relevant functionality.
The primary index and retrieval types supported by LangChain are currently centered around vector databases, and therefore
much of the functionality dives deep into those topics.
For an overview of everything related to this, please see the below notebook for getting started:
.. toctree::
:maxdepth: 1
:hidden:
./indexes/getting_started.ipynb
We then provide a deep dive on the four main components.
**Document Loaders**
How to load documents from a variety of sources.
**Text Splitters**
An overview of the abstractions and implementations around splitting text.
**VectorStores**
An overview of VectorStores and the many integrations LangChain provides.
**Retrievers**
An overview of Retrievers and the implementations LangChain provides.
Go Deeper
---------
.. toctree::
:maxdepth: 1
./indexes/document_loaders.rst
./indexes/text_splitters.rst
./indexes/vectorstores.rst


@@ -30,12 +30,14 @@ For detailed instructions on how to get set up with Unstructured, see installati
:maxdepth: 1
:glob:
./document_loaders/examples/audio.ipynb
./document_loaders/examples/conll-u.ipynb
./document_loaders/examples/copypaste.ipynb
./document_loaders/examples/csv.ipynb
./document_loaders/examples/email.ipynb
./document_loaders/examples/epub.ipynb
./document_loaders/examples/evernote.ipynb
./document_loaders/examples/excel.ipynb
./document_loaders/examples/facebook_chat.ipynb
./document_loaders/examples/file_directory.ipynb
./document_loaders/examples/html.ipynb

File diff suppressed because one or more lines are too long


@@ -1,15 +1,18 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"source": [
"# Confluence\n",
"\n",
">[Confluence](https://www.atlassian.com/software/confluence) is a wiki collaboration platform that saves and organizes all of the project-related material. `Confluence` is a knowledge base that primarily handles content management activities. \n",
"\n",
"A loader for `Confluence` pages currently supports both `username/api_key` and `Oauth2 login`.\n",
"See [instructions](https://support.atlassian.com/atlassian-account/docs/manage-api-tokens-for-your-atlassian-account/).\n",
"A loader for `Confluence` pages.\n",
"\n",
"\n",
"This currently supports `username/api_key`, `Oauth2 login`. Additionally, on-prem installations also support `token` authentication. \n",
"\n",
"\n",
"Specify a list `page_id`-s and/or `space_key` to load in the corresponding pages into Document objects, if both are specified the union of both sets will be returned.\n",
@@ -20,9 +23,17 @@
"Hint: `space_key` and `page_id` can both be found in the URL of a page in Confluence - https://yoursite.atlassian.com/wiki/spaces/<space_key>/pages/<page_id>\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Before using ConfluenceLoader make sure you have the latest version of the atlassian-python-api package installed:"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"metadata": {
"tags": []
},
@@ -31,6 +42,29 @@
"#!pip install atlassian-python-api"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Examples"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Username and Password or Username and API Token (Atlassian Cloud only)\n",
"\n",
"This example authenticates using either a username and password or, if you're connecting to an Atlassian Cloud hosted version of Confluence, a username and an API Token.\n",
"You can generate an API token at: https://id.atlassian.com/manage-profile/security/api-tokens.\n",
"\n",
"The `limit` parameter specifies how many documents will be retrieved in a single call, not how many documents will be retrieved in total.\n",
"By default the code will return up to 1000 documents in 50 documents batches. To control the total number of documents use the `max_pages` parameter. \n",
"Plese note the maximum value for the `limit` parameter in the atlassian-python-api package is currently 100. "
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -46,6 +80,34 @@
")\n",
"documents = loader.load(space_key=\"SPACE\", include_attachments=True, limit=50)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Personal Access Token (Server/On-Prem only)\n",
"\n",
"This method is valid for the Data Center/Server on-prem edition only.\n",
"For more information on how to generate a Personal Access Token (PAT) check the official Confluence documentation at: https://confluence.atlassian.com/enterprise/using-personal-access-tokens-1026032365.html.\n",
"When using a PAT you provide only the token value, you cannot provide a username. \n",
"Please note that ConfluenceLoader will run under the permissions of the user that generated the PAT and will only be able to load documents for which said user has access to. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders import ConfluenceLoader\n",
"\n",
"loader = ConfluenceLoader(\n",
" url=\"https://yoursite.atlassian.com/wiki\",\n",
" token=\"12345\"\n",
")\n",
"documents = loader.load(space_key=\"SPACE\", include_attachments=True, limit=50, max_pages=50)"
]
}
],
"metadata": {
@@ -64,7 +126,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
"version": "3.9.13"
},
"vscode": {
"interpreter": {


@@ -29,7 +29,6 @@
"cell_type": "code",
"execution_count": 26,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
@@ -45,7 +44,6 @@
"cell_type": "code",
"execution_count": 27,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
@@ -76,7 +74,6 @@
"cell_type": "code",
"execution_count": 28,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
@@ -96,7 +93,6 @@
"cell_type": "code",
"execution_count": 29,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
@@ -152,6 +148,211 @@
"source": [
"print(data)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## `UnstructuredCSVLoader`\n",
"\n",
"You can also load the table using the `UnstructuredCSVLoader`. One advantage of using `UnstructuredCSVLoader` is that if you use it in `\"elements\"` mode, an HTML representation of the table will be available in the metadata."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders.csv_loader import UnstructuredCSVLoader"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"loader = UnstructuredCSVLoader(file_path='example_data/mlb_teams_2012.csv', mode=\"elements\")\n",
"docs = loader.load()"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"<table border=\"1\" class=\"dataframe\">\n",
" <tbody>\n",
" <tr>\n",
" <td>Nationals</td>\n",
" <td>81.34</td>\n",
" <td>98</td>\n",
" </tr>\n",
" <tr>\n",
" <td>Reds</td>\n",
" <td>82.20</td>\n",
" <td>97</td>\n",
" </tr>\n",
" <tr>\n",
" <td>Yankees</td>\n",
" <td>197.96</td>\n",
" <td>95</td>\n",
" </tr>\n",
" <tr>\n",
" <td>Giants</td>\n",
" <td>117.62</td>\n",
" <td>94</td>\n",
" </tr>\n",
" <tr>\n",
" <td>Braves</td>\n",
" <td>83.31</td>\n",
" <td>94</td>\n",
" </tr>\n",
" <tr>\n",
" <td>Athletics</td>\n",
" <td>55.37</td>\n",
" <td>94</td>\n",
" </tr>\n",
" <tr>\n",
" <td>Rangers</td>\n",
" <td>120.51</td>\n",
" <td>93</td>\n",
" </tr>\n",
" <tr>\n",
" <td>Orioles</td>\n",
" <td>81.43</td>\n",
" <td>93</td>\n",
" </tr>\n",
" <tr>\n",
" <td>Rays</td>\n",
" <td>64.17</td>\n",
" <td>90</td>\n",
" </tr>\n",
" <tr>\n",
" <td>Angels</td>\n",
" <td>154.49</td>\n",
" <td>89</td>\n",
" </tr>\n",
" <tr>\n",
" <td>Tigers</td>\n",
" <td>132.30</td>\n",
" <td>88</td>\n",
" </tr>\n",
" <tr>\n",
" <td>Cardinals</td>\n",
" <td>110.30</td>\n",
" <td>88</td>\n",
" </tr>\n",
" <tr>\n",
" <td>Dodgers</td>\n",
" <td>95.14</td>\n",
" <td>86</td>\n",
" </tr>\n",
" <tr>\n",
" <td>White Sox</td>\n",
" <td>96.92</td>\n",
" <td>85</td>\n",
" </tr>\n",
" <tr>\n",
" <td>Brewers</td>\n",
" <td>97.65</td>\n",
" <td>83</td>\n",
" </tr>\n",
" <tr>\n",
" <td>Phillies</td>\n",
" <td>174.54</td>\n",
" <td>81</td>\n",
" </tr>\n",
" <tr>\n",
" <td>Diamondbacks</td>\n",
" <td>74.28</td>\n",
" <td>81</td>\n",
" </tr>\n",
" <tr>\n",
" <td>Pirates</td>\n",
" <td>63.43</td>\n",
" <td>79</td>\n",
" </tr>\n",
" <tr>\n",
" <td>Padres</td>\n",
" <td>55.24</td>\n",
" <td>76</td>\n",
" </tr>\n",
" <tr>\n",
" <td>Mariners</td>\n",
" <td>81.97</td>\n",
" <td>75</td>\n",
" </tr>\n",
" <tr>\n",
" <td>Mets</td>\n",
" <td>93.35</td>\n",
" <td>74</td>\n",
" </tr>\n",
" <tr>\n",
" <td>Blue Jays</td>\n",
" <td>75.48</td>\n",
" <td>73</td>\n",
" </tr>\n",
" <tr>\n",
" <td>Royals</td>\n",
" <td>60.91</td>\n",
" <td>72</td>\n",
" </tr>\n",
" <tr>\n",
" <td>Marlins</td>\n",
" <td>118.07</td>\n",
" <td>69</td>\n",
" </tr>\n",
" <tr>\n",
" <td>Red Sox</td>\n",
" <td>173.18</td>\n",
" <td>69</td>\n",
" </tr>\n",
" <tr>\n",
" <td>Indians</td>\n",
" <td>78.43</td>\n",
" <td>68</td>\n",
" </tr>\n",
" <tr>\n",
" <td>Twins</td>\n",
" <td>94.08</td>\n",
" <td>66</td>\n",
" </tr>\n",
" <tr>\n",
" <td>Rockies</td>\n",
" <td>78.06</td>\n",
" <td>64</td>\n",
" </tr>\n",
" <tr>\n",
" <td>Cubs</td>\n",
" <td>88.19</td>\n",
" <td>61</td>\n",
" </tr>\n",
" <tr>\n",
" <td>Astros</td>\n",
" <td>60.65</td>\n",
" <td>55</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n"
]
}
],
"source": [
"print(docs[0].metadata[\"text_as_html\"])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@@ -170,7 +371,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
"version": "3.8.13"
}
},
"nbformat": 4,


@@ -0,0 +1,79 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "22a849cc",
"metadata": {},
"source": [
"# Microsoft Excel\n",
"\n",
"The `UnstructuredExcelLoader` is used to load `Microsoft Excel` files. The loader works with both `.xlsx` and `.xls` files. The page content will be the raw text of the Excel file. If you use the loader in `\"elements\"` mode, an HTML representation of the Excel file will be available in the document metadata under the `text_as_html` key."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "e6616e3a",
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders import UnstructuredExcelLoader"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "a654e4d9",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Document(page_content='\\n \\n \\n Team\\n Location\\n Stanley Cups\\n \\n \\n Blues\\n STL\\n 1\\n \\n \\n Flyers\\n PHI\\n 2\\n \\n \\n Maple Leafs\\n TOR\\n 13\\n \\n \\n', metadata={'source': 'example_data/stanley-cups.xlsx', 'filename': 'stanley-cups.xlsx', 'file_directory': 'example_data', 'filetype': 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet', 'page_number': 1, 'page_name': 'Stanley Cups', 'text_as_html': '<table border=\"1\" class=\"dataframe\">\\n <tbody>\\n <tr>\\n <td>Team</td>\\n <td>Location</td>\\n <td>Stanley Cups</td>\\n </tr>\\n <tr>\\n <td>Blues</td>\\n <td>STL</td>\\n <td>1</td>\\n </tr>\\n <tr>\\n <td>Flyers</td>\\n <td>PHI</td>\\n <td>2</td>\\n </tr>\\n <tr>\\n <td>Maple Leafs</td>\\n <td>TOR</td>\\n <td>13</td>\\n </tr>\\n </tbody>\\n</table>', 'category': 'Table'})"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"loader = UnstructuredExcelLoader(\n",
" \"example_data/stanley-cups.xlsx\",\n",
" mode=\"elements\"\n",
")\n",
"docs = loader.load()\n",
"docs[0]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9ab94bde",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.13"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -0,0 +1,84 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Fauna\n",
"\n",
">[Fauna](https://fauna.com/) is a Document Database.\n",
"\n",
"Query `Fauna` documents"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#!pip install fauna"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Query data example"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders.fauna import FaunaLoader\n",
"\n",
"secret = \"<enter-valid-fauna-secret>\"\n",
"query = \"Item.all()\" # Fauna query. Assumes that the collection is called \"Item\"\n",
"field = \"text\" # The field that contains the page content. Assumes that the field is called \"text\"\n",
"\n",
"loader = FaunaLoader(query, field, secret)\n",
"docs = loader.lazy_load()\n",
"\n",
"for value in docs:\n",
" print(value)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Query with Pagination\n",
"You get a `after` value if there are more data. You can get values after the curcor by passing in the `after` string in query. \n",
"\n",
"To learn more following [this link](https://fqlx-beta--fauna-docs.netlify.app/fqlx/beta/reference/schema_entities/set/static-paginate)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"query = \"\"\"\n",
"Item.paginate(\"hs+DzoPOg ... aY1hOohozrV7A\")\n",
"Item.all()\n",
"\"\"\"\n",
"loader = FaunaLoader(query, field, secret)"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -22,6 +22,16 @@
"Load .docx using `Docx2txt` into a document."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "7b80ea891",
"metadata": {},
"outputs": [],
"source": [
"!pip install docx2txt "
]
},
{
"cell_type": "code",
"execution_count": 3,


@@ -146,6 +146,73 @@
"documents[0]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Add custom scraping rules\n",
"\n",
"The `SitemapLoader` uses `beautifulsoup4` for the scraping process, and it scrapes every element on the page by default. The `SitemapLoader` constructor accepts a custom scraping function. This feature can be helpful to tailor the scraping process to your specific needs; for example, you might want to avoid scraping headers or navigation elements.\n",
"\n",
" The following example shows how to develop and use a custom function to avoid navigation and header elements."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Import the `beautifulsoup4` library and define the custom function."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pip install beautifulsoup4"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from bs4 import BeautifulSoup\n",
"\n",
"def remove_nav_and_header_elements(content: BeautifulSoup) -> str:\n",
" # Find all 'nav' and 'header' elements in the BeautifulSoup object\n",
" nav_elements = content.find_all('nav')\n",
" header_elements = content.find_all('header')\n",
"\n",
" # Remove each 'nav' and 'header' element from the BeautifulSoup object\n",
" for element in nav_elements + header_elements:\n",
" element.decompose()\n",
"\n",
" return str(content.get_text())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Add your custom function to the `SitemapLoader` object."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"loader = SitemapLoader(\n",
" \"https://langchain.readthedocs.io/sitemap.xml\",\n",
" filter_urls=[\"https://python.langchain.com/en/latest/\"],\n",
" parsing_function=remove_nav_and_header_elements\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},


@@ -41,7 +41,7 @@
"source": [
"# Optionally set your Slack URL. This will give you proper URLs in the docs sources.\n",
"SLACK_WORKSPACE_URL = \"https://xxx.slack.com\"\n",
"LOCAL_ZIPFILE = \"\" # Paste the local paty to your Slack zip file here.\n",
"LOCAL_ZIPFILE = \"\" # Paste the local path to your Slack zip file here.\n",
"\n",
"loader = SlackDirectoryLoader(LOCAL_ZIPFILE, SLACK_WORKSPACE_URL)"
]


@@ -54,7 +54,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "3e64cac2",
"metadata": {},
@@ -117,7 +116,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.13"
"version": "3.10.6"
}
},
"nbformat": 4,


@@ -171,7 +171,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
"version": "3.10.6"
},
"vscode": {
"interpreter": {


@@ -4,7 +4,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### WhatsApp Chat\n",
"# WhatsApp Chat\n",
"\n",
">[WhatsApp](https://www.whatsapp.com/) (also called `WhatsApp Messenger`) is a freeware, cross-platform, centralized instant messaging (IM) and voice-over-IP (VoIP) service. It allows users to send text and voice messages, make voice and video calls, and share images, documents, user locations, and other content.\n",
"\n",


@@ -0,0 +1,296 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "e48afb8d",
"metadata": {},
"source": [
"# Loading documents from a YouTube url\n",
"\n",
"Building chat or QA applications on YouTube videos is a topic of high interest.\n",
"\n",
"Below we show how to easily go from a YouTube url to text to chat!\n",
"\n",
"We wil use the `OpenAIWhisperParser`, which will use the OpenAI Whisper API to transcribe audio to text.\n",
"\n",
"Note: You will need to have an `OPENAI_API_KEY` supplied."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "5f34e934",
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders.generic import GenericLoader\n",
"from langchain.document_loaders.parsers import OpenAIWhisperParser\n",
"from langchain.document_loaders.blob_loaders.youtube_audio import YoutubeAudioLoader"
]
},
{
"cell_type": "markdown",
"id": "85fc12bd",
"metadata": {},
"source": [
"We will use `yt_dlp` to download audio for YouTube urls.\n",
"\n",
"We will use `pydub` to split downloaded audio files (such that we adhere to Whisper API's 25MB file size limit)."
]
},
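As a rough illustration of why `pydub` helps here, a hedged sketch that splits one downloaded audio file into roughly 10-minute segments; the file name is hypothetical, and the exact chunking the parser performs internally may differ:

from pydub import AudioSegment

audio = AudioSegment.from_file("lecture.m4a")  # hypothetical local file

ten_minutes = 10 * 60 * 1000  # pydub indexes audio in milliseconds
for i, start in enumerate(range(0, len(audio), ten_minutes)):
    chunk = audio[start:start + ten_minutes]
    # Export each segment so it stays under the Whisper API's 25MB limit
    chunk.export(f"lecture_part_{i}.mp3", format="mp3")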
{
"cell_type": "code",
"execution_count": null,
"id": "fb5a6606",
"metadata": {},
"outputs": [],
"source": [
"! pip install yt_dlp\n",
"! pip install pydub"
]
},
{
"cell_type": "markdown",
"id": "b0e119f4",
"metadata": {},
"source": [
"### YouTube url to text\n",
"\n",
"Use `YoutubeAudioLoader` to fetch / download the audio files.\n",
"\n",
"Then, ues `OpenAIWhisperParser()` to transcribe them to text.\n",
"\n",
"Let's take the first lecture of Andrej Karpathy's YouTube course as an example! "
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "23e1e134",
"metadata": {
"scrolled": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[youtube] Extracting URL: https://youtu.be/kCc8FmEb1nY\n",
"[youtube] kCc8FmEb1nY: Downloading webpage\n",
"[youtube] kCc8FmEb1nY: Downloading android player API JSON\n",
"[info] kCc8FmEb1nY: Downloading 1 format(s): 140\n",
"[dashsegments] Total fragments: 11\n",
"[download] Destination: /Users/31treehaus/Desktop/AI/langchain-fork/docs/modules/indexes/document_loaders/examples/Let's build GPT from scratch, in code, spelled out..m4a\n",
"[download] 100% of 107.73MiB in 00:00:18 at 5.92MiB/s \n",
"[FixupM4a] Correcting container of \"/Users/31treehaus/Desktop/AI/langchain-fork/docs/modules/indexes/document_loaders/examples/Let's build GPT from scratch, in code, spelled out..m4a\"\n",
"[ExtractAudio] Not converting audio /Users/31treehaus/Desktop/AI/langchain-fork/docs/modules/indexes/document_loaders/examples/Let's build GPT from scratch, in code, spelled out..m4a; file is already in target format m4a\n",
"[youtube] Extracting URL: https://youtu.be/VMj-3S1tku0\n",
"[youtube] VMj-3S1tku0: Downloading webpage\n",
"[youtube] VMj-3S1tku0: Downloading android player API JSON\n",
"[info] VMj-3S1tku0: Downloading 1 format(s): 140\n",
"[download] /Users/31treehaus/Desktop/AI/langchain-fork/docs/modules/indexes/document_loaders/examples/The spelled-out intro to neural networks and backpropagation building micrograd.m4a has already been downloaded\n",
"[download] 100% of 134.98MiB\n",
"[ExtractAudio] Not converting audio /Users/31treehaus/Desktop/AI/langchain-fork/docs/modules/indexes/document_loaders/examples/The spelled-out intro to neural networks and backpropagation building micrograd.m4a; file is already in target format m4a\n"
]
}
],
"source": [
"# Two Karpathy lecture videos\n",
"urls = [\"https://youtu.be/kCc8FmEb1nY\",\n",
" \"https://youtu.be/VMj-3S1tku0\"]\n",
"\n",
"# Directory to save audio files \n",
"save_dir = \"~/Downloads/YouTube\"\n",
"\n",
"# Transcribe the videos to text\n",
"loader = GenericLoader(YoutubeAudioLoader(urls,save_dir),OpenAIWhisperParser())\n",
"docs = loader.load()"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "72a94fd8",
"metadata": {
"scrolled": false
},
"outputs": [
{
"data": {
"text/plain": [
"\"Hello, my name is Andrej and I've been training deep neural networks for a bit more than a decade. And in this lecture I'd like to show you what neural network training looks like under the hood. So in particular we are going to start with a blank Jupyter notebook and by the end of this lecture we will define and train a neural net and you'll get to see everything that goes on under the hood and exactly sort of how that works on an intuitive level. Now specifically what I would like to do is I w\""
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Returns a list of Documents, which can be easily viewed or parsed\n",
"docs[0].page_content[0:500]"
]
},
{
"cell_type": "markdown",
"id": "93be6b49",
"metadata": {},
"source": [
"### Building a chat app from YouTube video\n",
"\n",
"Given `Documents`, we can easily enable chat / question+answering."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "1823f042",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import RetrievalQA\n",
"from langchain.vectorstores import FAISS\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.text_splitter import RecursiveCharacterTextSplitter"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "7257cda1",
"metadata": {},
"outputs": [],
"source": [
"# Combine doc\n",
"combined_docs = [doc.page_content for doc in docs]\n",
"text = \" \".join(combined_docs)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "147c0c55",
"metadata": {},
"outputs": [],
"source": [
"# Split them\n",
"text_splitter = RecursiveCharacterTextSplitter(chunk_size = 1500, chunk_overlap = 150)\n",
"splits = text_splitter.split_text(text)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "f3556703",
"metadata": {},
"outputs": [],
"source": [
"# Build an index\n",
"embeddings = OpenAIEmbeddings()\n",
"vectordb = FAISS.from_texts(splits,embeddings)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "beaa99db",
"metadata": {},
"outputs": [],
"source": [
"# Build a QA chain\n",
"qa_chain = RetrievalQA.from_chain_type(llm = ChatOpenAI(model_name=\"gpt-3.5-turbo\", temperature=0),\n",
" chain_type=\"stuff\",\n",
" retriever=vectordb.as_retriever())"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "f2239a62",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"We need to zero out the gradient before backprop at each step because the backward pass accumulates gradients in the grad attribute of each parameter. If we don't reset the grad to zero before each backward pass, the gradients will accumulate and add up, leading to incorrect updates and slower convergence. By resetting the grad to zero before each backward pass, we ensure that the gradients are calculated correctly and that the optimization process works as intended.\""
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Ask a question!\n",
"query = \"Why do we need to zero out the gradient before backprop at each step?\"\n",
"qa_chain.run(query)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "a8d01098",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'In the context of transformers, an encoder is a component that reads in a sequence of input tokens and generates a sequence of hidden representations. On the other hand, a decoder is a component that takes in a sequence of hidden representations and generates a sequence of output tokens. The main difference between the two is that the encoder is used to encode the input sequence into a fixed-length representation, while the decoder is used to decode the fixed-length representation into an output sequence. In machine translation, for example, the encoder reads in the source language sentence and generates a fixed-length representation, which is then used by the decoder to generate the target language sentence.'"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"query = \"What is the difference between an encoder and decoder?\"\n",
"qa_chain.run(query)"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "fe1e77dd",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'For any token, x is the input vector that contains the private information of that token, k and q are the key and query vectors respectively, which are produced by forwarding linear modules on x, and v is the vector that is calculated by propagating the same linear module on x again. The key vector represents what the token contains, and the query vector represents what the token is looking for. The vector v is the information that the token will communicate to other tokens if it finds them interesting, and it gets aggregated for the purposes of the self-attention mechanism.'"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"query = \"For any token, what are x, k, v, and q?\"\n",
"qa_chain.run(query)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -1,6 +1,7 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "df770c72",
"metadata": {},
@@ -55,11 +56,12 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "6b278a1b",
"metadata": {},
"source": [
"## Add video info"
"### Add video info"
]
},
{
@@ -79,20 +81,36 @@
"metadata": {},
"outputs": [],
"source": [
"loader = YoutubeLoader.from_youtube_url(\"https://www.youtube.com/watch?v=QsYGlZkevEg\", add_video_info=True)"
"loader = YoutubeLoader.from_youtube_url(\"https://www.youtube.com/watch?v=QsYGlZkevEg\", add_video_info=True)\n",
"loader.load()"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "fc417e31",
"metadata": {},
"source": [
"### Add language preferences\n",
"\n",
"Language param : It's a list of language codes in a descending priority, `en` by default.\n",
"\n",
"translation param : It's a translate preference when the youtube does'nt have your select language, `en` by default."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "97b98e92",
"id": "08510625",
"metadata": {},
"outputs": [],
"source": [
"loader = YoutubeLoader.from_youtube_url(\"https://www.youtube.com/watch?v=QsYGlZkevEg\", add_video_info=True, language=['en','id'], translation='en')\n",
"loader.load()"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "65796cc5",
"metadata": {},

View File

@@ -5,7 +5,16 @@
"id": "1edb9e6b",
"metadata": {},
"source": [
"# Azure Cognitive Search Retriever\n",
"# Azure Cognitive Search\n",
"\n",
">[Azure Cognitive Search](https://learn.microsoft.com/en-us/azure/search/search-what-is-azure-search) (formerly known as `Azure Search`) is a cloud search service that gives developers infrastructure, APIs, and tools for building a rich search experience over private, heterogeneous content in web, mobile, and enterprise applications.\n",
"\n",
">Search is foundational to any app that surfaces text to users, where common scenarios include catalog or document search, online retail apps, or data exploration over proprietary content. When you create a search service, you'll work with the following capabilities:\n",
">- A search engine for full text search over a search index containing user-owned content\n",
">- Rich indexing, with lexical analysis and optional AI enrichment for content extraction and transformation\n",
">- Rich query syntax for text search, fuzzy search, autocomplete, geo-search and more\n",
">- Programmability through REST APIs and client libraries in Azure SDKs\n",
">- Azure integration at the data layer, machine learning layer, and AI (Cognitive Services)\n",
"\n",
"This notebook shows how to use Azure Cognitive Search (ACS) within LangChain."
]
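For orientation, a minimal hedged sketch of the retriever usage this notebook builds toward, assuming `AzureCognitiveSearchRetriever` is available in `langchain.retrievers` and that the service name, index name, and API key are supplied via the environment variable names used by the LangChain ACS integration (placeholder values below):

import os
from langchain.retrievers import AzureCognitiveSearchRetriever

# Assumed environment variable names; values are placeholders
os.environ["AZURE_COGNITIVE_SEARCH_SERVICE_NAME"] = "<your-service-name>"
os.environ["AZURE_COGNITIVE_SEARCH_INDEX_NAME"] = "<your-index-name>"
os.environ["AZURE_COGNITIVE_SEARCH_API_KEY"] = "<your-api-key>"

retriever = AzureCognitiveSearchRetriever(content_key="content")
docs = retriever.get_relevant_documents("what is langchain?")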
@@ -120,7 +129,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
"version": "3.10.6"
}
},
"nbformat": 4,

View File

@@ -0,0 +1,80 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "3df0dcf8",
"metadata": {},
"source": [
"# PubMed Retriever\n",
"\n",
"This notebook goes over how to use PubMed as a retriever\n",
"\n",
"PubMed® comprises more than 35 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full text content from PubMed Central and publisher web sites."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "aecaff63",
"metadata": {},
"outputs": [],
"source": [
"from langchain.retrievers import PubMedRetriever"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "f2f7e8d3",
"metadata": {},
"outputs": [],
"source": [
"retriever = PubMedRetriever()"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "ed115aa1",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='', metadata={'uid': '37268021', 'title': 'Dermatology in the wake of an AI revolution: who gets a say?', 'pub_date': '<Year>2023</Year><Month>May</Month><Day>31</Day>'}),\n",
" Document(page_content='', metadata={'uid': '37267643', 'title': 'What is ChatGPT and what do we do with it? Implications of the age of AI for nursing and midwifery practice and education: An editorial.', 'pub_date': '<Year>2023</Year><Month>May</Month><Day>30</Day>'}),\n",
" Document(page_content='The nursing field has undergone notable changes over time and is projected to undergo further modifications in the future, owing to the advent of sophisticated technologies and growing healthcare needs. The advent of ChatGPT, an AI-powered language model, is expected to exert a significant influence on the nursing profession, specifically in the domains of patient care and instruction. The present article delves into the ramifications of ChatGPT within the nursing domain and accentuates its capacity and constraints to transform the discipline.', metadata={'uid': '37266721', 'title': 'The Impact of ChatGPT on the Nursing Profession: Revolutionizing Patient Care and Education.', 'pub_date': '<Year>2023</Year><Month>Jun</Month><Day>02</Day>'})]"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"retriever.get_relevant_documents(\"chatgpt\")"
]
}
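The retrieved `Document` objects plug into the standard chains; a hedged sketch (assuming an `OPENAI_API_KEY` is set) that pairs the retriever with `RetrievalQA`, mirroring the YouTube example earlier in this document:

from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI
from langchain.retrievers import PubMedRetriever

retriever = PubMedRetriever()
qa_chain = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0),
    chain_type="stuff",
    retriever=retriever,
)
# The chain stuffs retrieved abstracts into the prompt and answers from them
qa_chain.run("What are the implications of ChatGPT for nursing education?")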
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -1,7 +1,6 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "9fc6205b",
"metadata": {},
@@ -14,7 +13,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "51489529-5dcd-4b86-bda6-de0a39d8ffd1",
"metadata": {},
@@ -23,7 +21,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "1435c804-069d-4ade-9a7b-006b97b767c1",
"metadata": {},
@@ -44,7 +41,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "6c15470b-a16b-4e0d-bc6a-6998bafbb5a4",
"metadata": {},
@@ -58,7 +54,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "ae3c3d16",
"metadata": {},
@@ -67,7 +62,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "6fafb73b-d6ec-4822-b161-edf0aaf5224a",
"metadata": {},
@@ -151,7 +145,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "2670363b-3806-4c7e-b14d-90a4d5d2a200",
"metadata": {},
@@ -273,7 +266,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
"version": "3.9.1"
}
},
"nbformat": 4,

View File

@@ -2,21 +2,15 @@
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Zep Memory\n",
"# Zep\n",
"\n",
"## Retriever Example\n",
">[Zep](https://docs.getzep.com/) - A long-term memory store for LLM applications.\n",
"\n",
"This notebook demonstrates how to search historical chat message histories using the [Zep Long-term Memory Store](https://getzep.github.io/).\n",
"More on `Zep`:\n",
"\n",
"We'll demonstrate:\n",
"\n",
"1. Adding conversation history to the Zep memory store.\n",
"2. Vector search over the conversation history.\n",
"\n",
"More on Zep:\n",
"\n",
"Zep stores, summarizes, embeds, indexes, and enriches conversational AI chat histories, and exposes them via simple, low-latency APIs.\n",
"`Zep` stores, summarizes, embeds, indexes, and enriches conversational AI chat histories, and exposes them via simple, low-latency APIs.\n",
"\n",
"Key Features:\n",
"\n",
@@ -28,15 +22,37 @@
"\n",
"Zep's Go Extractor model is easily extensible, with a simple, clean interface available to build new enrichment functionality, such as summarizers, entity extractors, embedders, and more.\n",
"\n",
"Zep project: [https://github.com/getzep/zep](https://github.com/getzep/zep)\n"
],
"metadata": {
"collapsed": false
}
"`Zep` project: [https://github.com/getzep/zep](https://github.com/getzep/zep)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Retriever Example\n",
"\n",
"This notebook demonstrates how to search historical chat message histories using the [Zep Long-term Memory Store](https://getzep.github.io/).\n",
"\n",
"We'll demonstrate:\n",
"\n",
"1. Adding conversation history to the Zep memory store.\n",
"2. Vector search over the conversation history.\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"ExecuteTime": {
"end_time": "2023-05-25T15:03:27.863217Z",
"start_time": "2023-05-25T15:03:25.690273Z"
},
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"from langchain.memory.chat_message_histories import ZepChatMessageHistory\n",
@@ -45,29 +61,30 @@
"\n",
"# Set this to your Zep server URL\n",
"ZEP_API_URL = \"http://localhost:8000\""
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2023-05-25T15:03:27.863217Z",
"start_time": "2023-05-25T15:03:25.690273Z"
}
}
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Initialize the Zep Chat Message History Class and add a chat message history to the memory store\n",
"\n",
"**NOTE:** Unlike other Retrievers, the content returned by the Zep Retriever is session/user specific. A `session_id` is required when instantiating the Retriever."
],
"metadata": {
"collapsed": false
}
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"ExecuteTime": {
"end_time": "2023-05-25T15:03:29.118416Z",
"start_time": "2023-05-25T15:03:29.022464Z"
},
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"session_id = str(uuid4()) # This is a unique identifier for the user/session\n",
@@ -77,18 +94,21 @@
" session_id=session_id,\n",
" url=ZEP_API_URL,\n",
")"
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2023-05-25T15:03:29.118416Z",
"start_time": "2023-05-25T15:03:29.022464Z"
}
}
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"ExecuteTime": {
"end_time": "2023-05-25T15:03:30.271181Z",
"start_time": "2023-05-25T15:03:30.180442Z"
},
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"# Preload some messages into the memory. The default message window is 12 messages. We want to push beyond this to demonstrate auto-summarization.\n",
@@ -157,35 +177,42 @@
" if msg[\"role\"] == \"human\"\n",
" else AIMessage(content=msg[\"content\"])\n",
" )\n"
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2023-05-25T15:03:30.271181Z",
"start_time": "2023-05-25T15:03:30.180442Z"
}
}
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Use the Zep Retriever to vector search over the Zep memory\n",
"\n",
"Zep provides native vector search over historical conversation memory. Embedding happens automatically.\n",
"\n",
"NOTE: Embedding of messages occurs asynchronously, so the first query may not return results. Subsequent queries will return results as the embeddings are generated."
],
"metadata": {
"collapsed": false
}
]
},
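Since the first query can race the asynchronous embedding, a simple hedged workaround is to poll until results arrive (sketch only; `zep_retriever` is constructed as in the cells below):

import time

def search_with_retry(retriever, query, attempts=5, delay=1.0):
    # Retry until the asynchronously generated embeddings become queryable
    for _ in range(attempts):
        results = retriever.get_relevant_documents(query)
        if results:
            return results
        time.sleep(delay)
    return []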
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"ExecuteTime": {
"end_time": "2023-05-25T15:03:32.979155Z",
"start_time": "2023-05-25T15:03:32.590310Z"
},
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
"data": {
"text/plain": "[Document(page_content='Who was Octavia Butler?', metadata={'score': 0.7759001673780126, 'uuid': '3a82a02f-056e-4c6a-b960-67ebdf3b2b93', 'created_at': '2023-05-25T15:03:30.2041Z', 'role': 'human', 'token_count': 8}),\n Document(page_content=\"Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R. Delany, and Joanna Russ.\", metadata={'score': 0.7602262941130749, 'uuid': 'a2fc9c21-0897-46c8-bef7-6f5c0f71b04a', 'created_at': '2023-05-25T15:03:30.248065Z', 'role': 'ai', 'token_count': 27}),\n Document(page_content='Who were her contemporaries?', metadata={'score': 0.757553366415519, 'uuid': '41f9c41a-a205-41e1-b48b-a0a4cd943fc8', 'created_at': '2023-05-25T15:03:30.243995Z', 'role': 'human', 'token_count': 8}),\n Document(page_content='Octavia Estelle Butler (June 22, 1947 February 24, 2006) was an American science fiction author.', metadata={'score': 0.7546211059317948, 'uuid': '34678311-0098-4f1a-8fd4-5615ac692deb', 'created_at': '2023-05-25T15:03:30.231427Z', 'role': 'ai', 'token_count': 31}),\n Document(page_content='Which books of hers were made into movies?', metadata={'score': 0.7496714959247069, 'uuid': '18046c3a-9666-4d3e-b4f0-43d1394732b7', 'created_at': '2023-05-25T15:03:30.236837Z', 'role': 'human', 'token_count': 11})]"
"text/plain": [
"[Document(page_content='Who was Octavia Butler?', metadata={'score': 0.7759001673780126, 'uuid': '3a82a02f-056e-4c6a-b960-67ebdf3b2b93', 'created_at': '2023-05-25T15:03:30.2041Z', 'role': 'human', 'token_count': 8}),\n",
" Document(page_content=\"Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R. Delany, and Joanna Russ.\", metadata={'score': 0.7602262941130749, 'uuid': 'a2fc9c21-0897-46c8-bef7-6f5c0f71b04a', 'created_at': '2023-05-25T15:03:30.248065Z', 'role': 'ai', 'token_count': 27}),\n",
" Document(page_content='Who were her contemporaries?', metadata={'score': 0.757553366415519, 'uuid': '41f9c41a-a205-41e1-b48b-a0a4cd943fc8', 'created_at': '2023-05-25T15:03:30.243995Z', 'role': 'human', 'token_count': 8}),\n",
" Document(page_content='Octavia Estelle Butler (June 22, 1947 February 24, 2006) was an American science fiction author.', metadata={'score': 0.7546211059317948, 'uuid': '34678311-0098-4f1a-8fd4-5615ac692deb', 'created_at': '2023-05-25T15:03:30.231427Z', 'role': 'ai', 'token_count': 31}),\n",
" Document(page_content='Which books of hers were made into movies?', metadata={'score': 0.7496714959247069, 'uuid': '18046c3a-9666-4d3e-b4f0-43d1394732b7', 'created_at': '2023-05-25T15:03:30.236837Z', 'role': 'human', 'token_count': 11})]"
]
},
"execution_count": 4,
"metadata": {},
@@ -202,31 +229,38 @@
")\n",
"\n",
"await zep_retriever.aget_relevant_documents(\"Who wrote Parable of the Sower?\")"
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2023-05-25T15:03:32.979155Z",
"start_time": "2023-05-25T15:03:32.590310Z"
}
}
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can also use the Zep sync API to retrieve results:"
],
"metadata": {
"collapsed": false
}
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"ExecuteTime": {
"end_time": "2023-05-25T15:03:34.713354Z",
"start_time": "2023-05-25T15:03:34.577974Z"
},
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
"data": {
"text/plain": "[Document(page_content='Parable of the Sower is a science fiction novel by Octavia Butler, published in 1993. It follows the story of Lauren Olamina, a young woman living in a dystopian future where society has collapsed due to environmental disasters, poverty, and violence.', metadata={'score': 0.8897321402776546, 'uuid': '1c09603a-52c1-40d7-9d69-29f26256029c', 'created_at': '2023-05-25T15:03:30.268257Z', 'role': 'ai', 'token_count': 56}),\n Document(page_content=\"Write a short synopsis of Butler's book, Parable of the Sower. What is it about?\", metadata={'score': 0.8857628682610436, 'uuid': 'f6706e8c-6c91-452f-8c1b-9559fd924657', 'created_at': '2023-05-25T15:03:30.265302Z', 'role': 'human', 'token_count': 23}),\n Document(page_content='Who was Octavia Butler?', metadata={'score': 0.7759670375149477, 'uuid': '3a82a02f-056e-4c6a-b960-67ebdf3b2b93', 'created_at': '2023-05-25T15:03:30.2041Z', 'role': 'human', 'token_count': 8}),\n Document(page_content=\"Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R. Delany, and Joanna Russ.\", metadata={'score': 0.7602854653476563, 'uuid': 'a2fc9c21-0897-46c8-bef7-6f5c0f71b04a', 'created_at': '2023-05-25T15:03:30.248065Z', 'role': 'ai', 'token_count': 27}),\n Document(page_content='You might want to read Ursula K. Le Guin or Joanna Russ.', metadata={'score': 0.7595293992240313, 'uuid': 'f22f2498-6118-4c74-8718-aa89ccd7e3d6', 'created_at': '2023-05-25T15:03:30.261198Z', 'role': 'ai', 'token_count': 18})]"
"text/plain": [
"[Document(page_content='Parable of the Sower is a science fiction novel by Octavia Butler, published in 1993. It follows the story of Lauren Olamina, a young woman living in a dystopian future where society has collapsed due to environmental disasters, poverty, and violence.', metadata={'score': 0.8897321402776546, 'uuid': '1c09603a-52c1-40d7-9d69-29f26256029c', 'created_at': '2023-05-25T15:03:30.268257Z', 'role': 'ai', 'token_count': 56}),\n",
" Document(page_content=\"Write a short synopsis of Butler's book, Parable of the Sower. What is it about?\", metadata={'score': 0.8857628682610436, 'uuid': 'f6706e8c-6c91-452f-8c1b-9559fd924657', 'created_at': '2023-05-25T15:03:30.265302Z', 'role': 'human', 'token_count': 23}),\n",
" Document(page_content='Who was Octavia Butler?', metadata={'score': 0.7759670375149477, 'uuid': '3a82a02f-056e-4c6a-b960-67ebdf3b2b93', 'created_at': '2023-05-25T15:03:30.2041Z', 'role': 'human', 'token_count': 8}),\n",
" Document(page_content=\"Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R. Delany, and Joanna Russ.\", metadata={'score': 0.7602854653476563, 'uuid': 'a2fc9c21-0897-46c8-bef7-6f5c0f71b04a', 'created_at': '2023-05-25T15:03:30.248065Z', 'role': 'ai', 'token_count': 27}),\n",
" Document(page_content='You might want to read Ursula K. Le Guin or Joanna Russ.', metadata={'score': 0.7595293992240313, 'uuid': 'f22f2498-6118-4c74-8718-aa89ccd7e3d6', 'created_at': '2023-05-25T15:03:30.261198Z', 'role': 'ai', 'token_count': 18})]"
]
},
"execution_count": 5,
"metadata": {},
@@ -235,48 +269,44 @@
],
"source": [
"zep_retriever.get_relevant_documents(\"Who wrote Parable of the Sower?\")"
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2023-05-25T15:03:34.713354Z",
"start_time": "2023-05-25T15:03:34.577974Z"
}
}
]
},
{
"cell_type": "code",
"execution_count": 5,
"outputs": [],
"source": [],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2023-05-18T20:09:21.298710Z",
"start_time": "2023-05-18T20:09:21.297169Z"
},
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
}
},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 2
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython2",
"version": "2.7.6"
"pygments_lexer": "ipython3",
"version": "3.10.6"
}
},
"nbformat": 4,
"nbformat_minor": 0
"nbformat_minor": 4
}

View File

@@ -0,0 +1,131 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "73dbcdb9",
"metadata": {},
"source": [
"# SentenceTransformersTokenTextSplitter\n",
"\n",
"This notebook demonstrates how to use the `SentenceTransformersTokenTextSplitter` text splitter.\n",
"\n",
"Language models have a token limit. You should not exceed the token limit. When you split your text into chunks it is therefore a good idea to count the number of tokens. There are many tokenizers. When you count tokens in your text you should use the same tokenizer as used in the language model. \n",
"\n",
"The `SentenceTransformersTokenTextSplitter` is a specialized text splitter for use with the sentence-transformer models. The default behaviour is to split the text into chunks that fit the token window of the sentence transformer model that you would like to use."
]
},
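To make the token-window guarantee concrete, a short hedged check using only the splitter methods shown in this notebook; it prints each chunk's token count next to the model's window size (special start/stop tokens may account for a small difference):

from langchain.text_splitter import SentenceTransformersTokenTextSplitter

splitter = SentenceTransformersTokenTextSplitter(chunk_overlap=0)
long_text = "Lorem ipsum dolor sit amet. " * 500

for chunk in splitter.split_text(text=long_text):
    # count_tokens uses the same tokenizer as the sentence-transformer model;
    # each chunk should fit the model's token window
    print(splitter.count_tokens(text=chunk), "<=", splitter.maximum_tokens_per_chunk)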
{
"cell_type": "code",
"execution_count": 1,
"id": "9dd5419e",
"metadata": {},
"outputs": [],
"source": [
"from langchain.text_splitter import SentenceTransformersTokenTextSplitter"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "b43e5d54",
"metadata": {},
"outputs": [],
"source": [
"splitter = SentenceTransformersTokenTextSplitter(chunk_overlap=0)\n",
"text = \"Lorem \""
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "1df84cb4",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"2\n"
]
}
],
"source": [
"count_start_and_stop_tokens = 2\n",
"text_token_count = splitter.count_tokens(text=text) - count_start_and_stop_tokens\n",
"print(text_token_count)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "d7ad2213",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tokens in text to split: 514\n"
]
}
],
"source": [
"token_multiplier = splitter.maximum_tokens_per_chunk // text_token_count + 1\n",
"\n",
"# `text_to_split` does not fit in a single chunk\n",
"text_to_split = text * token_multiplier\n",
"\n",
"print(f\"tokens in text to split: {splitter.count_tokens(text=text_to_split)}\")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "818aea04",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"lorem\n"
]
}
],
"source": [
"text_chunks = splitter.split_text(text=text_to_split)\n",
"\n",
"print(text_chunks[1])"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e9ba4f23",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -151,6 +151,15 @@
"## Similarity search with score"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "346347d7",
"metadata": {},
"source": [
"The returned distance score is cosine distance. Therefore, a lower score is better."
]
},
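A hedged illustration of reading those scores, assuming the notebook's `docsearch` vectorstore built earlier: lower cosine distance means a closer match.

docs_and_scores = docsearch.similarity_search_with_score(
    "What did the president say about Ketanji Brown Jackson?"
)
for doc, score in docs_and_scores:
    # score is a cosine distance: smaller values are better matches
    print(f"{score:.4f}  {doc.page_content[:60]}...")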
{
"cell_type": "code",
"execution_count": 10,

View File

@@ -0,0 +1,399 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "683953b3",
"metadata": {},
"source": [
"# ClickHouse Vector Search\n",
"\n",
"> [ClickHouse](https://clickhouse.com/) is the fastest and most resource efficient open-source database for real-time apps and analytics with full SQL support and a wide range of functions to assist users in writing analytical queries. Lately added data structures and distance search functions (like `L2Distance`) as well as [approximate nearest neighbor search indexes](https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/annindexes) enable ClickHouse to be used as a high performance and scalable vector database to store and search vectors with SQL.\n",
"\n",
"This notebook shows how to use functionality related to the `ClickHouse` vector search."
]
},
{
"cell_type": "markdown",
"id": "43ead5d5-2c1f-4dce-a69a-cb00e4f9d6f0",
"metadata": {},
"source": [
"## Setting up envrionments"
]
},
{
"cell_type": "markdown",
"id": "b2c434bc",
"metadata": {},
"source": [
"Setting up local clickhouse server with docker (optional)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "249a7751",
"metadata": {
"ExecuteTime": {
"end_time": "2023-06-03T08:43:43.035606Z",
"start_time": "2023-06-03T08:43:42.618531Z"
}
},
"outputs": [],
"source": [
"! docker run -d -p 8123:8123 -p9000:9000 --name langchain-clickhouse-server --ulimit nofile=262144:262144 clickhouse/clickhouse-server:23.4.2.11"
]
},
{
"cell_type": "markdown",
"id": "7bd3c1c0",
"metadata": {},
"source": [
"Setup up clickhouse client driver"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9d614bf8",
"metadata": {},
"outputs": [],
"source": [
"!pip install clickhouse-connect"
]
},
{
"cell_type": "markdown",
"id": "15a1d477-9cdb-4d82-b019-96951ecb2b72",
"metadata": {},
"source": [
"We want to use OpenAIEmbeddings so we have to get the OpenAI API Key."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "91003ea5-0c8c-436c-a5de-aaeaeef2f458",
"metadata": {
"ExecuteTime": {
"end_time": "2023-06-03T08:49:35.383673Z",
"start_time": "2023-06-03T08:49:33.984547Z"
}
},
"outputs": [],
"source": [
"import os\n",
"import getpass\n",
"\n",
"if not os.environ['OPENAI_API_KEY']:\n",
" os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "aac9563e",
"metadata": {
"ExecuteTime": {
"end_time": "2023-06-03T08:33:31.554934Z",
"start_time": "2023-06-03T08:33:31.549590Z"
},
"tags": []
},
"outputs": [],
"source": [
"from langchain.embeddings.openai import OpenAIEmbeddings\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain.vectorstores import Clickhouse, ClickhouseSettings"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "a3c3999a",
"metadata": {
"ExecuteTime": {
"end_time": "2023-06-03T08:33:32.527387Z",
"start_time": "2023-06-03T08:33:32.501312Z"
},
"tags": []
},
"outputs": [],
"source": [
"from langchain.document_loaders import TextLoader\n",
"loader = TextLoader('../../../state_of_the_union.txt')\n",
"documents = loader.load()\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"docs = text_splitter.split_documents(documents)\n",
"\n",
"embeddings = OpenAIEmbeddings()"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "6e104aee",
"metadata": {
"ExecuteTime": {
"end_time": "2023-06-03T08:33:35.503823Z",
"start_time": "2023-06-03T08:33:33.745832Z"
}
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Inserting data...: 100%|██████████| 42/42 [00:00<00:00, 2801.49it/s]\n"
]
}
],
"source": [
"for d in docs:\n",
" d.metadata = {'some': 'metadata'}\n",
"settings = ClickhouseSettings(table=\"clickhouse_vector_search_example\")\n",
"docsearch = Clickhouse.from_documents(docs, embeddings, config=settings)\n",
"\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"docs = docsearch.similarity_search(query)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "9c608226",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while youre at it, pass the Disclose Act so Americans can know who is funding our elections. \n",
"\n",
"Tonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n",
"\n",
"One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n",
"\n",
"And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nations top legal minds, who will continue Justice Breyers legacy of excellence.\n"
]
}
],
"source": [
"print(docs[0].page_content)"
]
},
{
"cell_type": "markdown",
"id": "e3a8b105",
"metadata": {},
"source": [
"## Get connection info and data schema"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "69996818",
"metadata": {
"ExecuteTime": {
"end_time": "2023-06-03T08:28:58.252991Z",
"start_time": "2023-06-03T08:28:58.197560Z"
},
"scrolled": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[92m\u001b[1mdefault.clickhouse_vector_search_example @ localhost:8123\u001b[0m\n",
"\n",
"\u001b[1musername: None\u001b[0m\n",
"\n",
"Table Schema:\n",
"---------------------------------------------------\n",
"|\u001b[94mid \u001b[0m|\u001b[96mNullable(String) \u001b[0m|\n",
"|\u001b[94mdocument \u001b[0m|\u001b[96mNullable(String) \u001b[0m|\n",
"|\u001b[94membedding \u001b[0m|\u001b[96mArray(Float32) \u001b[0m|\n",
"|\u001b[94mmetadata \u001b[0m|\u001b[96mObject('json') \u001b[0m|\n",
"|\u001b[94muuid \u001b[0m|\u001b[96mUUID \u001b[0m|\n",
"---------------------------------------------------\n",
"\n"
]
}
],
"source": [
"print(str(docsearch))"
]
},
{
"cell_type": "markdown",
"id": "324ac147",
"metadata": {},
"source": [
"### Clickhouse table schema"
]
},
{
"cell_type": "markdown",
"id": "b5bd7c5b",
"metadata": {},
"source": [
"> Clickhouse table will be automatically created if not exist by default. Advanced users could pre-create the table with optimized settings. For distributed Clickhouse cluster with sharding, table engine should be configured as `Distributed`."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "54f4f561",
"metadata": {
"scrolled": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Clickhouse Table DDL:\n",
"\n",
"CREATE TABLE IF NOT EXISTS default.clickhouse_vector_search_example(\n",
" id Nullable(String),\n",
" document Nullable(String),\n",
" embedding Array(Float32),\n",
" metadata JSON,\n",
" uuid UUID DEFAULT generateUUIDv4(),\n",
" CONSTRAINT cons_vec_len CHECK length(embedding) = 1536,\n",
" INDEX vec_idx embedding TYPE annoy(100,'L2Distance') GRANULARITY 1000\n",
") ENGINE = MergeTree ORDER BY uuid SETTINGS index_granularity = 8192\n"
]
}
],
"source": [
"print(f\"Clickhouse Table DDL:\\n\\n{docsearch.schema}\")"
]
},
{
"cell_type": "markdown",
"id": "f59360c0",
"metadata": {},
"source": [
"## Filtering\n",
"\n",
"You can have direct access to ClickHouse SQL where statement. You can write `WHERE` clause following standard SQL.\n",
"\n",
"**NOTE**: Please be aware of SQL injection, this interface must not be directly called by end-user.\n",
"\n",
"If you custimized your `column_map` under your setting, you search with filter like this:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "232055f6",
"metadata": {
"ExecuteTime": {
"end_time": "2023-06-03T08:29:36.680805Z",
"start_time": "2023-06-03T08:29:34.963676Z"
}
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Inserting data...: 100%|██████████| 42/42 [00:00<00:00, 6939.56it/s]\n"
]
}
],
"source": [
"from langchain.vectorstores import Clickhouse, ClickhouseSettings\n",
"from langchain.document_loaders import TextLoader\n",
"\n",
"loader = TextLoader('../../../state_of_the_union.txt')\n",
"documents = loader.load()\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"docs = text_splitter.split_documents(documents)\n",
"\n",
"embeddings = OpenAIEmbeddings()\n",
"\n",
"for i, d in enumerate(docs):\n",
" d.metadata = {'doc_id': i}\n",
"\n",
"docsearch = Clickhouse.from_documents(docs, embeddings)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "ddbcee77",
"metadata": {
"ExecuteTime": {
"end_time": "2023-06-03T08:29:43.487436Z",
"start_time": "2023-06-03T08:29:43.040831Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"0.6779101415357189 {'doc_id': 0} Madam Speaker, Madam...\n",
"0.6997970363474885 {'doc_id': 8} And so many families...\n",
"0.7044504914336727 {'doc_id': 1} Groups of citizens b...\n",
"0.7053558702165094 {'doc_id': 6} And Im taking robus...\n"
]
}
],
"source": [
"meta = docsearch.metadata_column\n",
"output = docsearch.similarity_search_with_relevance_scores('What did the president say about Ketanji Brown Jackson?', \n",
" k=4, where_str=f\"{meta}.doc_id<10\")\n",
"for d, dist in output:\n",
" print(dist, d.metadata, d.page_content[:20] + '...')"
]
},
{
"cell_type": "markdown",
"id": "a359ed74",
"metadata": {},
"source": [
"## Deleting your data"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "fb6a9d36",
"metadata": {
"ExecuteTime": {
"end_time": "2023-06-03T08:30:24.822384Z",
"start_time": "2023-06-03T08:30:24.798571Z"
}
},
"outputs": [],
"source": [
"docsearch.drop()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.2"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -1,6 +1,7 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "2ce41f46-5711-4311-b04d-2fe233ac5b1b",
"metadata": {},
@@ -13,6 +14,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "7ee37d28",
"metadata": {},
@@ -55,6 +57,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "8dbb6de2",
"metadata": {
@@ -98,6 +101,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "ed6f905b-4853-4a44-9730-614aa8e22b78",
"metadata": {},
@@ -145,6 +149,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "3febb987-e903-416f-af26-6897d84c8d61",
"metadata": {},
@@ -152,6 +157,15 @@
"### Similarity search with score"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "bb1df11a",
"metadata": {},
"source": [
"The returned distance score is cosine distance. Therefore, a lower score is better."
]
},
{
"cell_type": "code",
"execution_count": 7,

View File

@@ -1,6 +1,7 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "a3afefb0-7e99-4912-a222-c6b186da11af",
"metadata": {},
@@ -13,6 +14,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "5031a3ec",
"metadata": {},
@@ -54,6 +56,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "6e57a389-f637-4b8f-9ab2-759ae7485f78",
"metadata": {},
@@ -95,6 +98,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "efbb6684-3846-4332-a624-ddd4d75844c1",
"metadata": {},
@@ -142,6 +146,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "43896697-f99e-47b6-9117-47a25e9afa9c",
"metadata": {},
@@ -149,6 +154,15 @@
"### Similarity search with score"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "414a9bc9",
"metadata": {},
"source": [
"The returned distance score is cosine distance. Therefore, a lower score is better."
]
},
{
"cell_type": "code",
"execution_count": 7,

View File

@@ -1,6 +1,7 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "683953b3",
"metadata": {},
@@ -29,6 +30,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "38237514-b3fa-44a4-9cff-30cd6bf50073",
"metadata": {},
@@ -45,7 +47,7 @@
},
"outputs": [
{
"name": "stdin",
"name": "stdout",
"output_type": "stream",
"text": [
"OpenAI API Key: ········\n"
@@ -137,12 +139,13 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "f13473b5",
"metadata": {},
"source": [
"## Similarity Search with score\n",
"There are some FAISS specific methods. One of them is `similarity_search_with_score`, which allows you to return not only the documents but also the similarity score of the query to them."
"There are some FAISS specific methods. One of them is `similarity_search_with_score`, which allows you to return not only the documents but also the distance score of the query to them. The returned distance score is L2 distance. Therefore, a lower score is better."
]
},
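For FAISS the same pattern applies with L2 distance; a minimal hedged sketch assuming a FAISS index `db` built from the notebook's documents and a `query` string defined earlier:

docs_and_scores = db.similarity_search_with_score(query)
doc, score = docs_and_scores[0]
# score is an L2 distance, so a lower value indicates a closer document
print(score, doc.page_content[:80])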
{
@@ -178,6 +181,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "f34420cf",
"metadata": {},
@@ -197,6 +201,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "31bda7fd",
"metadata": {},
@@ -257,6 +262,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "57da60d4",
"metadata": {},

View File

@@ -5,7 +5,8 @@
"id": "683953b3",
"metadata": {},
"source": [
"# MongoDB Atlas Vector Search\n",
"#### Commented out until further notice\n",
"MongoDB Atlas Vector Search\n",
"\n",
">[MongoDB Atlas](https://www.mongodb.com/docs/atlas/) is a document database managed in the cloud. It also enables Lucene and its vector search feature.\n",
"\n",
@@ -43,7 +44,7 @@
},
{
"cell_type": "markdown",
"id": "320af802-9271-46ee-948f-d2453933d44b",
"id": "457ace44-1d95-4001-9dd5-78811ab208ad",
"metadata": {},
"source": [
"We want to use `OpenAIEmbeddings` so we have to get the OpenAI API Key. Make sure the environment variable `OPENAI_API_KEY` is set up before proceeding."
@@ -118,15 +119,14 @@
"\n",
"db_name = \"lanchain_db\"\n",
"collection_name = \"langchain_col\"\n",
"namespace = f\"{db_name}.{collection_name}\"\n",
"collection = client[db_name][collection_name]\n",
"index_name = \"langchain_demo\"\n",
"\n",
"# insert the documents in MongoDB Atlas with their embedding\n",
"docsearch = MongoDBAtlasVectorSearch.from_documents(\n",
" docs,\n",
" embeddings,\n",
" client=client,\n",
" namespace=namespace,\n",
" collection=collection,\n",
" index_name=index_name\n",
")\n",
"\n",
@@ -144,6 +144,47 @@
"source": [
"print(docs[0].page_content)"
]
},
{
"cell_type": "markdown",
"id": "851a2ec9-9390-49a4-8412-3e132c9f789d",
"metadata": {},
"source": [
"You can reuse vector index you created before, make sure environment variable `OPENAI_API_KEY` is set up, then create another file."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6336fe79-3e73-48be-b20a-0ff1bb6a4399",
"metadata": {},
"outputs": [],
"source": [
"from pymongo import MongoClient\n",
"from langchain.vectorstores import MongoDBAtlasVectorSearch\n",
"from langchain.embeddings.openai import OpenAIEmbeddings\n",
"import os\n",
"\n",
"MONGODB_ATLAS_URI = os.environ['MONGODB_ATLAS_URI']\n",
"\n",
"# initialize MongoDB python client\n",
"client = MongoClient(MONGODB_ATLAS_URI)\n",
"\n",
"db_name = \"langchain_db\"\n",
"collection_name = \"langchain_col\"\n",
"collection = client[db_name][collection_name]\n",
"index_name = \"langchain_index\"\n",
"\n",
"# initialize vector store\n",
"vectorStore = MongoDBAtlasVectorSearch(\n",
" collection, OpenAIEmbeddings(), index_name=index_name)\n",
"\n",
"# perform a similarity search between the embedding of the query and the embeddings of the documents\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"docs = vectorStore.similarity_search(query)\n",
"\n",
"print(docs[0].page_content)"
]
}
],
"metadata": {

View File

@@ -1,6 +1,7 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "683953b3",
"metadata": {},
@@ -13,6 +14,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "43ead5d5-2c1f-4dce-a69a-cb00e4f9d6f0",
"metadata": {},
@@ -33,6 +35,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "15a1d477-9cdb-4d82-b019-96951ecb2b72",
"metadata": {},
@@ -54,6 +57,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "a9d16fa3",
"metadata": {},
@@ -169,6 +173,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "e3a8b105",
"metadata": {},
@@ -187,6 +192,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "f59360c0",
"metadata": {},
@@ -231,6 +237,24 @@
"docsearch = MyScale.from_documents(docs, embeddings)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "8d867b05",
"metadata": {},
"source": [
"### Similarity search with score"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "9ec25cc5",
"metadata": {},
"source": [
"The returned distance score is cosine distance. Therefore, a lower score is better."
]
},
{
"cell_type": "code",
"execution_count": 16,
@@ -257,6 +281,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "a359ed74",
"metadata": {},

View File

@@ -24,13 +24,48 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 60,
"metadata": {
"tags": []
},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Requirement already satisfied: pgvector in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (0.1.8)\n",
"Requirement already satisfied: numpy in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from pgvector) (1.24.3)\n",
"Requirement already satisfied: openai in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (0.27.7)\n",
"Requirement already satisfied: requests>=2.20 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from openai) (2.28.2)\n",
"Requirement already satisfied: tqdm in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from openai) (4.65.0)\n",
"Requirement already satisfied: aiohttp in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from openai) (3.8.4)\n",
"Requirement already satisfied: charset-normalizer<4,>=2 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from requests>=2.20->openai) (3.1.0)\n",
"Requirement already satisfied: idna<4,>=2.5 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from requests>=2.20->openai) (3.4)\n",
"Requirement already satisfied: urllib3<1.27,>=1.21.1 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from requests>=2.20->openai) (1.26.15)\n",
"Requirement already satisfied: certifi>=2017.4.17 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from requests>=2.20->openai) (2023.5.7)\n",
"Requirement already satisfied: attrs>=17.3.0 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from aiohttp->openai) (23.1.0)\n",
"Requirement already satisfied: multidict<7.0,>=4.5 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from aiohttp->openai) (6.0.4)\n",
"Requirement already satisfied: async-timeout<5.0,>=4.0.0a3 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from aiohttp->openai) (4.0.2)\n",
"Requirement already satisfied: yarl<2.0,>=1.0 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from aiohttp->openai) (1.9.2)\n",
"Requirement already satisfied: frozenlist>=1.1.1 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from aiohttp->openai) (1.3.3)\n",
"Requirement already satisfied: aiosignal>=1.1.2 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from aiohttp->openai) (1.3.1)\n",
"Requirement already satisfied: psycopg2-binary in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (2.9.6)\n",
"Requirement already satisfied: tiktoken in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (0.4.0)\n",
"Requirement already satisfied: regex>=2022.1.18 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from tiktoken) (2023.5.5)\n",
"Requirement already satisfied: requests>=2.26.0 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from tiktoken) (2.28.2)\n",
"Requirement already satisfied: charset-normalizer<4,>=2 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from requests>=2.26.0->tiktoken) (3.1.0)\n",
"Requirement already satisfied: idna<4,>=2.5 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from requests>=2.26.0->tiktoken) (3.4)\n",
"Requirement already satisfied: urllib3<1.27,>=1.21.1 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from requests>=2.26.0->tiktoken) (1.26.15)\n",
"Requirement already satisfied: certifi>=2017.4.17 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from requests>=2.26.0->tiktoken) (2023.5.7)\n"
]
}
],
"source": [
"!pip install pgvector"
"# Pip install necessary package\n",
"!pip install pgvector\n",
"!pip install openai\n",
"!pip install psycopg2-binary\n",
"!pip install tiktoken"
]
},
{
@@ -42,9 +77,17 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 19,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"OpenAI API Key:········\n"
]
}
],
"source": [
"import os\n",
"import getpass\n",
@@ -54,7 +97,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 61,
"metadata": {
"tags": []
},
@@ -65,7 +108,7 @@
"False"
]
},
"execution_count": 1,
"execution_count": 61,
"metadata": {},
"output_type": "execute_result"
}
@@ -79,7 +122,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 62,
"metadata": {
"tags": []
},
@@ -94,7 +137,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 63,
"metadata": {},
"outputs": [],
"source": [
@@ -129,6 +172,29 @@
"# postgresql+psycopg2://username:password@localhost:5432/database_name"
]
},
{
"cell_type": "code",
"execution_count": 64,
"metadata": {},
"outputs": [],
"source": [
"# ## PGVector needs the connection string to the database.\n",
"# ## We will load it from the environment variables.\n",
"# import os\n",
"# CONNECTION_STRING = PGVector.connection_string_from_db_params(\n",
"# driver=os.environ.get(\"PGVECTOR_DRIVER\", \"psycopg2\"),\n",
"# host=os.environ.get(\"PGVECTOR_HOST\", \"localhost\"),\n",
"# port=int(os.environ.get(\"PGVECTOR_PORT\", \"5432\")),\n",
"# database=os.environ.get(\"PGVECTOR_DATABASE\", \"rd-embeddings\"),\n",
"# user=os.environ.get(\"PGVECTOR_USER\", \"admin\"),\n",
"# password=os.environ.get(\"PGVECTOR_PASSWORD\", \"password\"),\n",
"# )\n",
"\n",
"\n",
"# ## Example\n",
"# # postgresql+psycopg2://username:password@localhost:5432/database_name"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -145,7 +211,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 69,
"metadata": {},
"outputs": [],
"source": [
@@ -165,7 +231,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 70,
"metadata": {},
"outputs": [
{
@@ -173,7 +239,7 @@
"output_type": "stream",
"text": [
"--------------------------------------------------------------------------------\n",
"Score: 0.6076628081132506\n",
"Score: 0.6076804864602984\n",
"Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while youre at it, pass the Disclose Act so Americans can know who is funding our elections. \n",
"\n",
"Tonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n",
@@ -183,7 +249,7 @@
"And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nations top legal minds, who will continue Justice Breyers legacy of excellence.\n",
"--------------------------------------------------------------------------------\n",
"--------------------------------------------------------------------------------\n",
"Score: 0.6076628081132506\n",
"Score: 0.6076804864602984\n",
"Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while youre at it, pass the Disclose Act so Americans can know who is funding our elections. \n",
"\n",
"Tonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n",
@@ -193,24 +259,32 @@
"And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nations top legal minds, who will continue Justice Breyers legacy of excellence.\n",
"--------------------------------------------------------------------------------\n",
"--------------------------------------------------------------------------------\n",
"Score: 0.6076804780049968\n",
"Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while youre at it, pass the Disclose Act so Americans can know who is funding our elections. \n",
"Score: 0.659062774389974\n",
"A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since shes been nominated, shes received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n",
"\n",
"Tonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n",
"And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n",
"\n",
"One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n",
"We can do both. At our border, weve installed new technology like cutting-edge scanners to better detect drug smuggling. \n",
"\n",
"And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nations top legal minds, who will continue Justice Breyers legacy of excellence.\n",
"Weve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \n",
"\n",
"Were putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \n",
"\n",
"Were securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.\n",
"--------------------------------------------------------------------------------\n",
"--------------------------------------------------------------------------------\n",
"Score: 0.6076804780049968\n",
"Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while youre at it, pass the Disclose Act so Americans can know who is funding our elections. \n",
"Score: 0.659062774389974\n",
"A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since shes been nominated, shes received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n",
"\n",
"Tonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n",
"And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n",
"\n",
"One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n",
"We can do both. At our border, weve installed new technology like cutting-edge scanners to better detect drug smuggling. \n",
"\n",
"And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nations top legal minds, who will continue Justice Breyers legacy of excellence.\n",
"Weve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \n",
"\n",
"Were putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \n",
"\n",
"Were securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.\n",
"--------------------------------------------------------------------------------\n"
]
}
@@ -224,7 +298,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -232,7 +305,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -241,12 +313,14 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 55,
"metadata": {},
"outputs": [],
"source": [
"data=docs\n",
"api_key=os.environ['OPENAI_API_KEY']\n",
"db = PGVector.from_documents(\n",
" documents=data,\n",
" documents=docs,\n",
" embedding=embeddings,\n",
" collection_name=collection_name,\n",
" connection_string=connection_string,\n",
@@ -257,7 +331,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -266,10 +339,14 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 56,
"metadata": {},
"outputs": [],
"source": [
"connection_string = CONNECTION_STRING \n",
"embedding=embeddings\n",
"collection_name=\"state_of_the_union\"\n",
"from langchain.vectorstores.pgvector import DistanceStrategy\n",
"store = PGVector(\n",
" connection_string=connection_string, \n",
" embedding_function=embedding, \n",
@@ -279,6 +356,127 @@
"\n",
"retriever = store.as_retriever()"
]
},
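As context for the cells that follow, a minimal sketch of how this retriever might be used; `get_relevant_documents` is the standard retriever entry point in this version of LangChain, and the preview slice is illustrative:

```python
# A minimal sketch of using the retriever built above; assumes `store`
# already points at a populated collection.
query = "What did the president say about Ketanji Brown Jackson"
relevant_docs = retriever.get_relevant_documents(query)

for doc in relevant_docs:
    print(doc.page_content[:80])  # short preview of each hit
```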
{
"cell_type": "code",
"execution_count": 57,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"vectorstore=<langchain.vectorstores.pgvector.PGVector object at 0x7fe9a1b1c670> search_type='similarity' search_kwargs={}\n"
]
}
],
"source": [
"print(retriever)"
]
},
{
"cell_type": "code",
"execution_count": 83,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[(Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while youre at it, pass the Disclose Act so Americans can know who is funding our elections. \\n\\nTonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \\n\\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \\n\\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nations top legal minds, who will continue Justice Breyers legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'}), 0.6075870262188066), (Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while youre at it, pass the Disclose Act so Americans can know who is funding our elections. \\n\\nTonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \\n\\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \\n\\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nations top legal minds, who will continue Justice Breyers legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'}), 0.6075870262188066), (Document(page_content='A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since shes been nominated, shes received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \\n\\nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \\n\\nWe can do both. At our border, weve installed new technology like cutting-edge scanners to better detect drug smuggling. \\n\\nWeve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \\n\\nWere putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \\n\\nWere securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.', metadata={'source': '../../../state_of_the_union.txt'}), 0.6589478388546668), (Document(page_content='A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since shes been nominated, shes received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \\n\\nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \\n\\nWe can do both. At our border, weve installed new technology like cutting-edge scanners to better detect drug smuggling. \\n\\nWeve set up joint patrols with Mexico and Guatemala to catch more human traffickers. 
\\n\\nWere putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \\n\\nWere securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.', metadata={'source': '../../../state_of_the_union.txt'}), 0.6589478388546668)]\n"
]
}
],
"source": [
"# When we have an existing PG VEctor \n",
"DEFAULT_DISTANCE_STRATEGY = DistanceStrategy.EUCLIDEAN\n",
"db1 = PGVector.from_existing_index(\n",
" embedding=embeddings,\n",
" collection_name=\"state_of_the_union\",\n",
" distance_strategy=DEFAULT_DISTANCE_STRATEGY,\n",
" pre_delete_collection = False,\n",
" connection_string=CONNECTION_STRING,\n",
")\n",
"\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"docs_with_score: List[Tuple[Document, float]] = db1.similarity_search_with_score(query)\n",
"print(docs_with_score)"
]
},
{
"cell_type": "code",
"execution_count": 81,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"--------------------------------------------------------------------------------\n",
"Score: 0.6075870262188066\n",
"Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while youre at it, pass the Disclose Act so Americans can know who is funding our elections. \n",
"\n",
"Tonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n",
"\n",
"One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n",
"\n",
"And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nations top legal minds, who will continue Justice Breyers legacy of excellence.\n",
"--------------------------------------------------------------------------------\n",
"--------------------------------------------------------------------------------\n",
"Score: 0.6075870262188066\n",
"Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while youre at it, pass the Disclose Act so Americans can know who is funding our elections. \n",
"\n",
"Tonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n",
"\n",
"One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n",
"\n",
"And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nations top legal minds, who will continue Justice Breyers legacy of excellence.\n",
"--------------------------------------------------------------------------------\n",
"--------------------------------------------------------------------------------\n",
"Score: 0.6589478388546668\n",
"A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since shes been nominated, shes received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n",
"\n",
"And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n",
"\n",
"We can do both. At our border, weve installed new technology like cutting-edge scanners to better detect drug smuggling. \n",
"\n",
"Weve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \n",
"\n",
"Were putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \n",
"\n",
"Were securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.\n",
"--------------------------------------------------------------------------------\n",
"--------------------------------------------------------------------------------\n",
"Score: 0.6589478388546668\n",
"A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since shes been nominated, shes received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n",
"\n",
"And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n",
"\n",
"We can do both. At our border, weve installed new technology like cutting-edge scanners to better detect drug smuggling. \n",
"\n",
"Weve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \n",
"\n",
"Were putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \n",
"\n",
"Were securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.\n",
"--------------------------------------------------------------------------------\n"
]
}
],
"source": [
"for doc, score in docs_with_score:\n",
" print(\"-\" * 80)\n",
" print(\"Score: \", score)\n",
" print(doc.page_content)\n",
" print(\"-\" * 80)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@@ -297,7 +495,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
"version": "3.9.7"
}
},
"nbformat": 4,


@@ -1,6 +1,7 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "683953b3",
"metadata": {},
@@ -33,6 +34,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "7b2f111b-357a-4f42-9730-ef0603bdc1b5",
"metadata": {},
@@ -49,7 +51,7 @@
},
"outputs": [
{
"name": "stdin",
"name": "stdout",
"output_type": "stream",
"text": [
"OpenAI API Key: ········\n"
@@ -104,6 +106,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "eeead681",
"metadata": {},
@@ -140,6 +143,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "59f0b954",
"metadata": {},
@@ -170,6 +174,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "749658ce",
"metadata": {},
@@ -200,6 +205,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "c9e21ce9",
"metadata": {},
@@ -231,6 +237,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "93540013",
"metadata": {},
@@ -279,6 +286,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "1f9215c8",
"metadata": {
@@ -341,13 +349,15 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "1bda9bf5",
"metadata": {},
"source": [
"## Similarity search with score\n",
"\n",
"Sometimes we might want to perform the search, but also obtain a relevancy score to know how good is a particular result."
"Sometimes we might want to perform the search, but also obtain a relevancy score to know how good is a particular result. \n",
"The returned distance score is cosine distance. Therefore, a lower score is better."
]
},
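For reference, the cosine distance reported here is one minus the cosine similarity of the query and document embeddings, so vectors pointing in the same direction score 0 and orthogonal vectors score 1:

```latex
d_{\cos}(u, v) = 1 - \frac{u \cdot v}{\lVert u \rVert \, \lVert v \rVert}
```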
{
@@ -400,6 +410,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "525e3582",
"metadata": {},
@@ -410,6 +421,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "1c2c58dc",
"metadata": {},
@@ -423,6 +435,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "c58c30bf",
"metadata": {
@@ -503,6 +516,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "691a82d6",
"metadata": {},
@@ -540,6 +554,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "0c851b4f",
"metadata": {},
@@ -602,6 +617,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "0358ecde",
"metadata": {},


@@ -0,0 +1,139 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "2b9582dc",
"metadata": {},
"source": [
"# SingleStoreDB vector search\n",
"[SingleStore DB](https://singlestore.com) is a high-performance distributed database that supports deployment both in the [cloud](https://www.singlestore.com/cloud/) and on-premises. For a significant duration, it has provided support for vector functions such as [dot_product](https://docs.singlestore.com/managed-service/en/reference/sql-reference/vector-functions/dot_product.html), thereby positioning itself as an ideal solution for AI applications that require text similarity matching. \n",
"This tutorial illustrates how to utilize the features of the SingleStore DB Vector Store."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e4a61a4d",
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"# Establishing a connection to the database is facilitated through the singlestoredb Python connector.\n",
"# Please ensure that this connector is installed in your working environment.\n",
"!pip install singlestoredb"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "39a0132a",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import getpass\n",
"\n",
"# We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.\n",
"os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6104fde8",
"metadata": {},
"outputs": [],
"source": [
"from langchain.embeddings.openai import OpenAIEmbeddings\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain.vectorstores import SingleStoreDB\n",
"from langchain.document_loaders import TextLoader"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7b45113c",
"metadata": {},
"outputs": [],
"source": [
"# Load text samples \n",
"from langchain.document_loaders import TextLoader\n",
"loader = TextLoader('../../../state_of_the_union.txt')\n",
"documents = loader.load()\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"docs = text_splitter.split_documents(documents)\n",
"\n",
"embeddings = OpenAIEmbeddings()"
]
},
{
"cell_type": "markdown",
"id": "535b2687",
"metadata": {},
"source": [
"There are several ways to establish a [connection](https://singlestoredb-python.labs.singlestore.com/generated/singlestoredb.connect.html) to the database. You can either set up environment variables or pass named parameters to the `SingleStoreDB constructor`. Alternatively, you may provide these parameters to the `from_documents` and `from_texts` methods."
]
},
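As an alternative to the environment variable used in the next cell, a sketch of passing connection parameters directly; the exact keyword names are assumptions based on the underlying singlestoredb connector and may differ:

```python
# A minimal sketch, assuming connection kwargs are forwarded to the
# singlestoredb connector (parameter names are illustrative).
docsearch = SingleStoreDB.from_documents(
    docs,
    embeddings,
    host="localhost",   # hypothetical connection parameters
    port=3306,
    user="root",
    password="pass",
    database="db",
    table_name="notebook",
)
```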
{
"cell_type": "code",
"execution_count": null,
"id": "d0b316bf",
"metadata": {},
"outputs": [],
"source": [
"# Setup connection url as environment variable\n",
"os.environ['SINGLESTOREDB_URL'] = 'root:pass@localhost:3306/db'\n",
"\n",
"# Load documents to the store\n",
"docsearch = SingleStoreDB.from_documents(\n",
" docs,\n",
" embeddings,\n",
" table_name = \"noteook\", # use table with a custom name \n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0eaa4297",
"metadata": {},
"outputs": [],
"source": [
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"docs = docsearch.similarity_search(query) # Find documents that correspond to the query\n",
"print(docs[0].page_content)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "86efff90",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.2"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -1,6 +1,7 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "683953b3",
"metadata": {},
@@ -9,6 +10,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "cc80fa84-1f2f-48b4-bd39-3e6412f012f1",
"metadata": {},
@@ -85,6 +87,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "69bff365-3039-4ff8-a641-aa190166179d",
"metadata": {},
@@ -236,6 +239,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "18152965",
"metadata": {},
@@ -243,6 +247,15 @@
"## Similarity search with score\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "ea13e80a",
"metadata": {},
"source": [
"The returned distance score is cosine distance. Therefore, a lower score is better."
]
},
{
"cell_type": "code",
"execution_count": 9,
@@ -276,6 +289,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "794a7552",
"metadata": {},


@@ -0,0 +1,199 @@
{
"cells": [
{
"cell_type": "markdown",
"source": [
"# Tigris\n",
"\n",
"> [Tigris](htttps://tigrisdata.com) is an open source Serverless NoSQL Database and Search Platform designed to simplify building high-performance vector search applications.\n",
"> Tigris eliminates the infrastructure complexity of managing, operating, and synchronizing multiple tools, allowing you to focus on building great applications instead."
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "markdown",
"source": [
"This notebook guides you how to use Tigris as your VectorStore"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "markdown",
"source": [
"**Pre requisites**\n",
"1. An OpenAI account. You can sign up for an account [here](https://platform.openai.com/)\n",
"2. [Sign up for a free Tigris account](https://console.preview.tigrisdata.cloud). Once you have signed up for the Tigris account, create a new project called `vectordemo`. Next, make a note of the *Uri* for the region you've created your project in, the **clientId** and **clientSecret**. You can get all this information from the **Application Keys** section of the project."
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "markdown",
"source": [
"Let's first install our dependencies:"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [
"!pip install tigrisdb openapi-schema-pydantic openai tiktoken"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "markdown",
"source": [
"We will load the `OpenAI` api key and `Tigris` credentials in our environment"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [
"import os\n",
"import getpass\n",
"\n",
"os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')\n",
"os.environ['TIGRIS_PROJECT'] = getpass.getpass('Tigris Project Name:')\n",
"os.environ['TIGRIS_CLIENT_ID'] = getpass.getpass('Tigris Client Id:')\n",
"os.environ['TIGRIS_CLIENT_SECRET'] = getpass.getpass('Tigris Client Secret:')"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [
"from langchain.embeddings.openai import OpenAIEmbeddings\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain.vectorstores import Tigris\n",
"from langchain.document_loaders import TextLoader"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "markdown",
"source": [
"### Initialize Tigris vector store\n",
"Let's import our test dataset:"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [
"loader = TextLoader('../../../state_of_the_union.txt')\n",
"documents = loader.load()\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"docs = text_splitter.split_documents(documents)\n",
"\n",
"embeddings = OpenAIEmbeddings()"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [
"vector_store = Tigris.from_documents(docs, embeddings, index_name=\"my_embeddings\")"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "markdown",
"source": [
"### Similarity Search"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"found_docs = vector_store.similarity_search(query)\n",
"print(found_docs)"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "markdown",
"source": [
"### Similarity Search with score (vector distance)"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"result = vector_store.similarity_search_with_score(query)\n",
"for (doc, score) in result:\n",
" print(f\"document={doc}, score={score}\")"
],
"metadata": {
"collapsed": false
}
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 2
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython2",
"version": "2.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 0
}


@@ -1,21 +1,23 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "683953b3",
"metadata": {},
"source": [
"# Vectara\n",
"\n",
">[Vectara](https://Vectara.com/docs/) is a API platform for building LLM-powered applications. It provides a simple to use API for document indexing and query that is managed by Vectara and is optimized for performance and accuracy. \n",
">[Vectara](https://vectara.com/) is a API platform for building LLM-powered applications. It provides a simple to use API for document indexing and query that is managed by Vectara and is optimized for performance and accuracy. \n",
"\n",
"\n",
"This notebook shows how to use functionality related to the `Vectara` vector database. \n",
"\n",
"See the [Vectara API documentation ](https://Vectara.com/docs/) for more information on how to use the API."
"See the [Vectara API documentation ](https://docs.vectara.com/docs/) for more information on how to use the API."
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "7b2f111b-357a-4f42-9730-ef0603bdc1b5",
"metadata": {},
@@ -87,6 +89,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "eeead681",
"metadata": {},
@@ -113,6 +116,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "1f9215c8",
"metadata": {
@@ -169,6 +173,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "1bda9bf5",
"metadata": {},
@@ -222,6 +227,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "691a82d6",
"metadata": {},


@@ -1,6 +1,7 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "683953b3",
"metadata": {},
@@ -47,6 +48,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "6b34828d-e627-4d85-aabd-eeb15d9f4b00",
"metadata": {},
@@ -165,6 +167,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "a15863ee",
"metadata": {},
@@ -172,6 +175,16 @@
"## Similarity search with score"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "64e03db8",
"metadata": {},
"source": [
"Sometimes we might want to perform the search, but also obtain a relevancy score to know how good is a particular result. \n",
"The returned distance score is cosine distance. Therefore, a lower score is better."
]
},
{
"cell_type": "code",
"execution_count": 10,
@@ -214,6 +227,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "05fd146c",
"metadata": {},


@@ -9,16 +9,15 @@ By default, Chains and Agents are stateless,
meaning that they treat each incoming query independently (as do the underlying LLMs and chat models).
In some applications (chatbots being a GREAT example) it is highly important
to remember previous interactions, both at a short-term and at a long-term level.
The concept of “Memory” exists to do exactly that.
**Memory** does exactly that.
LangChain provides memory components in two forms.
First, LangChain provides helper utilities for managing and manipulating previous chat messages.
These are designed to be modular and useful regardless of how they are used.
Secondly, LangChain provides easy ways to incorporate these utilities into chains.
The following sections of documentation are provided:
- `Getting Started <./memory/getting_started.html>`_: An overview of how to get started with different types of memory.
|
- `Getting Started <./memory/getting_started.html>`_: An overview of different types of memory.
- `How-To Guides <./memory/how_to_guides.html>`_: A collection of how-to guides. These highlight different types of memory, as well as how to use memory in chains.
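A minimal sketch of the first form, the message-history utilities (the class and methods match the getting-started notebook shown later in this diff):

```python
from langchain.memory import ChatMessageHistory

# Record a short exchange as structured chat messages.
history = ChatMessageHistory()
history.add_user_message("hi!")
history.add_ai_message("whats up?")

print(history.messages)  # [HumanMessage(content='hi!', ...), AIMessage(content='whats up?', ...)]
```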
@@ -28,6 +27,7 @@ The following sections of documentation are provided:
:maxdepth: 1
:caption: Memory
:name: Memory
:hidden:
./memory/getting_started.ipynb
./memory/getting_started.html
./memory/how_to_guides.rst


@@ -121,7 +121,7 @@
"\n",
"Human: Hi there my friend\n",
"AI: Hi there, how are you doing today?\n",
"Human: Not to bad - how are you?\n",
"Human: Not too bad - how are you?\n",
"Chatbot:\u001b[0m\n",
"\n",
"\u001b[1m> Finished LLMChain chain.\u001b[0m\n"


@@ -5,7 +5,7 @@
"id": "91c6a7ef",
"metadata": {},
"source": [
"# Momento\n",
"# Momento Chat Message History\n",
"\n",
"This notebook goes over how to use [Momento Cache](https://gomomento.com) to store chat message history using the `MomentoChatMessageHistory` class. See the Momento [docs](https://docs.momentohq.com/getting-started) for more detail on how to get set up with Momento.\n",
"\n",
@@ -27,7 +27,7 @@
"\n",
"session_id = \"foo\"\n",
"cache_name = \"langchain\"\n",
"ttl = timedelta(days=1),\n",
"ttl = timedelta(days=1)\n",
"history = MomentoChatMessageHistory.from_client_params(\n",
" session_id, \n",
" cache_name,\n",


@@ -1,6 +1,7 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "d31df93e",
"metadata": {},
@@ -9,7 +10,7 @@
"\n",
"This notebook walks through how LangChain thinks about memory. \n",
"\n",
"Memory involves keeping a concept of state around throughout a user's interactions with an language model. A user's interactions with a language model are captured in the concept of ChatMessages, so this boils down to ingesting, capturing, transforming and extracting knowledge from a sequence of chat messages. There are many different ways to do this, each of which exists as its own memory type.\n",
"Memory involves keeping a concept of state around throughout a user's interactions with a language model. A user's interactions with a language model are captured in the concept of ChatMessages, so this boils down to ingesting, capturing, transforming and extracting knowledge from a sequence of chat messages. There are many different ways to do this, each of which exists as its own memory type.\n",
"\n",
"In general, for each type of memory there are two ways to understanding using memory. These are the standalone functions which extract information from a sequence of messages, and then there is the way you can use this type of memory in a chain. \n",
"\n",
@@ -25,7 +26,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 2,
"id": "87235cf1",
"metadata": {},
"outputs": [],
@@ -41,18 +42,18 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 3,
"id": "be030822",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[HumanMessage(content='hi!', additional_kwargs={}),\n",
" AIMessage(content='whats up?', additional_kwargs={})]"
"[HumanMessage(content='hi!', additional_kwargs={}, example=False),\n",
" AIMessage(content='whats up?', additional_kwargs={}, example=False)]"
]
},
"execution_count": 5,
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
@@ -75,7 +76,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 4,
"id": "a382b160",
"metadata": {},
"outputs": [],
@@ -85,7 +86,7 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 5,
"id": "a280d337",
"metadata": {},
"outputs": [],
@@ -97,7 +98,7 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 7,
"id": "1b739c0a",
"metadata": {},
"outputs": [
@@ -107,7 +108,7 @@
"{'history': 'Human: hi!\\nAI: whats up?'}"
]
},
"execution_count": 12,
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
@@ -126,7 +127,7 @@
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 8,
"id": "798ceb1c",
"metadata": {},
"outputs": [],
@@ -138,18 +139,18 @@
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": 9,
"id": "698688fd",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'history': [HumanMessage(content='hi!', additional_kwargs={}),\n",
" AIMessage(content='whats up?', additional_kwargs={})]}"
"{'history': [HumanMessage(content='hi!', additional_kwargs={}, example=False),\n",
" AIMessage(content='whats up?', additional_kwargs={}, example=False)]}"
]
},
"execution_count": 14,
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
@@ -169,7 +170,7 @@
},
{
"cell_type": "code",
"execution_count": 15,
"execution_count": 10,
"id": "54301321",
"metadata": {},
"outputs": [],
@@ -188,7 +189,7 @@
},
{
"cell_type": "code",
"execution_count": 16,
"execution_count": 11,
"id": "ae046bff",
"metadata": {},
"outputs": [
@@ -216,7 +217,7 @@
"\" Hi there! It's nice to meet you. How can I help you today?\""
]
},
"execution_count": 16,
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
@@ -227,7 +228,7 @@
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": 12,
"id": "d8e2a6ff",
"metadata": {},
"outputs": [
@@ -256,7 +257,7 @@
"\" That's great! It's always nice to have a conversation with someone new. What would you like to talk about?\""
]
},
"execution_count": 17,
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
@@ -267,7 +268,7 @@
},
{
"cell_type": "code",
"execution_count": 18,
"execution_count": 13,
"id": "15eda316",
"metadata": {},
"outputs": [
@@ -298,7 +299,7 @@
"\" Sure! I'm an AI created to help people with their everyday tasks. I'm programmed to understand natural language and provide helpful information. I'm also constantly learning and updating my knowledge base so I can provide more accurate and helpful answers.\""
]
},
"execution_count": 18,
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
@@ -319,7 +320,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 14,
"id": "b5acbc4b",
"metadata": {},
"outputs": [],
@@ -338,7 +339,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 15,
"id": "7812ee21",
"metadata": {},
"outputs": [],
@@ -348,18 +349,20 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 16,
"id": "3ed6e6a0",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'type': 'human', 'data': {'content': 'hi!', 'additional_kwargs': {}}},\n",
" {'type': 'ai', 'data': {'content': 'whats up?', 'additional_kwargs': {}}}]"
"[{'type': 'human',\n",
" 'data': {'content': 'hi!', 'additional_kwargs': {}, 'example': False}},\n",
" {'type': 'ai',\n",
" 'data': {'content': 'whats up?', 'additional_kwargs': {}, 'example': False}}]"
]
},
"execution_count": 3,
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
@@ -370,7 +373,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 17,
"id": "cdf4ebd2",
"metadata": {},
"outputs": [],
@@ -380,18 +383,18 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 18,
"id": "9724e24b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[HumanMessage(content='hi!', additional_kwargs={}),\n",
" AIMessage(content='whats up?', additional_kwargs={})]"
"[HumanMessage(content='hi!', additional_kwargs={}, example=False),\n",
" AIMessage(content='whats up?', additional_kwargs={}, example=False)]"
]
},
"execution_count": 5,
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
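For reference, a minimal sketch of the dict round-trip these outputs come from, using the schema helpers this notebook imports:

```python
from langchain.memory import ChatMessageHistory
from langchain.schema import messages_from_dict, messages_to_dict

history = ChatMessageHistory()
history.add_user_message("hi!")
history.add_ai_message("whats up?")

# Serialize to plain dicts (e.g. for JSON storage), then restore.
dicts = messages_to_dict(history.messages)
restored = messages_from_dict(dicts)
assert restored == history.messages
```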
@@ -407,14 +410,6 @@
"source": [
"And that's it for the getting started! There are plenty of different types of memory, check out our examples to see them all"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3dd37d93",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@@ -433,7 +428,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.9"
}
},
"nbformat": 4,


@@ -11,38 +11,28 @@ but we have individual pages for each model type.
The pages contain more detailed "how-to" guides for working with that model,
as well as a list of different model providers.
**LLMs**
Large Language Models (LLMs) are the first type of models we cover.
These models take a text string as input, and return a text string as output.
|
- `Getting Started <./models/getting_started.html>`_: An overview of the models.
**Chat Models**
Model Types
-----------
Chat Models are the second type of models we cover.
These models are usually backed by a language model, but their APIs are more structured.
Specifically, these models take a list of Chat Messages as input, and return a Chat Message.
- `LLMs <./models/llms.html>`_: **Large Language Models (LLMs)** take a text string as input and return a text string as output.
**Text Embedding Models**
- `Chat Models <./models/chat.html>`_: **Chat Models** are usually backed by a language model, but their APIs are more structured.
Specifically, these models take a list of Chat Messages as input, and return a Chat Message.
The third type of models we cover are text embedding models.
These models take text as input and return a list of floats.
- `Text Embedding Models <./models/text_embedding.html>`_: **Text embedding models** take text as input and return a list of floats.
Getting Started
---------------
.. toctree::
:maxdepth: 1
./models/getting_started.ipynb
Go Deeper
---------
.. toctree::
:maxdepth: 1
:caption: Models
:name: models
:hidden:
./models/getting_started.html
./models/llms.rst
./models/chat.rst
./models/text_embedding.rst


@@ -7,7 +7,12 @@
"source": [
"# Anthropic\n",
"\n",
"This notebook covers how to get started with Anthropic chat models."
"\n",
">[Anthropic](https://en.wikipedia.org/wiki/Anthropic) is an American artificial intelligence (AI) startup and \n",
"> public-benefit corporation, founded by former members of OpenAI. `Anthropic` specializes in developing general AI \n",
"> systems and language models, with a company ethos of responsible AI usage.\n",
"> `Anthropic` develops a chatbot, named `Claude`. Similar to `ChatGPT`, `Claude` uses a messaging \n",
"> interface where users can submit questions or requests and receive highly detailed and relevant responses.\n"
]
},
{
@@ -171,7 +176,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
"version": "3.10.6"
}
},
"nbformat": 4,


@@ -4,9 +4,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Google Cloud Platform Vertex AI PaLM \n",
"# Google Vertex AI PaLM \n",
"\n",
"Note: This is seperate from the Google PaLM integration. Google has chosen to offer an enterprise version of PaLM through GCP, and this supports the models made available through there. \n",
">[Vertex AI](https://cloud.google.com/vertex-ai/docs/start/introduction-unified-platform) is a machine learning (ML) \n",
"> platform that lets you train and deploy ML models and AI applications. \n",
"> `Vertex AI` combines data engineering, data science, and ML engineering workflows, enabling your teams to \n",
"> collaborate using a common toolset.\n",
"\n",
"**Note:** This is seperate from the Google PaLM integration. Google has chosen to offer an enterprise version of PaLM through GCP, and this supports the models made available through there. \n",
"\n",
"PaLM API on Vertex AI is a Preview offering, subject to the Pre-GA Offerings Terms of the [GCP Service Specific Terms](https://cloud.google.com/terms/service-terms). \n",
"\n",
@@ -157,7 +162,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.6"
},
"vscode": {
"interpreter": {


@@ -1,18 +1,19 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "959300d4",
"metadata": {},
"source": [
"# PromptLayer ChatOpenAI\n",
"\n",
"This example showcases how to connect to [PromptLayer](https://www.promptlayer.com) to start recording your ChatOpenAI requests."
">[PromptLayer](https://docs.promptlayer.com/what-is-promptlayer/wxpF9EZkUwvdkwvVE9XEvC/how-promptlayer-works/dvgGSxNe6nB1jj8mUVbG8r) \n",
"> is a devtool that allows you to track, manage, and share your GPT prompt engineering. \n",
"> It acts as a middleware between your code and OpenAI's python library, recording all your API requests \n",
"> and saving relevant metadata for easy exploration and search in the [PromptLayer](https://www.promptlayer.com) dashboard."
]
},
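A minimal sketch of the recording flow described above, matching the cells later in this notebook; the tag value and placeholder key are illustrative:

```python
import os
from langchain.chat_models import PromptLayerChatOpenAI
from langchain.schema import HumanMessage

os.environ["PROMPTLAYER_API_KEY"] = "..."  # your PromptLayer API key

# Requests made through this chat model are recorded to the
# PromptLayer dashboard, tagged for easy search.
chat = PromptLayerChatOpenAI(pl_tags=["langchain"])
print(chat([HumanMessage(content="I am a cat and I want")]))
```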
{
"attachments": {},
"cell_type": "markdown",
"id": "6a45943e",
"metadata": {},
@@ -56,7 +57,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "8564ce7d",
"metadata": {},
@@ -78,7 +78,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "bf0294de",
"metadata": {},
@@ -110,7 +109,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "a2d76826",
"metadata": {},
@@ -125,7 +123,6 @@
"source": []
},
{
"attachments": {},
"cell_type": "markdown",
"id": "c43803d1",
"metadata": {},
@@ -161,7 +158,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "base",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@@ -175,7 +172,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.8 (default, Apr 13 2021, 12:59:45) \n[Clang 10.0.0 ]"
"version": "3.10.6"
},
"vscode": {
"interpreter": {


@@ -0,0 +1,103 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "9597802c",
"metadata": {},
"source": [
"# Aviary\n",
"\n",
"[Aviary](https://www.anyscale.com/) is an open source tooklit for evaluating and deploying production open source LLMs. \n",
"\n",
"This example goes over how to use LangChain to interact with `Aviary`. You can try Aviary out [https://aviary.anyscale.com](here).\n",
"\n",
"You can find out more about Aviary at https://github.com/ray-project/aviary. \n",
"\n",
"One Aviary instance can serve multiple models. You can get a list of the available models by using the cli:\n",
"\n",
"`% aviary models`\n",
"\n",
"Or you can connect directly to the endpoint and get a list of available models by using the `/models` endpoint.\n",
"\n",
"The constructor requires a url for an Aviary backend, and optionally a token to validate the connection. \n",
"\n"
]
},
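A sketch of hitting the `/models` endpoint directly; the URL scheme and bearer-token header are assumptions about a typical deployment, not a documented contract:

```python
import os
import requests

# Assumed: the backend serves GET <AVIARY_URL>/models and accepts the
# token as a bearer header; check your deployment for the exact contract.
url = os.environ["AVIARY_URL"].rstrip("/") + "/models"
headers = {"Authorization": f"Bearer {os.environ['AVIARY_TOKEN']}"}

response = requests.get(url, headers=headers)
print(response.json())  # models served by this Aviary instance
```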
{
"cell_type": "code",
"execution_count": 4,
"id": "6fb585dd",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"import os\n",
"from langchain.llms import Aviary\n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "3fec5a59",
"metadata": {},
"outputs": [],
"source": [
"llm = Aviary(model='amazon/LightGPT', aviary_url=os.environ['AVIARY_URL'], aviary_token=os.environ['AVIARY_TOKEN'])"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "4efd54dd",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Love is an emotion that involves feelings of attraction, affection and empathy for another person. It can also refer to a deep bond between two people or groups of people. Love can be expressed in many different ways, such as through words, actions, gestures, music, art, literature, and other forms of communication.\n"
]
}
],
"source": [
"result = llm.predict('What is the meaning of love?')\n",
"print(result) "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "27e526b6",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.15"
},
"vscode": {
"interpreter": {
"hash": "a0a0263b650d907a3bfe41c0f8d6a63a071b884df3cfdc1579f00cdc1aed6b03"
}
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -17,8 +17,8 @@
"```bash\n",
"# Set this to `azure`\n",
"export OPENAI_API_TYPE=azure\n",
"# The API version you want to use: set this to `2022-12-01` for the released version.\n",
"export OPENAI_API_VERSION=2022-12-01\n",
"# The API version you want to use: set this to `2023-03-15-preview` for the released version.\n",
"export OPENAI_API_VERSION=2023-03-15-preview\n",
"# The base URL for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource.\n",
"export OPENAI_API_BASE=https://your-resource-name.openai.azure.com\n",
"# The API key for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource.\n",
@@ -70,7 +70,7 @@
"source": [
"import os\n",
"os.environ[\"OPENAI_API_TYPE\"] = \"azure\"\n",
"os.environ[\"OPENAI_API_VERSION\"] = \"2022-12-01\"\n",
"os.environ[\"OPENAI_API_VERSION\"] = \"2023-03-15-preview\"\n",
"os.environ[\"OPENAI_API_BASE\"] = \"...\"\n",
"os.environ[\"OPENAI_API_KEY\"] = \"...\""
]


@@ -6,7 +6,11 @@
"id": "J-yvaDTmTTza"
},
"source": [
"# Beam integration for langchain\n",
"# Beam\n",
"\n",
">[Beam](https://docs.beam.cloud/introduction) makes it easy to run code on GPUs, deploy scalable web APIs, \n",
"> schedule cron jobs, and run massively parallel workloads — without managing any infrastructure.\n",
"\n",
"\n",
"Calls the Beam API wrapper to deploy and make subsequent calls to an instance of the gpt2 LLM in a cloud deployment. Requires installation of the Beam library and registration of Beam Client ID and Client Secret. By calling the wrapper an instance of the model is created and run, with returned text relating to the prompt. Additional calls can then be made by directly calling the Beam API.\n",
"\n",
@@ -151,9 +155,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
"version": "3.10.6"
}
},
"nbformat": 4,
"nbformat_minor": 1
"nbformat_minor": 4
}

Some files were not shown because too many files have changed in this diff.