Compare commits

...

208 Commits

Author SHA1 Message Date
Davis Chase
2d5588c5f0 bump 179 (#5200) 2023-05-24 07:55:27 -07:00
Saba Sturua
47e4ee4370 adjust docarray docstrings (#5185)
Follow up of https://github.com/hwchase17/langchain/pull/5015

Thanks for catching this! 

Just a small PR to adjust a couple of strings to match these changes

Signed-off-by: jupyterjazz <saba.sturua@jina.ai>
2023-05-24 07:50:35 -07:00
Jeff Vestal
cf19a2a59f example usage (#5182)
Adding example usage for elasticsearch knn embeddings
[per](https://github.com/hwchase17/langchain/pull/3401#issuecomment-1548518389)


https://github.com/hwchase17/langchain/blob/master/langchain/embeddings/elasticsearch.py
2023-05-24 07:47:15 -07:00
Ikko Eltociear Ashimine
fff21a0b35 Update rellm_experimental.ipynb (#5189)

HuggingFace -> Hugging Face
2023-05-24 11:41:00 +00:00
Nolan Tremelling
faa26650c9 Beam (#4996)
# Beam

Calls the Beam API wrapper to deploy and make subsequent calls to an
instance of the gpt2 LLM in a cloud deployment. Requires installation of
the Beam library and registration of Beam Client ID and Client Secret.
Additional calls can then be made through the instance of the large
language model in your code or by calling the Beam API.

---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-24 01:25:18 -07:00
Ofer Mendelevitch
c81fb88035 Vectara (#5069)
# Vectara Integration

This PR provides integration with Vectara. Implemented here are:
* langchain/vectorstore/vectara.py
* tests/integration_tests/vectorstores/test_vectara.py
* langchain/retrievers/vectara_retriever.py
And two IPYNB notebooks to do more testing:
* docs/modules/chains/index_examples/vectara_text_generation.ipynb
* docs/modules/indexes/vectorstores/examples/vectara.ipynb

---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-24 01:24:58 -07:00
Jason Bosco
9c4b43b494 Add Typesense vector store (#1674)
Closes #931.

---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-23 23:20:45 -07:00
Leonid Ganeline
33929489b9 docs: added missed document_loaders examples (#5150)
# DOCS added missed document_loader examples

Added missed examples: `JSON`, `Open Document Format (ODT)`,
`Wikipedia`, `tomarkdown`.
Updated them to a consistent format.

## Who can review?

@hwchase17 
@dev2049
2023-05-23 21:56:41 -07:00
Daniel Quinteros
c111134a55 Clarification of the reference to the "get_text_legth" function in ge… (#5154)
# Clarification of the reference to the "get_text_legth" function in
getting_started.md

Reference to the function "get_text_legth" in the documentation did not
make sense. Comment added for clarification.

@hwchase17
2023-05-23 20:43:38 -07:00
Daniel Quinteros
de4ef24f75 Docs: updated getting_started.md (#5151)
# Docs: updated getting_started.md

Just removing some unnecessary spaces in the example of "pass few
shot examples to a prompt template".

@vowelparrot
2023-05-23 20:43:26 -07:00
mbchang
b1b7f3541c fix: fix current_time=Now bug for aadd_documents in TimeWeightedRetriever (#5155)
# Same as PR #5045, but for async

Fixes #4825 

I had forgotten to update the asynchronous counterpart `aadd_documents`
with the bug fix from PR #5045, so this PR fixes `aadd_documents` too.

## Who can review?

Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:
@dev2049

2023-05-23 20:31:45 -07:00
Jeremiah Lowin
925dd3e59e Add async versions of predict() and predict_messages() (#4867)
# Add async versions of predict() and predict_messages()

#4615 introduced a unifying interface for "base" and "chat" LLM models
via the new `predict()` and `predict_messages()` methods that allow both
types of models to operate on string and message-based inputs,
respectively.

This PR adds async versions of the same (`apredict()` and
`apredict_messages()`) that are identical except for their use of
`agenerate()` in place of `generate()`, which means they repurpose all
existing work on the async backend.
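
A minimal usage sketch of the new method (an OpenAI API key in the environment is assumed; the model and prompt are illustrative):

```python
import asyncio

from langchain.chat_models import ChatOpenAI

async def main() -> None:
    llm = ChatOpenAI()
    # apredict mirrors predict but awaits agenerate under the hood
    print(await llm.apredict("Say hello in French."))

asyncio.run(main())
```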


## Who can review?

Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:
        @hwchase17 (follows his work on #4615)
        @agola11 (async)

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-05-23 17:22:49 -07:00
Junlin Zhou
9242998db1 Empty check before pop (#4929)
# Check whether 'other' is empty before popping

This PR could fix a potential 'popping empty set' error.
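
A minimal illustration of the guard described (the variable name follows the PR title; this is a sketch, not the actual diff):

```python
other: set = set()

# set.pop() on an empty set raises KeyError, so check truthiness first
if other:
    other.pop()
```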

Co-authored-by: Junlin Zhou <jlzhou@zjuici.com>
2023-05-23 16:46:50 -07:00
Daniel King
de6e6c764e Add MosaicML inference endpoints (#4607)
# Add MosaicML inference endpoints
This PR adds support in langchain for MosaicML inference endpoints. We
both serve a select few open source models, and allow customers to
deploy their own models using our inference service. Docs are here
(https://docs.mosaicml.com/en/latest/inference.html), and sign up form
is here (https://forms.mosaicml.com/demo?utm_source=langchain). I'm not
intimately familiar with the details of langchain, or the contribution
process, so please let me know if there is anything that needs fixing or
this is the wrong way to submit a new integration, thanks!

I'm also not sure what the procedure is for integration tests. I have
tested locally with my api key.

## Who can review?
@hwchase17

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-05-23 15:59:08 -07:00
Adheeban Manoharan
68f0d45485 Adding Weather Loader (#5056)
Co-authored-by: Tyler Hutcherson <tyler.hutcherson@redis.com>
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-23 15:57:33 -07:00
Jeff Vestal
0b542a9706 Add ElasticsearchEmbeddings class for generating embeddings using Elasticsearch models (#3401)
This PR introduces a new module, `elasticsearch_embeddings.py`, which
provides a wrapper around Elasticsearch embedding models. The new
ElasticsearchEmbeddings class allows users to generate embeddings for
documents and query texts using a [model deployed in an Elasticsearch
cluster](https://www.elastic.co/guide/en/machine-learning/current/ml-nlp-model-ref.html#ml-nlp-model-ref-text-embedding).

### Main features:

1. The ElasticsearchEmbeddings class initializes with an Elasticsearch
connection object and a model_id, providing an interface to interact
with the Elasticsearch ML client through
[infer_trained_model](https://elasticsearch-py.readthedocs.io/en/v8.7.0/api.html?highlight=trained%20model%20infer#elasticsearch.client.MlClient.infer_trained_model).
2. The `embed_documents()` method generates embeddings for a list of
documents, and the `embed_query()` method generates an embedding for a
single query text.
3. The class supports custom input text field names in case the deployed
model expects a different field name than the default `text_field`.
4. The implementation is compatible with any model deployed in
Elasticsearch that generates embeddings as output.
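
A hedged usage sketch based on the description above (the connection setup is a placeholder, and the exact constructor signature may differ from the final API):

```python
from elasticsearch import Elasticsearch
from langchain.embeddings import ElasticsearchEmbeddings

# Placeholder connection; point this at a cluster with a deployed model.
es_connection = Elasticsearch("http://localhost:9200")

embeddings = ElasticsearchEmbeddings(es_connection, model_id="my-embedding-model")
doc_vectors = embeddings.embed_documents(["first document", "second document"])
query_vector = embeddings.embed_query("a search query")
```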

### Benefits:

1. Simplifies the process of generating embeddings using Elasticsearch
models.
2. Provides a clean and intuitive interface to interact with the
Elasticsearch ML client.
3. Allows users to easily integrate Elasticsearch-generated embeddings.

Related issue https://github.com/hwchase17/langchain/issues/3400

---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-23 14:50:33 -07:00
Theodore Rolle
754b5133e9 Improve PlanningOutputParser whitespace handling (#5143)
Some LLMs will produce numbered lists with leading whitespace, e.g. in
response to "What is the sum of 2 and 3?":
```
Plan:
  1. Add 2 and 3.
  2. Given the above steps taken, please respond to the users original question.
```
This commit updates the PlanningOutputParser regex to ignore leading
whitespace before the step number, enabling it to correctly parse this
format.
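
A small self-contained illustration of the change described (the patterns are simplified stand-ins, not the exact PlanningOutputParser regex):

```python
import re

strict = re.compile(r"^\d+\.")      # misses indented steps
lenient = re.compile(r"^\s*\d+\.")  # tolerates leading whitespace

for line in ["1. Add 2 and 3.", "  2. Respond to the user."]:
    print(bool(strict.match(line)), bool(lenient.match(line)))
# prints "True True" for the first line, "False True" for the second
```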
2023-05-23 12:47:26 -07:00
Tommaso De Lorenzo
5002f3ae35 solving #2887 (#5127)
# Allowing OpenAI fine-tuned models
Very simple fix that checks whether an OpenAI `model_name` is a
fine-tuned model when loading `context_size` and when computing a call's
cost in the `openai_callback`.
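
A hedged sketch of the kind of check involved (the helper name is illustrative; the naming convention shown is the legacy OpenAI fine-tune format):

```python
def base_model_name(model_name: str) -> str:
    """Map a legacy OpenAI fine-tuned model name to its base model."""
    # e.g. "davinci:ft-your-org:custom-2023-05-22" -> "davinci"
    if ":ft-" in model_name:
        return model_name.split(":")[0]
    return model_name

assert base_model_name("davinci:ft-acme:demo-2023-05-22") == "davinci"
assert base_model_name("text-davinci-003") == "text-davinci-003"
```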

Fixes #2887 
---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-23 11:18:03 -07:00
Myeongseop Kim
7a75bb2121 docs: fix minor typo + add wikipedia package installation part in human_input_llm.ipynb (#5118)
# Fix typo + add wikipedia package installation part in
human_input_llm.ipynb
This PR
1. Fixes a typo ("the the human input LLM"),
2. Adds the wikipedia package installation part (in accordance with
`WikipediaQueryRun`
[documentation](https://python.langchain.com/en/latest/modules/agents/tools/examples/wikipedia.html))

in `human_input_llm.ipynb`
(`docs/modules/models/llms/examples/human_input_llm.ipynb`)
2023-05-23 10:59:30 -07:00
Davis Chase
753f4cfc26 bump 178 (#5130) 2023-05-23 07:43:56 -07:00
Ayan Bandyopadhyay
5c87dbf5a8 Add link to Psychic from document loaders documentation page (#5115)
# Add link to Psychic from document loaders documentation page

In my previous PR I forgot to update `document_loaders.rst` to link to
`psychic.ipynb` to make it discoverable from the main documentation.
2023-05-23 06:47:23 -07:00
Tian Wei
d7f807b71f Add AzureCognitiveServicesToolkit to call Azure Cognitive Services API (#5012)
# Add AzureCognitiveServicesToolkit to call Azure Cognitive Services
API: achieve some multimodal capabilities

This PR adds a toolkit named AzureCognitiveServicesToolkit which bundles
the following tools:
- AzureCogsImageAnalysisTool: calls Azure Cognitive Services image
analysis API to extract caption, objects, tags, and text from images.
- AzureCogsFormRecognizerTool: calls Azure Cognitive Services form
recognizer API to extract text, tables, and key-value pairs from
documents.
- AzureCogsSpeech2TextTool: calls Azure Cognitive Services speech to
text API to transcribe speech to text.
- AzureCogsText2SpeechTool: calls Azure Cognitive Services text to
speech API to synthesize text to speech.

This toolkit can be used to process image, document, and audio inputs.
---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-23 06:45:48 -07:00
Jamie Broomall
d4fd589638 WhyLabs callback (#4906)
# Add a WhyLabs callback handler

* Adds a simple WhyLabsCallbackHandler
* Add required dependencies as optional
* protect against missing modules with imports
* Add docs/ecosystem basic example

based on initial prototype from @andrewelizondo

> this integration gathers privacy-preserving telemetry on text with
whylogs and sends statistical profiles to the WhyLabs platform to monitor
these metrics over time. For more information on what WhyLabs is, see:
https://whylabs.ai

After you run the notebook (if you have env variables set for the API
Keys, org_id and dataset_id) you get something like this in WhyLabs:
![Screenshot
(443)](https://github.com/hwchase17/langchain/assets/88007022/6bdb3e1c-4243-4ae8-b974-23a8bb12edac)

Co-authored-by: Andre Elizondo <andre@whylabs.ai>
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-22 20:29:47 -07:00
Eugene Yurtsev
d56313acba Improve efficiency of TextSplitter.split_documents, iterate once (#5111)
# Improve TextSplitter.split_documents, collect page_content and
metadata in one iteration

## Who can review?

Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:

@eyurtsev In the case where documents is a generator that can only be
iterated once, making this change is a huge help. Otherwise a silent
issue happens where metadata is empty for all documents when documents
is a generator. So we widen the argument type from `List[Document]` to
`Union[Iterable[Document], Sequence[Document]]`.

---------

Co-authored-by: Steven Tartakovsky <tartakovsky.developer@gmail.com>
2023-05-22 23:00:24 -04:00
Jettro Coenradie
b950022894 Fixes issue #5072 - adds additional support to Weaviate (#5085)
Implementation is similar to search_distance and where_filter

# adds 'additional' support to Weaviate queries

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-22 18:57:10 -07:00
Zander Chase
87bba2e8d3 Pass Dataset Name by Name not Position (#5108)
Pass dataset name by name
2023-05-23 01:21:39 +00:00
Matt Rickard
de6a401a22 Add OpenLM LLM multi-provider (#4993)
OpenLM is a zero-dependency OpenAI-compatible LLM provider that can call
different inference endpoints directly via HTTP. It implements the
OpenAI Completion class so that it can be used as a drop-in replacement
for the OpenAI API. This changeset utilizes BaseOpenAI for minimal added
code.
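
A hedged usage sketch (provider credentials in the environment are assumed; the model name is illustrative):

```python
from langchain.llms import OpenLM

# OpenLM accepts OpenAI-style model names and issues the HTTP call itself
llm = OpenLM(model="text-davinci-003")
print(llm("What is the capital of France?"))
```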

---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-22 18:09:53 -07:00
Gergely Imreh
69de33e024 Add Mastodon toots loader (#5036)
# Add Mastodon toots loader.

Loader works with either public toots or Mastodon app credentials. Toot
text and user info are loaded.

I've also added an integration test for this new loader, since it works
with public data, and a notebook with example output from a run.

---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-22 16:43:07 -07:00
mbchang
e173e032bc fix: assign current_time to datetime.now() if current_time is None (#5045)
# Assign `current_time` to `datetime.now()` if `current_time` is None
in `time_weighted_retriever`

Fixes #4825 

As implemented, `add_documents` in `TimeWeightedVectorStoreRetriever`
assigns `doc.metadata["last_accessed_at"]` and
`doc.metadata["created_at"]` to `datetime.datetime.now()` if
`current_time` is not in `kwargs`.
```python
    def add_documents(self, documents: List[Document], **kwargs: Any) -> List[str]:
        """Add documents to vectorstore."""
        current_time = kwargs.get("current_time", datetime.datetime.now())
        # Avoid mutating input documents
        dup_docs = [deepcopy(d) for d in documents]
        for i, doc in enumerate(dup_docs):
            if "last_accessed_at" not in doc.metadata:
                doc.metadata["last_accessed_at"] = current_time
            if "created_at" not in doc.metadata:
                doc.metadata["created_at"] = current_time
            doc.metadata["buffer_idx"] = len(self.memory_stream) + i
        self.memory_stream.extend(dup_docs)
        return self.vectorstore.add_documents(dup_docs, **kwargs)
``` 
However, from the way `add_documents` is being called from
`GenerativeAgentMemory`, `current_time` is set as a `kwarg`, but it is
given a value of `None`:
```python
    def add_memory(
        self, memory_content: str, now: Optional[datetime] = None
    ) -> List[str]:
        """Add an observation or memory to the agent's memory."""
        importance_score = self._score_memory_importance(memory_content)
        self.aggregate_importance += importance_score
        document = Document(
            page_content=memory_content, metadata={"importance": importance_score}
        )
        result = self.memory_retriever.add_documents([document], current_time=now)
```
The default of `now` was set in #4658 to be None. The proposed fix is
the following:
```python
    def add_documents(self, documents: List[Document], **kwargs: Any) -> List[str]:
        """Add documents to vectorstore."""
        current_time = kwargs.get("current_time", datetime.datetime.now())
        # `current_time` may exist in kwargs, but may still have the value of None.
        if current_time is None:
            current_time = datetime.datetime.now()
```
Alternatively, we could just set the default of `now` to be
`datetime.datetime.now()` everywhere instead. Thoughts @hwchase17? If we
still want to keep the default to be `None`, then this PR should fix the
above issue. If we want to set the default to be
`datetime.datetime.now()` instead, I can update this PR with that
alternative fix. EDIT: seems like from #5018 it looks like we would
prefer to keep the default to be `None`, in which case this PR should
fix the error.
2023-05-22 15:47:03 -07:00
Leonid Ganeline
c28cc0f1ac changed ValueError to ImportError (#5103)
# changed ValueError to ImportError

Code cleaning.
Fixed inconsistencies in ImportError handling. Sometimes it raised
ImportError and sometimes ValueError.
I've changed all such cases to `raise ImportError`.
Also:
- added installation instructions to error messages where they were missing;
- fixed several installation instructions in the error messages;
- fixed several error-handling issues related to ImportError
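
The pattern involved, as a minimal sketch (the package name and message are illustrative):

```python
def _import_faiss():
    try:
        import faiss
    except ImportError:
        # Raise ImportError (not ValueError) with an installation hint,
        # so callers catching ImportError behave consistently.
        raise ImportError(
            "Could not import faiss python package. "
            "Please install it with `pip install faiss-cpu`."
        )
    return faiss
```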
2023-05-22 15:24:45 -07:00
venetisgr
5e47c648ed Update serpapi.py (#4947)
Added link option in _process_response

In _process_response, "snippet" provided non-working links for the case
where "links" had the correct answer, so an elif statement was added before
snippet.

In _process_response the link provided correct answers while the snippet
reply provided non-working links.

@vowelparrot 

## Who can review?

Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:


---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-22 13:34:36 -07:00
Ankit Arya
5b2b436fab Fixed import error for AutoGPT e.g. from langchain.experimental.auton… (#5101)
`from langchain.experimental.autonomous_agents.autogpt.agent import
AutoGPT` results in an import error as AutoGPT is not defined in the
__init__.py file

https://python.langchain.com/en/latest/use_cases/autonomous_agents/marathon_times.html

An alternate way would be to directly update the import statement to
`from langchain.experimental import AutoGPT`

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-22 13:26:25 -07:00
Ankush Gola
467ca6f025 update langchainplus client and docker file to reflect port changes (#5005)
# Currently, only the dev images are updated
2023-05-22 12:53:05 -07:00
Shawn91
9e649462ce fix: add_texts method of Weaviate vector store creates wrong embeddings (#4933)
# fix a bug in the add_texts method of the Weaviate vector store that creates
wrong embeddings

The following is the original code in the `add_texts` method of the
Weaviate vector store, from line 131 to 153, which contains a bug. The
code here includes some extra explanations in the form of comments and
some omissions.

```python
            for i, doc in enumerate(texts):

                # some code omitted

                if self._embedding is not None:
                    # variable texts is a list of strings and doc here is just a string.
                    # list(doc) actually breaks the string up into characters,
                    # so embeddings[0] is just the embedding of the first character
                    embeddings = self._embedding.embed_documents(list(doc))
                    batch.add_data_object(
                        data_object=data_properties,
                        class_name=self._index_name,
                        uuid=_id,
                        vector=embeddings[0],
                    )
```

To fix this bug, I pulled the embedding operation out of the for loop
and embedded all texts at once.
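
A simplified, self-contained sketch of that fix (`embed_fn` and `add_fn` are stand-ins for the embedding model and the Weaviate batch call):

```python
from typing import Callable, List

def add_texts_fixed(
    texts: List[str],
    embed_fn: Callable[[List[str]], List[List[float]]],
    add_fn: Callable[[str, List[float]], None],
) -> None:
    # One embedding call for the whole batch, instead of embedding
    # list(doc) -- i.e. the characters of each string -- per iteration.
    embeddings = embed_fn(texts)
    for text, vector in zip(texts, embeddings):
        add_fn(text, vector)
```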

Co-authored-by: Shawn91 <zyx199199@qq.com>
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-22 12:35:52 -07:00
Eduard van Valkenburg
1cb04f2b26 PowerBI major refinement in working of tool and tweaks in the rest (#5090)
# PowerBI major refinement in working of tool and tweaks in the rest

I've gained some experience with more complex datasets, and the earlier
implementation had the agent make too many attempts to create DAX, so I
refactored the code to have the LLM create DAX based on a question and
then immediately run it against the dataset, with retries and a prompt
that includes the error for the retry. This works much better!

Also did some other refactoring of the inner workings, making things
clearer, more concise and faster.
2023-05-22 11:58:28 -07:00
hwaking
e57ebf3922 add get_top_k_cosine_similarity method to get max top k score and index (#5059)
# Row-wise cosine similarity between two equal-width matrices, returning
the max top_k scores and indices, with all scores greater than
threshold_score.

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-22 11:55:48 -07:00
Donger
039f8f1abb Add the usage of SSL certificates for Elasticsearch and user password authentication (#5058)
Enhance the code to support SSL authentication for Elasticsearch when
using the VectorStore module, as previous versions did not provide this
capability.
@dev2049

---------

Co-authored-by: caidong <zhucaidong1992@gmail.com>
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-22 11:51:32 -07:00
Andreas Liebschner
44dc959584 Improve pinecone hybrid search retriever adding metadata support (#5098)
# Improve pinecone hybrid search retriever adding metadata support

I simply remove the hardwiring of metadata in the existing
implementation, allowing one to pass a `metadatas` attribute to the
constructors and in `get_relevant_documents`. I also add one missing pip
install to the accompanying notebook (I am not adding dependencies; they
were pre-existing).

First contribution, just hoping to help, feel free to critique :) 
my twitter username is `@andreliebschner`

While looking at hybrid search I noticed #3043 and #1743. I think the
former can be closed, as following the example right now (even prior to
my improvements) works just fine; the latter I think can also be closed
safely, maybe pointing out the relevant classes and example. Should I
reply to those issues, mentioning someone?

@dev2049, @hwchase17

---------

Co-authored-by: Andreas Liebschner <a.liebschner@shopfully.com>
2023-05-22 11:42:54 -07:00
Deepak S V
5cd12102be Improving Resilience of MRKL Agent (#5014)
This is a highly optimized update to the pull request
https://github.com/hwchase17/langchain/pull/3269

Summary:
1) Added the ability for the MRKL agent to self-solve the
ValueError(f"Could not parse LLM output: `{llm_output}`") error, whenever
the llm (especially gpt-3.5-turbo) does not follow the format of the MRKL
Agent when returning "Action:" & "Action Input:".
2) The way I am solving this error is by responding back to the llm with
the messages "Invalid Format: Missing 'Action:' after 'Thought:'" &
"Invalid Format: Missing 'Action Input:' after 'Action:'" whenever
Action: and Action Input: are not present in the llm output,
respectively.

For a detailed explanation, look at the previous pull request.

New Updates:
1) Since @hwchase17 requested in the previous PR that the self-correction
(error) message be communicated using the OutputParserException, I have
added a new ability to the OutputParserException class to store the
observation & previous llm_output in order to communicate them to the
next Agent's prompt. This is done without breaking/modifying any of the
functionality OutputParserException previously performs (i.e.
OutputParserException can be used in the same way as before, without
passing any observation & previous llm_output).

---------

Co-authored-by: Deepak S V <svdeepak99@users.noreply.github.com>
2023-05-22 11:08:08 -07:00
Michael Landis
6eacd88ae7 fix: revert docarray explicit transitive dependencies and use extras instead (#5015)
tldr: The docarray [integration
PR](https://github.com/hwchase17/langchain/pull/4483) introduced a
pinned dependency to protobuf. This is a docarray dependency, not a
langchain dependency. Since this is handled by the docarray
dependencies, it is unnecessary here.

Further, as a pinned dependency, this quickly leads to incompatibilities
with application code that consumes the library, all the more so with a
heavily used library like protobuf.

Detail: as we see in the [docarray
integration](https://github.com/hwchase17/langchain/pull/4483/files#diff-50c86b7ed8ac2cf95bd48334961bf0530cdc77b5a56f852c5c61b89d735fd711R81-R83),
the transitive dependencies of docarray were also listed as langchain
dependencies. This is unnecessary as the docarray project has an
appropriate
[extras](a01a05542d/pyproject.toml (L70)).
The docarray project also does not require this _pinned_ version of
protobuf, rather [a minimum
version](a01a05542d/pyproject.toml (L41)).
So this pinned version was likely in error.

To fix this, this PR reverts the explicit hnswlib and protobuf
dependencies and adds the hnswlib extras install for docarray (which
installs hnswlib and protobuf, as originally intended). Because version
`0.32.0`
of the docarray hnswlib extras added protobuf, we bump the docarray
dependency from `^0.31.0` to `^0.32.0`.

# revert docarray explicit transitive dependencies and use extras
instead

## Who can review?

@dev2049 -- reviewed the original PR
@eyurtsev -- bumped the pinned protobuf dependency a few days ago

---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-22 12:48:09 -04:00
Davis Chase
fcd88bccb3 Bump 177 (#5095) 2023-05-22 08:19:06 -07:00
Harrison Chase
10ba201d05 Harrison/neo4j (#5078)
Co-authored-by: Tomaz Bratanic <bratanic.tomaz@gmail.com>
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-22 07:31:48 -07:00
Deepak S V
49ca02711e Improved query, print & exception handling in REPL Tool (#4997)
Update to pull request https://github.com/hwchase17/langchain/pull/3215

Summary:
1) Improved the sanitization of the query (using regex) by removing the
python command (since gpt-3.5-turbo sometimes assumes the Python console
is a terminal and runs the python command first, which causes an error).
Also, sometimes one-line Python snippets contain single backticks.
2) Added 7 new test cases.

For more details, view the previous pull request.

---------

Co-authored-by: Deepak S V <svdeepak99@users.noreply.github.com>
2023-05-22 13:43:44 +00:00
Zander Chase
785502edb3 Add 'get_token_ids' method (#4784)
Let the user inspect the token ids in addition to getting the number of tokens
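
A hedged usage sketch (an OpenAI API key is assumed; the exact ids depend on the tokenizer):

```python
from langchain.llms import OpenAI

llm = OpenAI()
text = "hello world"
print(llm.get_num_tokens(text))  # token count, as before
print(llm.get_token_ids(text))   # the underlying token ids
```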

---------

Co-authored-by: Zach Schillaci <40636930+zachschillaci27@users.noreply.github.com>
2023-05-22 13:17:26 +00:00
Zander Chase
ef7d015be5 Separate Runner Functions from Client (#5079)
Extract the methods specific to running an LLM or Chain on a dataset to
separate utility functions.

This simplifies the client a bit and lets us separate concerns of LCP
details from running examples (e.g., for evals)
2023-05-22 05:28:47 +00:00
Leonid Ganeline
443ebe22f4 docs: Deployments page moved into Ecosystem/ (#4949)
# docs: `deployments` page moved into `ecosystem/`

The `Deployments` page moved into the `Ecosystem/` group

Small fixes:
- `index` page: fixed the order of items in the `Modules` list and in the
`Use Cases` list
- item `References/Installation` was lost in the `index` page (not on
the Navbar!). Restored it.
- added `|` marker in several places.

NOTE: I also thought about moving the `Additional Resources/Gallery`
page into the `Ecosystem` group but decided to leave it unchanged.
Please, advise on this.

## Who can review?

Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:
@dev2049
2023-05-21 21:18:22 -07:00
Hans van Dam
a395ff7c90 preserve language in conversation retrieval (#4969)
Without the addition of 'in its original language', the condensing
response, more often than not, outputs the rephrased question in
English, even when the conversation is in another language. This
question in English then transfers to the question in the retrieval
prompt and the chatbot is stuck in English.

I'm sometimes surprised that this does not happen more often, but
apparently the GPT models are smart enough to understand that when the
template contains

Question: ....
Answer:

then the answer should be in the language of the question.
2023-05-21 21:16:03 -07:00
Matt Robinson
bf3f554357 feat: batch multiple files in a single Unstructured API request (#4525)
### Submit Multiple Files to the Unstructured API

Enables batching multiple files into a single Unstructured API requests.
Support for requests with multiple files was added to both
`UnstructuredAPIFileLoader` and `UnstructuredAPIFileIOLoader`. Note that
if you submit multiple files in "single" mode, the result will be
concatenated into a single document. We recommend using this feature in
"elements" mode.

### Testing

The following should load both documents, using two of the example docs
from the integration tests folder.

```python
    from langchain.document_loaders import UnstructuredAPIFileLoader

    file_paths = ["examples/layout-parser-paper.pdf",  "examples/whatsapp_chat.txt"]

    loader = UnstructuredAPIFileLoader(
        file_paths=file_paths,
        api_key="FAKE_API_KEY",
        strategy="fast",
        mode="elements",
    )
    docs = loader.load()
```
2023-05-21 20:48:20 -07:00
Harrison Chase
0c3de0a0b3 Merge branch 'master' of github.com:hwchase17/langchain 2023-05-21 09:22:43 -07:00
Harrison Chase
224f73e978 move docs 2023-05-21 09:22:35 -07:00
Harrison Chase
6c25f860fd bump to 176 (#5064) 2023-05-21 09:19:25 -07:00
Harrison Chase
b0431c672b Harrison/psychic (#5063)
Co-authored-by: Ayan Bandyopadhyay <ayanb9440@gmail.com>
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-21 09:13:20 -07:00
Harrison Chase
8c661baefb change to type checking (#5062) 2023-05-21 09:09:49 -07:00
Jeffrey Zheng
424a573266 DOC: Misspelling in agents.rst documentation (#5038)
# Corrected Misspelling in agents.rst Documentation

In the
[documentation](https://python.langchain.com/en/latest/modules/agents.html)
it says "in fact, it is often best to have an Action Agent be in
**change** of the execution for the Plan and Execute agent."

**Suggested Change:** I propose correcting change to charge.

Fix for issue: #5039
2023-05-20 22:24:08 -07:00
Gengliang Wang
f9f08c4b69 Add documentation for Databricks integration (#5013)
# Add documentation for Databricks integration

This is a follow-up of https://github.com/hwchase17/langchain/pull/4702
It documents the details of how to integrate Databricks using langchain.
It also provides examples in a notebook.


## Who can review?
@dev2049 @hwchase17 since you are aware of the context. We will promote
the integration after this doc is ready. Thanks in advance!
2023-05-20 22:06:24 -07:00
tornikeo
a6ef20d7fe Fix annoying typo in docs (#5029)
# Fixes an annoying typo in docs

Fixes Annoying typo in docs - "Therefor" -> "Therefore". It's so
annoying to read that I just had to make this PR.
2023-05-20 22:02:21 -07:00
Davis Chase
9d1280d451 bump v175 (#5041) 2023-05-20 09:24:17 -07:00
UmerHA
7388248b3e Streaming only final output of agent (#2483) (#4630)
# Streaming only final output of agent (#2483)
As requested in issue #2483, this Callback allows streaming only the
final output of an agent (i.e. not the intermediate steps).

Fixes #2483

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-20 09:20:17 -07:00
Davis Chase
3bc0bf0079 fix prompt saving (#4987)
will add unit tests
2023-05-20 08:21:52 -07:00
Zander Chase
27e63b977a Add logs command (#5007)
to the plus server
2023-05-20 00:06:17 +00:00
Marcus Winter
2aa3754024 Check for single prompt in __call__ method of the BaseLLM class (#4892)
# Ensuring that users pass a single prompt when calling an LLM

- This PR adds a check to the `__call__` method of the `BaseLLM` class
to ensure that it is called with a single prompt
- Raises a `ValueError` if users try to call an LLM with a list of prompts
and instructs them to use the `generate` method instead

## Why this could be useful

I stumbled across this by accident. I accidentally called the OpenAI LLM
with a list of prompts instead of a single string and still got a
result:

```
>>> from langchain.llms import OpenAI
>>> llm = OpenAI()
>>> llm(["Tell a joke"]*2)
"\n\nQ: Why don't scientists trust atoms?\nA: Because they make up everything!"
```

It might be better to catch such a scenario preventing unnecessary costs
and irritation for the user.

## Proposed behaviour

```
>>> from langchain.llms import OpenAI
>>> llm = OpenAI()
>>> llm(["Tell a joke"]*2)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/marcus/Projects/langchain/langchain/llms/base.py", line 291, in __call__
    raise ValueError(
ValueError: Argument `prompt` is expected to be a single string, not a list. If you want to run the LLM on multiple prompts, use `generate` instead.
```
2023-05-19 16:54:26 -07:00
domchan
6c60251f52 Add self query translator for weaviate vectorstore (#4804)
# Add self query translator for weaviate vectorstore

Adds support for the EQ comparator and the AND/OR operators. 

Co-authored-by: Dominic Chan <dchan@cppib.com>
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-19 16:41:12 -07:00
Davis Chase
9928fb2193 Revert "API update: Engines -> Models (#4915)" (#5008)
This reverts commit 8c28ad6dac.

Seems to be causing #5001
2023-05-19 16:38:08 -07:00
SimFG
f07b9fde74 Update the GPTCache example (#4985)
# Update the GPTCache example

Fixes #4757
2023-05-19 16:35:36 -07:00
Leonid Ganeline
ddc2d4c21e added instruction about pip install google-generativeai (#5004)
# added instruction about pip install google-generativeai

added instruction about pip install google-generativeai
2023-05-19 15:32:24 -07:00
Nicolas
02632d52b3 docs: Big Mendable Improvements (#4964)
- Higher accuracy on the responses
- New redesigned UI
- Pretty Sources: display the sources by title / sub-section instead of
long URL.
- Fixed Reset Button bugs and some other UI issues
- Other tweaks
2023-05-19 15:31:48 -07:00
Leonid Ganeline
2ab0e1d526 changed ValueError to ImportError (#5006)
# changed ValueError to ImportError in except

Several places with this bug. ValueError does not catch ImportError.
2023-05-19 15:28:08 -07:00
Davis Chase
080eb1b3fc Fix graphql tool (#4984)
Fix construction and add unit test.
2023-05-19 15:27:50 -07:00
Mike McGarry
ddd595fe81 feature/4493 Improve Evernote Document Loader (#4577)
# Improve Evernote Document Loader

When exporting from Evernote you may export more than one note.
Currently the Evernote loader concatenates the content of all notes in
the export into a single document and only attaches the name of the
export file as metadata on the document.

This change ensures that each note is loaded as an independent document
and all available metadata on the note e.g. author, title, created,
updated are added as metadata on each document.

It also uses an existing optional dependency of `html2text` instead of
`pypandoc` to remove the need to download the pandoc application via
`download_pandoc()` to be able to use the `pypandoc` python bindings.

Fixes #4493 

Co-authored-by: Mike McGarry <mike.mcgarry@finbourne.com>
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-19 14:28:17 -07:00
Juanma Tristancho
729e935ea4 PGVector logger message level (#4920)
# Change the logger message level

The library is logging at `error` level a situation that is not an
error.
We noticed this error in our logs, but from our point of view it's an
expected behavior and the log level should be `warning`.
2023-05-19 14:01:26 -07:00
Peng Wang
62d0a01a0f Update python.py (#4971)
# Delete a useless "print"
2023-05-19 13:57:16 -07:00
Eugene Yurtsev
0ff59569dc Adds 'IN' metadata filter for pgvector for checking set presence (#4982)
# Adds "IN" metadata filter for pgvector to all checking for set
presence

PGVector currently supports metadata filters of the form:
```
{"filter": {"key": "value"}}
```
which will return documents where the "key" metadata field is equal to
"value".

This PR adds support for metadata filters of the form:
```
{"filter": {"key": { "IN" : ["list", "of", "values"]}}}
```

Other vector stores support this via an "$in" syntax. I chose to use
"IN" to match postgres' syntax, though happy to switch.
Tested locally with PGVector and ChatVectorDBChain.
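
A hedged usage sketch (the connection string, metadata key, and values are placeholders; assumes a running postgres instance with the pgvector extension):

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import PGVector

store = PGVector(
    connection_string="postgresql+psycopg2://user:pass@localhost:5432/db",
    embedding_function=OpenAIEmbeddings(),
)
docs = store.similarity_search(
    "how do I configure the loader?",
    filter={"source": {"IN": ["docs/a.md", "docs/b.md"]}},
)
```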


@dev2049

---------

Co-authored-by: jade@spanninglabs.com <jade@spanninglabs.com>
2023-05-19 13:53:23 -07:00
Davis Chase
56cb77a828 Make test gha workflow manually runnable (#4998)
if https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#workflow_dispatch
is to be believed this should make it possible to manually kick off the
test workflow, but I don't know much about these things
Jiaping(JP) Zhang
22d844dc07 Add async search with relevance score (#4558)
Add the async version for the search with relevance score

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-19 13:05:24 -07:00
Adheeban Manoharan
616e9a93e0 Bug fixes and error handling in Redis - Vectorstore (#4932)
# Bug fixes in Redis - Vectorstore (Added the version of redis to the
error message and removed the cls argument from a classmethod)


Co-authored-by: Tyler Hutcherson <tyler.hutcherson@redis.com>
2023-05-19 13:02:03 -07:00
Gengliang Wang
a87a2524c7 Remove autoreload in examples (#4994)
# Remove autoreload in examples
Remove the `autoreload` in examples since it is not necessary for most
users:
```
%load_ext autoreload
%autoreload 2
```
2023-05-19 17:35:58 +00:00
Davis Chase
2abf6b9f17 bump v0.0.174 (#4988) 2023-05-19 09:34:28 -07:00
Eugene Yurtsev
06e524416c power bi api wrapper integration tests & bug fix (#4983)
# Powerbi API wrapper bug fix + integration tests

- Bug fix by removing `TYPE_CHECKING` in utilities/powerbi.py
- Added integration test for power bi api in
utilities/test_powerbi_api.py
- Added integration test for power bi agent in
agent/test_powerbi_agent.py
- Edited .env.examples to help set up power bi related environment
variables
- Updated demo notebook with working code in
docs../examples/powerbi.ipynb - AzureOpenAI -> ChatOpenAI

Notes: 

Chat models (gpt3.5, gpt4) are much more capable than davinci at writing
DAX queries, so that is important for getting the agent to work properly.
Interestingly, gpt3.5-turbo needed the examples=DEFAULT_FEWSHOT_EXAMPLES
to write consistent DAX queries, so gpt4 seems necessary as the smart
llm.

Fixes #4325

## Before submitting

Azure-core and Azure-identity are necessary dependencies

check integration tests with the following:
`pytest tests/integration_tests/utilities/test_powerbi_api.py`
`pytest tests/integration_tests/agent/test_powerbi_agent.py`

You will need a power bi account with a dataset id + table name in order
to test. See .env.examples for details.

## Who can review?
@hwchase17
@vowelparrot

---------

Co-authored-by: aditya-pethe <adityapethe1@gmail.com>
2023-05-19 11:25:52 -04:00
Viswanadh Rayavarapu
e68dfa7062 Update planner_prompt.py (#4967)
Typos in the OpenAPI agent Prompt.
2023-05-19 11:17:10 -04:00
Edrick Da Corte Henriquez
e80585bab0 Update tutorials.md (#4960)
# Added a YouTube Tutorial

Added a LangChain tutorial playlist aimed at onboarding newcomers to
LangChain and its use cases.

I've shared the video in the #tutorials channel and it seemed to be well
received. I think this could be useful to the greater community.

## Who can review?

@dev2049
2023-05-19 10:40:14 -04:00
Rahul Rao
13c376345e Fixed assumptions misspelling (#4961)
Fixed the assumptions misspelling in the link mentioned below:


https://python.langchain.com/en/latest/modules/chains/examples/llm_summarization_checker.html


![image](https://github.com/hwchase17/langchain/assets/16189966/94cf2be0-b3d0-495b-98ad-e1f44331727e)

Fix for Issue:- #4959 

@hwchase17
2023-05-19 10:40:04 -04:00
Gengliang Wang
bf5a3c6dec Support Databricks in SQLDatabase (#4702)
This PR adds support for Databricks runtime and Databricks SQL by using
[Databricks SQL Connector for
Python](https://docs.databricks.com/dev-tools/python-sql-connector.html).
As a cloud data platform, accessing Databricks requires a URL as follows

`databricks://token:{api_token}@{hostname}?http_path={http_path}&catalog={catalog}&schema={schema}`.

**The URL is complicated and it may take users a while to figure it
out**. Since the `api_token`/`hostname`/`http_path` fields are
known in the Databricks notebook, I am proposing a new method
`from_databricks` to simplify the connection to Databricks.
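
A hedged sketch of the proposed method (catalog and schema names are placeholders; inside a Databricks notebook the token/hostname/http_path come from the runtime):

```python
from langchain import SQLDatabase

db = SQLDatabase.from_databricks(catalog="samples", schema="nyctaxi")
```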

## In Databricks Notebook
After changes, Databricks users only need to specify the `catalog` and
`schema` fields when using langchain.
<img width="881" alt="image"
src="https://github.com/hwchase17/langchain/assets/1097932/984b4c57-4c2d-489d-b060-5f4918ef2f37">

## In Jupyter Notebook
The method can be used on the local setup as well:
<img width="678" alt="image"
src="https://github.com/hwchase17/langchain/assets/1097932/142e8805-a6ef-4919-b28e-9796ca31ef19">
2023-05-19 00:42:06 -07:00
Harrison Chase
88a3a56c1a Add Spark SQL support (#4602) (#4956)
# Add Spark SQL support 
* Add Spark SQL support. It can connect to Spark via building a
local/remote SparkSession.
* Include a notebook example

I tried some complicated queries (window function, table joins), and the
tool works well.
Compared to the [Spark Dataframe
agent](https://python.langchain.com/en/latest/modules/agents/toolkits/examples/spark.html),
this tool is able to generate queries across multiple tables.

---------

Co-authored-by: Gengliang Wang <gengliang@apache.org>
Co-authored-by: Mike W <62768671+skcoirz@users.noreply.github.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: UmerHA <40663591+UmerHA@users.noreply.github.com>
Co-authored-by: 张城铭 <z@hyperf.io>
Co-authored-by: assert <zhangchengming@kkguan.com>
Co-authored-by: blob42 <spike@w530>
Co-authored-by: Yuekai Zhang <zhangyuekai@foxmail.com>
Co-authored-by: Richard He <he.yucheng@outlook.com>
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
Co-authored-by: Leonid Ganeline <leo.gan.57@gmail.com>
Co-authored-by: Alexey Nominas <60900649+Chae4ek@users.noreply.github.com>
Co-authored-by: elBarkey <elbarkey@gmail.com>
Co-authored-by: Davis Chase <130488702+dev2049@users.noreply.github.com>
Co-authored-by: Jeffrey D <1289344+verygoodsoftwarenotvirus@users.noreply.github.com>
Co-authored-by: so2liu <yangliu35@outlook.com>
Co-authored-by: Viswanadh Rayavarapu <44315599+vishwa-rn@users.noreply.github.com>
Co-authored-by: Chakib Ben Ziane <contact@blob42.xyz>
Co-authored-by: Daniel Chalef <131175+danielchalef@users.noreply.github.com>
Co-authored-by: Daniel Chalef <daniel.chalef@private.org>
Co-authored-by: Jari Bakken <jari.bakken@gmail.com>
Co-authored-by: escafati <scafatieugenio@gmail.com>
2023-05-18 20:53:08 -07:00
Harrison Chase
5feb60f426 Harrison/spell executor (#4914)
Co-authored-by: Jan Minar <rdancer@rdancer.org>
2023-05-18 20:43:33 -07:00
Aidan Boland
c06973261a Fix for syntax when setting search_path for Snowflake database (#4747)
# Fixes syntax for setting Snowflake database search_path

An error occurs when using a Snowflake database and providing a schema
argument.
I have updated the syntax to run a Snowflake specific query when the
database dialect is 'snowflake'.
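
A hedged sketch of the dialect-specific branch described (the exact SQL statements are assumptions, not the verbatim diff):

```python
from sqlalchemy import create_engine, text

engine = create_engine("snowflake://user:pass@account/db")  # placeholder URL
schema = "MY_SCHEMA"

with engine.connect() as connection:
    if engine.dialect.name == "snowflake":
        # Snowflake needs ALTER SESSION rather than a plain SET statement
        connection.execute(text(f"ALTER SESSION SET search_path='{schema}'"))
    else:
        connection.execute(text(f"SET search_path TO {schema}"))
```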
2023-05-18 20:30:38 -07:00
Mike Wang
db6f7ed0ba [nit] Simplify Spark Creation Validation Check A Little Bit (#4761)
- simplify the validation check a little bit.
- re-tested in jupyter notebook.

Reviewer: @hwchase17
2023-05-18 18:57:54 -07:00
escafati
e027a38f33 NIT: Instead of hardcoding k in each definition, define it as a param above. (#2675)
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
Co-authored-by: Davis Chase <130488702+dev2049@users.noreply.github.com>
2023-05-18 17:35:31 -07:00
Jari Bakken
3df2d831f9 Fix get_num_tokens for Anthropic models (#4911)
The Anthropic classes used `BaseLanguageModel.get_num_tokens` because of
an issue with multiple inheritance. Fixed by moving the method from
`_AnthropicCommon` to both its subclasses.

This change will significantly speed up token counting for Anthropic
users.
2023-05-18 16:32:27 -07:00
Daniel Chalef
c8c2276ccb Zep Retriever - Vector Search Over Chat History (#4533)
# Zep Retriever - Vector Search Over Chat History with the Zep Long-term
Memory Service

More on Zep: https://github.com/getzep/zep

Note: This PR is related to and relies on
https://github.com/hwchase17/langchain/pull/4834. I did not want to
modify the `pyproject.toml` file to add the `zep-python` dependency a
second time.

Co-authored-by: Daniel Chalef <daniel.chalef@private.org>
2023-05-18 16:27:18 -07:00
Chakib Ben Ziane
5525b704cc Chatconv agent: output parser exception (#4923)
The output parser from the chat conversational agent now raises
`OutputParserException` like the rest.

The `raise OutputParserException(...) from e` form also carries through
the original error details on what went wrong.

I added `ValueError` as a base class to `OutputParserException` to
avoid breaking code that was relying on `ValueError` as a way to catch
exceptions from the agent. So catching ValueError still works. Not sure
if this is a good idea, though?
2023-05-18 16:20:35 -07:00
Leonid Ganeline
a9bb3147d7 docs: vectorstores, different updates and fixes (#4939)
# docs: vectorstores, different updates and fixes

Multiple updates:
- added/improved descriptions
- fixed header levels
- added headers
- fixed headers
2023-05-18 15:35:47 -07:00
Leonid Ganeline
8f8593aac5 docs: added ecosystem/dependents page (#4941)
# docs: added `ecosystem/dependents` page

Added `ecosystem/dependents` page. Can we propose a better page name?
2023-05-18 13:11:08 -07:00
Viswanadh Rayavarapu
c9f963e295 Update custom_multi_action_agent.ipynb (#4931)
Updated the docs from 
"An agent consists of three parts:" to 
"An agent consists of two parts:" since there are only two parts in the
documentation
2023-05-18 11:53:12 -07:00
so2liu
3002c1d508 fix: error in gptcache example nb (#4930) 2023-05-18 11:49:45 -07:00
Jeffrey D
7e8e21c914 Correct typo in APIChain example notebook (Farenheit -> Fahrenheit) (#4938)
Correct typo in APIChain example notebook (Farenheit -> Fahrenheit)
2023-05-18 11:48:02 -07:00
Leonid Ganeline
c75c0775e1 docs supabase update (#4935)
# docs: updated `Supabase` notebook

- the title of the notebook was inconsistent (included redundant
"Vectorstore"). Removed this "Vectorstore"
- added `Postgres` to the title. It is important. The `Postgres` name
is much more popular than `Supabase`.
- added a description for `Postgres`
- added more info to the `Supabase` description
2023-05-18 10:42:08 -07:00
Davis Chase
55baa0d153 Update redis integration tests (#4937) 2023-05-18 10:22:17 -07:00
Davis Chase
440b8761f4 Redis kwargs fix (#4936)
cc @tylerhutcherson
2023-05-18 10:02:46 -07:00
elBarkey
a8ded21b69 FIX: GPTCache cache_obj creation loop (#4827)
The _get_gptcache method kept creating a new gptcache instance; here's the fix.

# Fix GPTCache cache_obj creation loop

Fixes #4830 

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-18 09:42:35 -07:00
Alexey Nominas
c9e2a01875 Update GPT4ALL integration (#4567)
# Update GPT4ALL integration

GPT4ALL have completely changed their bindings. They use a somewhat odd
implementation that doesn't fit well into base.py and will probably
be changed again, so it's a temporary solution.

Fixes #3839, #4628
2023-05-18 09:38:54 -07:00
Leonid Ganeline
e2d7677526 docs: compound ecosystem and integrations (#4870)
# Docs: compound ecosystem and integrations

**Problem statement:** We have a big overlap between the
References/Integrations and Ecosystem/LangChain Ecosystem pages. It
confuses users. It creates a situation where a new integration is added
on only one of these pages, which creates even more confusion.
- removed the References/Integrations page (but moved all its information
into the individual integration pages - in the next PR).
- renamed Ecosystem/LangChain Ecosystem into Integrations/Integrations.
I like the Ecosystem term. It is more generic and semantically richer
than the Integration term. But it mentally overloads users. The
`integration` term is more concrete.
UPDATE: after discussion, the Ecosystem is the term.
Ecosystem/Integrations is the page (in place of Ecosystem/LangChain
Ecosystem).

As a result, a user gets a single place to start with the individual
integration.
2023-05-18 09:29:57 -07:00
Harrison Chase
d5a0704544 dont error on sql import (#4647)
this makes it so we don't throw errors when importing langchain when
sqlalchemy==1.3.1

we don't really want to support 1.3.1 (seems like unnecessary maintenance
cost) BUT we would like it to not terribly error should someone decide
to run on it
2023-05-18 09:27:09 -07:00
Harrison Chase
c9a362e482 add alias for model (#4553)
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-18 09:12:23 -07:00
Richard He
7642f2159c Add human message as input variable to chat agent prompt creation (#4542)
# Add human message as input variable to chat agent prompt creation

This PR adds human message and system message input to
`CHAT_ZERO_SHOT_REACT_DESCRIPTION` agent, similar to [conversational
chat
agent](7bcf238a1a/langchain/agents/conversational_chat/base.py (L64-L71)).

I met this issue trying to use the `create_prompt` function when using the
[BabyAGI agent with tools
notebook](https://python.langchain.com/en/latest/use_cases/autonomous_agents/baby_agi_with_agent.html),
since BabyAGI uses “task” instead of “input” as the input variable. For a normal
zero shot react agent this is fine because I can manually change the
suffix to “{input}\n\n{agent_scratchpad}” just like the notebook, but I
cannot do this with the conversational chat agent, which blocks me from
using BabyAGI with the chat zero shot agent.

I tested this in my own project
[Chrome-GPT](https://github.com/richardyc/Chrome-GPT) and this fix
worked.

## Request for review
Agents / Tools / Toolkits
- @vowelparrot
2023-05-18 09:09:31 -07:00
Yuekai Zhang
1ed4228822 Fix bilibili (#4860)
# Fix bilibili api import error

The bilibili-api package is deprecated and there is no sync module.

Fixes #2673 #2724 

## Who can review?

Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:
@vowelparrot  @liaokongVFX 

2023-05-18 09:56:51 -04:00
Eugene Yurtsev
e46202829f feat #4479: TextLoader auto detect encoding and improved exceptions (#4927)
# TextLoader auto detect encoding and enhanced exception handling

- Add an option to enable encoding detection on `TextLoader`. 
- The detection is done using `chardet`
- The loading is done by trying all detected encodings in order of
confidence, or raising an exception otherwise.
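
A hedged usage sketch of the new option (the flag name follows this PR's description; the file path is a placeholder):

```python
from langchain.document_loaders import TextLoader

loader = TextLoader("notes.txt", autodetect_encoding=True)
docs = loader.load()  # tries detected encodings in order of confidence
```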

### New Dependencies:
- `chardet`

Fixes #4479 

## Who can review?

Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:

- @eyurtsev

---------

Co-authored-by: blob42 <spike@w530>
2023-05-18 09:55:14 -04:00
张城铭
8c28ad6dac API update: Engines -> Models (#4915)
# API update: Engines -> Models

see: https://community.openai.com/t/api-update-engines-models/18597

Co-authored-by: assert <zhangchengming@kkguan.com>
2023-05-18 09:54:42 -04:00
Eugene Yurtsev
c06a47a691 Load specific file types from Google Drive (issue #4878) (#4926)
# Load specific file types from Google Drive (issue #4878)
Add the possibility to define what file types you want to load from
Google Drive.
 
```
 loader = GoogleDriveLoader(
    folder_id="1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5",
    file_types=["document", "pdf"]
    recursive=False
)
```

Fixes #4878

## Who can review?
Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:
DataLoaders
- @eyurtsev

Twitter: [@UmerHAdil](https://twitter.com/@UmerHAdil) | Discord:
RicChilligerDude#7589

---------

Co-authored-by: UmerHA <40663591+UmerHA@users.noreply.github.com>
2023-05-18 09:27:53 -04:00
Harrison Chase
dfbf45f028 bump version to 173 (#4910) 2023-05-17 23:36:45 -07:00
Harrison Chase
b8d48939a2 Harrison/unified objectives (#4905)
Co-authored-by: Matthias Samwald <samwald@gmx.at>
2023-05-17 23:03:57 -07:00
Harrison Chase
9165267f8a Harrison/improved retry tool (#4842) 2023-05-17 21:41:01 -07:00
Harrison Chase
ba023d53ca Harrison/faiss norm (#4903)
Co-authored-by: Jiaxin Shan <seedjeffwan@gmail.com>
2023-05-17 21:40:49 -07:00
Harrison Chase
9e2227ba11 Harrison/serper api bug (#4902)
Co-authored-by: Jerry Luan <xmaswillyou@gmail.com>
2023-05-17 21:40:39 -07:00
Leonid Ganeline
c998569c8f docs: text splitters improvements (#4490)
# docs: text splitters improvements

Changes are only in the Jupyter notebooks.
- added links to the source packages and a short description of these
packages
- removed " Text Splitters" suffixes from the TOC elements (they made
the list of the text splitters messy)
- moved the text splitters that are based on the length function into a
separate list. They can be mixed with any classes from the "Text
Splitters", so it is a different classification.

## Who can review?
        @hwchase17 - project lead
        @eyurtsev
        @vowelparrot

NOTE: please check out the results of the `Python code` text splitter
example (text_splitters/examples/python.ipynb). It looks suboptimal.
2023-05-17 21:33:34 -07:00
Steve Kim
613bf9b514 Update getting_started.md (#4482)
# Added another helpful way for developers who want to set OpenAI API
Key dynamically

Previous methods, like exporting environment variables, are good for
project-wide settings, but many recent use cases need to assign API keys
dynamically.

```python
from langchain.llms import OpenAI
llm = OpenAI(openai_api_key="your-api-key")
```

## Before submitting
```bash
export OPENAI_API_KEY="..."
```
Or,
```python
import os
os.environ["OPENAI_API_KEY"] = "..."
```

<hr>

Thank you.
Cheers,
Bongsang
2023-05-17 21:32:25 -07:00
Ismael G Serrano
41e2394c9c Fix AzureOpenAI embeddings documentation example. model -> deployment (#4389)
# Documentation for Azure OpenAI embeddings model

- OPENAI_API_VERSION environment variable is needed for the endpoint
- The constructor does not work with `model`; it works with `deployment`.

I fixed it in the notebook.

(This is my first contribution)

## Who can review?

@hwchase17 
@agola

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-05-17 21:05:53 -07:00
Davis Chase
a4ac006658 Update gallery (#4873) 2023-05-17 20:59:41 -07:00
Davis Chase
8966f61ca5 Zep memory (#4898)
Co-authored-by: Daniel Chalef <daniel.chalef@private.org>
Co-authored-by: Daniel Chalef <131175+danielchalef@users.noreply.github.com>
2023-05-17 20:01:01 -07:00
Davis Chase
e28bdf4453 Cadlabs/python tool sanitization (#4754)
Co-authored-by: BenSchZA <BenSchZA@users.noreply.github.com>
2023-05-17 19:46:12 -07:00
Eugene Yurtsev
0dc304ca80 Add html parsers (#4874)
# Add bs4 html parser

* Some minor refactors
* Extract the bs4 html parsing code from the bs html loader
* Move some tests from integration tests to unit tests
2023-05-17 22:39:11 -04:00
Eugene Yurtsev
8e41143bf5 Add a generic document loader (#4875)
# Add generic document loader

* This PR adds a generic document loader which can assemble a loader
from a blob loader and a parser
* Adds a registry for parsers
* Populate registry with a default mimetype based parser
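
A sketch of the composition this enables; the module paths and the `TextParser` choice are assumptions based on the description, not a confirmed API:

```python
from langchain.document_loaders.blob_loaders import FileSystemBlobLoader
from langchain.document_loaders.generic import GenericLoader
from langchain.document_loaders.parsers.txt import TextParser  # assumed path

# Assemble a loader from a blob loader (locates the raw bytes) and a
# parser (turns each blob into Documents).
loader = GenericLoader(
    blob_loader=FileSystemBlobLoader("docs/", glob="**/*.txt"),
    blob_parser=TextParser(),
)
docs = loader.load()
```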


## Expected changes

- Parsing involves loading content via IO so can be sped up via:
  * Threading in sync
  * Async  
- The actual parsing logic may be computationally involved: may need to
figure out how to add multi-processing support
- May want to add suffix based parser since suffixes are easier to
specify in comparison to mime types

## Before submitting

No notebooks yet, we first need to get a few of the basic parsers up
(prior to advertising the interface)
2023-05-17 22:38:55 -04:00
Davis Chase
df0c33a005 Faiss no avx2 (#4895)
Co-authored-by: Ali Mirlou <alimirlou@gmail.com>
2023-05-17 19:18:57 -07:00
Emil Ahlbäck
5c9205d5f4 ConversationalChatAgent: Allow customizing TEMPLATE_TOOL_RESPONSE (#2361)
It's currently not possible to change the `TEMPLATE_TOOL_RESPONSE`
prompt for ConversationalChatAgent; this PR changes that.

---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-17 17:23:08 -07:00
Zander Chase
1ff7c958b0 Bold Crumbs (#4876) 2023-05-17 22:50:35 +00:00
Alexander Miasoiedov (Myasoedov)
4c3ab55e94 feat(Add FastAPI + Vercel deployment option): (#4520)
# Update deployments doc with langcorn API server

API server example 

```python
from fastapi import FastAPI

from langcorn import create_service

app: FastAPI = create_service(
    "examples.ex1:chain",
    "examples.ex2:chain",
    "examples.ex3:chain",
    "examples.ex4:sequential_chain",
    "examples.ex5:conversation",
    "examples.ex6:conversation_with_summary",
)

```
More examples: https://github.com/msoedov/langcorn/tree/main/examples

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-17 15:50:25 -07:00
Taqi Jaffri
ef8b5f64bc Tiny code review and docs fix for Docugami DataLoader (#4877)
# Docs and code review fixes for Docugami DataLoader

1. I noticed a couple of hyperlinks that are not loading in the
langchain docs (I guess they need explicit anchor tags). Added those.
2. In code review @eyurtsev had a
[suggestion](https://github.com/hwchase17/langchain/pull/4727#discussion_r1194069347)
to allow string paths. Turns out just updating the type works (I tested
locally with string paths).

# Pre-submission checks
I ran `make lint` and `make tests` successfully.

---------

Co-authored-by: Taqi Jaffri <tjaffri@docugami.com>
2023-05-17 15:31:43 -07:00
C.J. Jameson
d6e0b9a43d fix homepage typo (#4883)
# Fix Homepage Typo

## Who can review?

Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested... not sure
2023-05-17 15:30:23 -07:00
Leonid Ganeline
b96ab4b763 docs retriever improvements (#4430)
# Docs: improvements in the `retrievers/examples/` notebooks

Its primary purpose is to make the Jupyter notebook examples
**consistent** and more suitable for first-time viewers.
- add links to the integration source (if applicable) with a short
description of this source;
- removed `_retriever` suffix from the file names (where it existed) for
consistency;
- removed ` retriever` from the notebook title (where it existed) for
consistency;
- added code to install necessary Python package(s);
- added code to set up the necessary API Key.
- very small fixes in notebooks from other folders (for consistency):
  - docs/modules/indexes/vectorstores/examples/elasticsearch.ipynb
  - docs/modules/indexes/vectorstores/examples/pinecone.ipynb
  - docs/modules/models/llms/integrations/cohere.ipynb
- fixed a misspelling in a langchain/retrievers/time_weighted_retriever.py
comment (sorry about this change in a .py file)

## Who can review
@dev2049
2023-05-17 15:29:22 -07:00
Justin Levi Winter
0147f845f1 Update getting_started.ipynb (#4850)
Minor grammar issue.
2023-05-17 13:19:14 -07:00
Yong Fu
3e12f0957a Remove unused variables in Milvus vectorstore (#4868)
# Remove unused variables in Milvus vectorstore
This PR simply removes an unused variable in Milvus. The variable looks
like a copy-paste from other functions in Milvus, but it is really
unnecessary.
2023-05-17 12:00:37 -07:00
Eugene Yurtsev
c5ab9782c6 Add beautiful soup 4 to extended testing extra (#4869)
# Add bs4 to extended testing extra

Updating extended testing extra in preparation for more refactors.
2023-05-17 14:11:26 -04:00
Ryan Culligan
6a9cdc43f5 Fix TypeError in Vectorstore Redis class methods (#4857)
# Fix TypeError in Vectorstore Redis class methods

This change resolves a TypeError that was raised when invoking the
`from_texts_return_keys` method from the `from_texts` method in the
`Redis` class. The error was due to the `cls` argument being passed
explicitly, which led to it being provided twice since it's also
implicitly passed in class methods. No relevant tests were added as the
issue appeared to be better suited for linters to catch proactively.

Changes:
- Removed `cls=cls` from the call to `from_texts_return_keys` in the
`from_texts` method.

Related to:
https://github.com/hwchase17/langchain/pull/4653
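
A stripped-down illustration of the bug (toy code, not the real class):

```python
class Redis:
    @classmethod
    def from_texts_return_keys(cls, texts, **kwargs):
        return [f"doc:{i}" for i, _ in enumerate(texts)]

    @classmethod
    def from_texts(cls, texts, **kwargs):
        # `cls` is already bound by the classmethod machinery, so the old
        # call `cls.from_texts_return_keys(cls=cls, ...)` supplied it twice
        # and raised a TypeError; the fix drops the explicit cls argument.
        return cls.from_texts_return_keys(texts, **kwargs)
```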
2023-05-17 10:48:09 -07:00
Eugene Yurtsev
2d20a1196e Hugging Face Loader: Add lazy load (#4799)
# Add lazy load to HF datasets loader

Unfortunately, there are no tests as far as I can tell. Verified the code manually.
2023-05-17 12:04:23 -04:00
Davis Chase
a63ab7ded1 bump 172 (#4864) 2023-05-17 08:54:39 -07:00
yujiosaka
2f8eb95a91 Remove unnecessary comment (#4845)
# Remove unnecessary comment

Remove unnecessary comment accidentally included in #4800

## Before submitting

- no test
- no document

2023-05-17 11:53:03 -04:00
UmerHA
e257380deb Typos (#4851)
# Fixed typos (issues #4818 & #4668 & more typos)
- At some places, it said `model = ChatOpenAI(model='gpt-3.5-turbo')`
but should be `model = ChatOpenAI(model_name='gpt-3.5-turbo')`
- Fixes some other typos

Fixes #4818, #4668

## Who can review?
Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:
        Models
        - @hwchase17
        - @agola11
        Agents / Tools / Toolkits
        - @vowelparrot
2023-05-17 11:52:22 -04:00
Zander Chase
8dcad0f272 Add Support for Flexible Input Format for LLM and Chat Model Runs (#4805)
Previously, the client expected a strict 'prompt' or 'messages' format
and wouldn't permit running a chat model or llm on prompts or messages
(respectively).

Since many datasets may want to specify custom `key: string` pairs, relax
this requirement.
Also, add support for running a chat model on raw prompts and LLM on
chat messages through their respective fallbacks.
2023-05-17 14:24:17 +00:00
Zander Chase
a47c62fcba Add dev option (#4828)
enable running
```
langchain plus start --dev
```

to use the RC images instead
2023-05-17 14:09:25 +00:00
Harrison Chase
720ac49f42 2markdown loader (#4796)
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2023-05-16 23:42:53 -07:00
Ankush Gola
aa73a888fa Some notebook and client fixes (add retries, clean up docs, etc) (#4820)
2023-05-16 20:23:00 -07:00
Davis Chase
0a591da6db Add weaviate by_text (#4824)
Thanks @ZouhairElhadi! Made small change

Closes #4742

---------

Co-authored-by: Zouhair Elhadi <zouhair11elhadi@gmail.com>
Co-authored-by: ZouhairElhadi <87149442+ZouhairElhadi@users.noreply.github.com>
2023-05-16 19:43:15 -07:00
Zander Chase
d1b6839d97 Retry session and tenant (#4822) 2023-05-17 01:54:40 +00:00
Nguyen Trung Duc (john)
49e4aaf673 Fix subclassing OpenAIEmbeddings (#4500)
# Fix subclassing OpenAIEmbeddings

Fixes #4498 

## Before submitting

- Problem: Due to annotated type `Tuple[()]`.
- Fix: Change the annotated type to "Iterable[str]". Even though
tiktoken uses a
[Collection[str]](095924e02c/tiktoken/core.py (L80))
type annotation, pydantic doesn't support the Collection type, and
[Iterable](https://docs.pydantic.dev/latest/usage/types/#typing-iterables)
is the closest to Collection.
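
With the change, a subclass like the following should build cleanly (a minimal sketch; before the fix, pydantic raised while constructing the subclass because of the `Tuple[()]` annotation):

```python
from langchain.embeddings import OpenAIEmbeddings

class CachedOpenAIEmbeddings(OpenAIEmbeddings):
    # Hypothetical subclass: merely defining it used to trip pydantic's
    # handling of the Tuple[()] annotation; now it subclasses cleanly.
    cache_dir: str = ".embedding_cache"
```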
2023-05-16 18:35:19 -07:00
Harrison Chase
08df80bed6 console callback verbose (#4696)
add verbose callback

Co-authored-by: vowelparrot <130414180+vowelparrot@users.noreply.github.com>
2023-05-17 01:28:43 +00:00
David Peterson
d5d4c0a172 Update summarize.ipynb (#4529)
# Update order in which tasks are stated (logically correct)

Fixes the order in which steps are placed under titles.

@vowelparrot
2023-05-16 18:14:00 -07:00
Django
bcffc704c1 fix: agenerate miss run_manager args in llm.py (#4566)
# fix: agenerate miss run_manager args in llm.py

2023-05-16 17:37:56 -07:00
Brendan Mannix
4e56d3119c update qdrant docs to reflect the proper way to initialize Qdrant() constructor (#4596)
# update qdrant docs to reflect the proper way to initialize Qdrant()
constructor

The [Qdrant
docs](https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/qdrant.html)
still contain an old reference for passing an `embedding_function` into
the constructor. This is no longer supported.

This PR updates the docs to reflect the proper way to initialize
`Qdrant()`

Old:
![Screenshot 2023-05-12 at 3 06 33
PM](https://github.com/hwchase17/langchain/assets/1552962/dd4063d2-2a07-4340-91bb-e305f7215ddd)

New:
![Screenshot 2023-05-12 at 3 21 09
PM](https://github.com/hwchase17/langchain/assets/1552962/aebc3f63-1a8b-4ca3-93c0-a2ce30dcd282)
2023-05-16 17:30:38 -07:00
Sean Morgan
5372a06a8c DOC: Fix SageMaker example (#4598)
# Fix SageMaker example typing

Since https://github.com/hwchase17/langchain/pull/3249, a new type
`LLMContentHandler` is enforced for SageMaker Endpoints

Fixes #4168
2023-05-16 17:28:16 -07:00
Steve Kim
e90654f39b Added cleaning up the downloaded PDF files (#4601)
ArxivAPIWrapper searches and downloads PDFs to get related information.
But I found that it doesn't delete the downloaded file, so PDF files
accumulate on the server, and a single PDF can be around 28 MB. I added a
line that deletes each file once it has been processed, since files that
large are too costly to keep on the server.

# Clean up downloaded PDF files
- Changes: Added new line to delete downloaded file
- Background: To get information on an arXiv paper, the ArxivAPIWrapper
class downloads its PDF. It's a natural approach, but the wrapper retains
the downloaded PDF files on the server.
- Problem: A single PDF is about 28 MB, which is too much to accumulate
on a small server (e.g., on AWS).
- Dependency: import os

Thank you.

---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-16 17:26:56 -07:00
Quinn
6fbd5e837f Update planner_prompt.py, change usery to user (#4623)
# Fix misspell in planner_prompt.py

before

```
Usery query: I want to buy a couch
```

after

```
User query: I want to buy a couch
```
2023-05-16 17:24:27 -07:00
Tony Zhang
432421ffa5 [Fix][GenerativeAgent] Get the memory importance score from regex matched group (#4636)
# Get the memory importance score from regex matched group

In `GenerativeAgentMemory`, `_score_memory_importance()` builds a
prompt to get a rating score. The prompt is:
```
        prompt = PromptTemplate.from_template(
            "On the scale of 1 to 10, where 1 is purely mundane"
            + " (e.g., brushing teeth, making bed) and 10 is"
            + " extremely poignant (e.g., a break up, college"
            + " acceptance), rate the likely poignancy of the"
            + " following piece of memory. Respond with a single integer."
            + "\nMemory: {memory_content}"
            + "\nRating: "
        )
```
Some LLMs will respond with, for example, `Rating: 8`; thus we want to
extract the score from the matched regex group.
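
A hedged sketch of pulling the integer out of the first matched group (the exact pattern used in the PR may differ):

```python
import re

response = "Rating: 8"  # the model echoed the label instead of a bare integer
match = re.search(r"(\d+)", response)
score = int(match.group(1)) if match else 0  # the fallback value is an assumption
```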
2023-05-16 16:59:50 -07:00
Daniel Maturana
be405ac139 Query_constructor.base.py function _get_prompt() not including passed examples. (#4680)
The function _get_prompt() was returning DEFAULT_EXAMPLES even when
custom examples were given: the returned FewShotPromptTemplate used
DEFAULT_EXAMPLES rather than the passed-in examples.
2023-05-16 16:31:10 -07:00
Anam Hira
3af448d72e Update huggingface_tools.ipynb (#4700) 2023-05-16 16:28:27 -07:00
rajib
e28f4a5f39 changed cohere.py to update the default model of embedding (#4709)
# The Cohere embedding models no longer use the large/small names; those
are deprecated. Changed the module's default model.

Fixes #4694


Co-authored-by: rajib76 <rajib76@yahoo.com>
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-16 16:27:23 -07:00
charosen
75fe9d3555 Add from_file method to message prompt template (#4713)
**Feature**: This PR adds a `from_template_file` class method to
BaseStringMessagePromptTemplate. This helps users create message prompt
templates directly from template files, including
`ChatMessagePromptTemplate`, `HumanMessagePromptTemplate`,
`AIMessagePromptTemplate` & `SystemMessagePromptTemplate`.
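
A usage sketch, assuming the class method takes a template path plus the input variables (the file name and its contents are illustrative):

```python
from langchain.prompts.chat import HumanMessagePromptTemplate

# greeting.txt contains: Hello {name}, please summarize {topic}.
prompt = HumanMessagePromptTemplate.from_template_file(
    "greeting.txt", input_variables=["name", "topic"]
)
message = prompt.format(name="Ada", topic="vector stores")
```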

**Tests**: Unit tests have been added in this PR.

Co-authored-by: charosen <charosen@bupt.cn>
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-16 16:25:17 -07:00
Chandan Routray
e8d46bdd9b Replaced SQLDatabaseChain deprecated direct initialisation with from_llm method (#4778)
# Removed usage of deprecated methods

Replaced `SQLDatabaseChain` deprecated direct initialisation with
`from_llm` method
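
For reference, the preferred initialisation now looks roughly like this (the connection string is illustrative):

```python
from langchain import OpenAI, SQLDatabase, SQLDatabaseChain

db = SQLDatabase.from_uri("sqlite:///example.db")
llm = OpenAI(temperature=0)

# instead of the deprecated SQLDatabaseChain(llm=llm, database=db)
chain = SQLDatabaseChain.from_llm(llm, db)
```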

## Who can review?

@hwchase17
@agola11

---------

Co-authored-by: imeckr <chandanroutray2012@gmail.com>
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-16 15:59:06 -07:00
Chandan Routray
11341fcecb Fixed query checker for SQLDatabaseChain (#4780)
# Fixed query checker for SQLDatabaseChain

When `SQLDatabaseChain`'s llm attribute was deprecated, the query
checker stopped working when `SQLDatabaseChain` was initialised via the
`from_llm` method. With this fix, `SQLDatabaseChain`'s query checker
uses the same `llm` as the `llm_chain`.


## Who can review?
@hwchase17 - project lead

Co-authored-by: imeckr <chandanroutray2012@gmail.com>
2023-05-16 15:58:58 -07:00
Yeong0228
08876ad066 Fix SelfQueryRetriever, passing new query to vector store (#4774)
# Fix SelfQueryRetriever, passing new query to vector store
2023-05-16 15:46:22 -07:00
Mark Pors
8fd4d5d117 Added dependencies to make example executable (#4790)
- Installation of non-colab packages
- Get API keys

# Added dependencies to make notebook executable on hosted notebooks

## Who can review?

Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:

@hwchase17
@vowelparrot
2023-05-16 15:46:09 -07:00
Mark Pors
5bc7082e82 Cleanup and added dependencies to make example executable (#4795)
- Installation of non-colab packages
- Get API keys
- Get rid of warnings

# Cleanup and added dependencies to make notebook executable on hosted
notebooks
@hwchase17
@vowelparrot
2023-05-16 15:29:01 -07:00
keenangraham
bcce9a3a92 Fix age inconsistency in plan and execute Jupyter notebook example (#4814)
The current example in
https://python.langchain.com/en/latest/modules/agents/plan_and_execute.html
has an inconsistent reasoning step (it observes 28 years but then states
26 years):

```
Observation: 28 years
Thought:Based on my search, Gigi Hadid's current age is 26 years old. 
Action:
{
  "action": "Final Answer",
  "action_input": "Gigi Hadid's current age is 26 years old."
}
```

Guessing this is model noise. Rerunning seems to give the correct answer
of 28 years.
2023-05-16 15:27:27 -07:00
Prateek K. Keshari
61f9c52fc7 Update twitter-the-algorithm-analysis-deeplake.ipynb (#4812)
Changed model to model_name
2023-05-16 15:27:15 -07:00
yujiosaka
6561efebb7 Accept uuids kwargs for weaviate (#4800)
# Accept uuids kwargs for weaviate

Fixes #4791
2023-05-16 15:26:46 -07:00
Adam Quigley
e78c9be312 Add Confluence Loader unit tests (#3333)
Adds some basic unit tests for the ConfluenceLoader that can be extended
later. Ports this [PR from
llama-hub](https://github.com/emptycrown/llama-hub/pull/208) and adapts
it to `langchain`.

@Jflick58 and @zywilliamli adding you here as potential reviewers

---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-16 15:17:07 -07:00
Magnus Friberg
d126276693 Specify which data to return from chromadb (#4393)
# Improve the Chroma get() method by adding the optional "include"
parameter.

The Chroma get() method excludes embeddings by default. You can
customize the response by specifying the "include" parameter to
selectively retrieve the desired data from the collection.
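
A usage sketch, assuming `db` is an existing `Chroma` instance and the accepted `include` values mirror chromadb's:

```python
# get() omits embeddings by default; request them explicitly
data = db.get(include=["embeddings", "documents", "metadatas"])
```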

---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-16 14:43:09 -07:00
Raduan Al-Shedivat
00c6ec8a2d fix(document_loaders/telegram): fix pandas calls + add tests (#4806)
# Fix Telegram API loader + add tests.
I was testing this integration and it was broken with the following error:
```python
message_threads = loader._get_message_threads(df)
KeyError: False
```
Also, this particular loader didn't have any tests / related group in
poetry, so I added those as well.

@hwchase17 / @eyurtsev please take a look on this fix PR.

---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-16 14:35:25 -07:00
Zander Chase
206c87d525 Change server start name (#4811)
to `langchain plus start/stop`
2023-05-16 20:04:09 +00:00
Eugene Yurtsev
255690d78e Catch changes to test group (#4802)
# Catch changes to test group

Add test to catch changes to test group.
2023-05-16 14:48:56 -04:00
Eugene Yurtsev
c3b6129beb Block sockets for unit-tests (#4803)
# Block usage of sockets during unit tests

Catch any tests that attempt to use the network.
2023-05-16 14:41:24 -04:00
了空
f7e3d97b19 Remove unnecessary spaces from document object’s page_content of BiliBiliLoader (#4619)
- Remove unnecessary spaces from document object’s page_content of
BiliBiliLoader
- Fix BiliBiliLoader document and test file
2023-05-16 13:13:57 -04:00
Eugene Yurtsev
f47ec5b4b6 Docugami docs: First cell should be a title cell (#4735)
# Make first cell a title in docugami docs

This makes the first cell a title cell in docugami notebook
2023-05-16 13:12:14 -04:00
Eugene Yurtsev
d403f659ea Update google protobuf dep (#4798)
# Update google protobuf dep

Resolve: https://github.com/hwchase17/langchain/security/dependabot/11
2023-05-16 12:25:07 -04:00
Eugene Yurtsev
3ecd7c9641 Add check to verify poetry.toml (#4794)
# Add poetry check to github action

Check poetry toml file during tests for errors
2023-05-16 11:53:06 -04:00
Ikko Eltociear Ashimine
f5a476fdd4 Fix typo in dataframe.py (#4786)
# Fix typo in dataframe.py (#4786)

Fixed typo.
```
yeild -> yield
```
2023-05-16 11:49:04 -04:00
Eugene Yurtsev
14bedf1cc5 Github Action: Fix poetry lock file checking (#4789)
Fix how poetry lock file is checked to avoid skipping caches silently.
2023-05-16 11:40:28 -04:00
Davis Chase
7ce43372c3 Version 171 (#4788) 2023-05-16 08:24:45 -07:00
Zander Chase
bee136efa4 Update Tracing Walkthrough (#4760)
Add client methods to read / list runs and sessions.

Update walkthrough to:
- Let the user create a dataset from the runs without going to the UI
- Use the new CLI command to start the server

Improve the error message when `docker` isn't found
2023-05-16 13:26:43 +00:00
Zander Chase
fc0a3c8500 Persist Volume After Stop (#4763)
Previously, the data would be removed after shutting down the server.
This mounts a db volume that isn't erased between calls
2023-05-16 13:10:13 +00:00
Harrison Chase
a7af32c274 Cassandra support for chat history (#4378) (#4764)
# Cassandra support for chat history

### Description

- Store chat messages in cassandra

### Dependency

- cassandra-driver - Python Module

## Before submitting

- Added Integration Test

## Who can review?

@hwchase17
@agola11


Co-authored-by: Jinto Jose <129657162+jj701@users.noreply.github.com>
2023-05-15 23:43:09 -07:00
Harrison Chase
c4c7936caa Harrison/wiki loader (#4765)
Co-authored-by: Guillermo Segovia <T1b4lt@users.noreply.github.com>
2023-05-15 23:42:57 -07:00
Filip Haltmayer
c632f7fc4e Add Milvus and Zilliz Retrievals (#4416)
Adds the basic retrievers for Milvus and Zilliz. Hybrid search support
will be added in the future.

Signed-off-by: Filip Haltmayer <filip.haltmayer@zilliz.com>
2023-05-15 21:22:54 -07:00
Bradley James
2e43954bc3 fixed on_llm issue (#4717)
Fixes #4714
2023-05-16 01:36:21 +00:00
Zander Chase
bf0904b676 Add Server Command (#4695)
Add Support for `langchain server {start|stop}` commands, with support for using ngrok to tunnel to a remote notebook
2023-05-16 00:44:30 +00:00
Anirudh Suresh
03ac39368f Fixing DeepLake Overwrite Flag (#4683)
# Fix DeepLake Overwrite Flag Issue

Fixes Issue #4682: essentially, setting overwrite to False in the
DeepLake constructor still triggers an overwrite, because the logic is
just checking for the presence of "overwrite" in kwargs. The fix is
simple: check whether "overwrite" is in kwargs AND
kwargs["overwrite"] == True.

Added a new test in
tests/integration_tests/vectorstores/test_deeplake.py to reflect the
desired behavior.


Co-authored-by: Anirudh Suresh <ani@Anirudhs-MBP.cable.rcn.com>
Co-authored-by: Anirudh Suresh <ani@Anirudhs-MacBook-Pro.local>
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-15 17:39:16 -07:00
d 3 n 7
8bb32d77d0 Update utils.py to make headless an optional argument (#4745)
Makes `headless` an optional argument for
create_async_playwright_browser() and create_sync_playwright_browser().
By default, no functionality is changed.

This allows, for example, people with disabilities to drive a web browser
with their voice while still seeing the content on the screen, as well as
many other use cases.
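
Usage sketch (the `headless=True` default preserves the old behaviour):

```python
from langchain.tools.playwright.utils import create_sync_playwright_browser

# pass headless=False to render a visible browser window
browser = create_sync_playwright_browser(headless=False)
```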

---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-15 17:29:06 -07:00
Mose Tronci
a9dbe90447 Exponential back-off support for Google PaLM api (#4001)
This PR adds exponential back-off to the Google PaLM API to gracefully
handle rate-limiting errors.
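
A sketch of the pattern, assuming the tenacity-style decorator langchain uses for other providers (the parameters and the toy failure are illustrative):

```python
import random
from tenacity import retry, stop_after_attempt, wait_exponential

@retry(wait=wait_exponential(multiplier=1, min=2, max=60),
       stop=stop_after_attempt(6))
def call_palm(prompt: str) -> str:
    # stand-in for the real google.generativeai call; on a rate-limit
    # error the decorator sleeps ~2s, 4s, 8s, ... before retrying
    if random.random() < 0.5:
        raise RuntimeError("429: rate limited")
    return "response"
```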

---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-15 17:21:11 -07:00
Leonid Ganeline
a6f3ec94bc docs: added additional_resources folder (#4748)
# docs: added `additional_resources` folder

The additional resource files were inside the docs top-level folder,
which cluttered it.
- added the `additional_resources` folder and moved correspondent files
to this folder;
- fixed a broken link to the "Model comparison" page (model_laboratory
notebook)
- fixed a broken link to one of the YouTube videos (sorry, it is not
directly related to this PR)

## Who can review?

@dev2049
2023-05-15 17:12:47 -07:00
Zander Chase
a128d95aeb Fix Async Shared Resource Bug (#4751)
Use an async queue to distribute tracers rather than inappropriately
sharing a single one
2023-05-16 00:04:01 +00:00
whuwxl
3f0357f94a Add summarization task type for HuggingFace APIs (#4721)
# Add summarization task type for HuggingFace APIs

Add summarization task type for HuggingFace APIs.
This task type is described by [HuggingFace inference
API](https://huggingface.co/docs/api-inference/detailed_parameters#summarization-task)

My project utilizes LangChain to connect multiple LLMs, including
various HuggingFace models that support the summarization task.
Integrating this task type is highly convenient and beneficial.
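
Usage sketch (the repo id is illustrative; assumes the hub wrapper now accepts `task="summarization"`):

```python
from langchain import HuggingFaceHub

llm = HuggingFaceHub(
    repo_id="facebook/bart-large-cnn",  # a summarization model
    task="summarization",
)
summary = llm("<long article text to condense>")
```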

Fixes #4720
2023-05-15 16:26:17 -07:00
Zander Chase
580861e7f2 Revert "Make serpapi base url configurable via env (#4402)" (#4750)
This reverts commit 5111bec540.

This PR introduced a bug in the async API (the `url` param isn't bound);
it also didn't update the synchronous API correctly, which makes it
error-prone (the behavior of the async and sync endpoints would be
different)
2023-05-15 16:17:16 -07:00
shiyu22
21b9397342 Update the milvus example (#4706)
# Fix issue when running example

- add the query content
- update the `user` parameter with Zilliz

Signed-off-by: shiyu22 <shiyu.chen@zilliz.com>
2023-05-15 16:16:57 -07:00
hilarious-viking
7d15669b41 llama-cpp: add gpu layers parameter (#4739)
Adds gpu layers parameter to llama.cpp wrapper
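
Usage sketch (assuming the parameter surfaces as `n_gpu_layers`, mirroring llama.cpp; the model path is illustrative):

```python
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./models/ggml-model-q4_0.bin",
    n_gpu_layers=32,  # number of layers to offload to the GPU
)
```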

Co-authored-by: andrew.khvalenski <andrew.khvalenski@behavox.com>
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-15 16:01:48 -07:00
Davis Chase
36c9fd1af7 Dev2049/docs edit0 (#4699) 2023-05-15 15:20:37 -07:00
Jinto Jose
1e467d9fc4 Jupyter Notebook Example for using Mongodb to store Chat Message History (#4436)
# Jupyter Notebook Example for using Mongodb Chat Message History

@dev2049
2023-05-15 14:33:42 -07:00
Leonid Ganeline
6060505a9d Add new links to Tutorials and YouTube pages (#4746)
- added an official LangChain YouTube channel :)
- added new tutorials and videos (only videos with enough subscriber or
view numbers)
- added a "New video" icon 

## Who can review?

@dev2049
2023-05-15 14:32:48 -07:00
Eduard van Valkenburg
47657fe01a Tweaks to the PowerBI toolkit and utility (#4442)
Fixes some bugs I found while testing with more advanced datasets and
queries. Includes using the output of PowerBI to parse the error and
give that back to the LLM.
2023-05-15 14:30:48 -07:00
mvhensbergen
e363e709cb Add source field to metadata (#4462)
This is needed if one wants to use index.query_with_sources on git files;
without a source field, index.query_with_sources fails with an
exception.
2023-05-15 14:30:12 -07:00
vinoyang
5111bec540 Make serpapi base url configurable via env (#4402)
Fixes #4328

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-15 14:25:25 -07:00
Roma
cb802edf75 [Feature] Add GraphQL Query Tool (#4409)
# Add GraphQL Query Support

This PR introduces a GraphQL API Wrapper tool that allows LLM agents to
query GraphQL databases. The tool utilizes the httpx and gql Python
packages to interact with GraphQL APIs and provides a simple interface
for running queries with LLM agents.
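
A usage sketch; the `"graphql"` tool name and `graphql_endpoint` kwarg are assumptions based on the description, and the endpoint is illustrative:

```python
from langchain.agents import load_tools

tools = load_tools(
    ["graphql"],
    graphql_endpoint="https://api.spacex.land/graphql/",
)
result = tools[0].run("{ company { name ceo } }")
```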

@vowelparrot

---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-15 14:06:12 -07:00
Eugene Yurtsev
49ce5ce1ca Only run linkcheck against docs dir on PR (#4741)
# Only run linkchecker on direct changes to docs

This is a stop-gap that will speed up PRs.

Some broken links can slip through if they're embedded in doc-strings
inside the codebase.

But we'll still be running the linkchecker on master.
2023-05-15 14:40:43 -04:00
Eugene Yurtsev
99cfe71cd0 Check poetry lock file (#4740)
# Check poetry lock file on CI

This PR checks that the lock file is up to date using poetry lock
--check.

As part of this PR, a new lock file was generated.
2023-05-15 14:38:01 -04:00
Eugene Yurtsev
09587a3201 Clean up tests for pdf parsers (#4595)
# Organize tests for pdf parsers

Clean up tests for pdf parsers, remove duplicate tests, convert to unit
tests.
2023-05-15 14:21:05 -04:00
Leonid Ganeline
70fd7cda14 docs: Concepts (#4734)
# glossary.md renamed to concepts.md and moved under Getting Started

small PR.
`Concepts` looks right to the point. It is moved under Getting Started
(typical place). Previously it was lost in the Additional Resources
section.

## Who can review?

 @hwchase17
2023-05-15 11:09:25 -07:00
Harrison Chase
8de81d34a1 bump version to 170 (#4733) 2023-05-15 09:21:00 -07:00
Harrison Chase
dd95f0892d Harrison/add top k (#4707)
Co-authored-by: blc16 <benlc@umich.edu>
2023-05-15 09:09:22 -07:00
Harrison Chase
0551594722 add async default (#4701)
a spin on
https://github.com/hwchase17/langchain/pull/4300/files#diff-4f16071d58cd34fb3ec5cd5089e9dbd6fb06574c25c76b4d573827f8a2f48e96
2023-05-15 08:57:30 -07:00
Zander Chase
97434a64c5 Add Environment Info to Run (#4691)
Store the environment info within the `extra` fields of the Run
2023-05-15 15:38:49 +00:00
Eugene Yurtsev
d3300bd799 YouTube Loader: Replace regexp with built-in parsing (#4729) 2023-05-15 08:34:41 -07:00
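
Presumably the built-in parsing is along these lines (a sketch, not the PR's exact code):

```python
from urllib.parse import parse_qs, urlparse

def extract_video_id(url: str) -> str:
    # https://www.youtube.com/watch?v=dQw4w9WgXcQ -> dQw4w9WgXcQ
    return parse_qs(urlparse(url).query)["v"][0]
```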
519 changed files with 21946 additions and 5128 deletions

View File

@@ -33,11 +33,13 @@ runs:
using: composite
steps:
- uses: actions/setup-python@v4
name: Setup python ${{ inputs.python-version }}
with:
python-version: ${{ inputs.python-version }}
- uses: actions/cache@v3
id: cache-pip
name: Cache Pip ${{ inputs.python-version }}
env:
SEGMENT_DOWNLOAD_TIMEOUT_MIN: "15"
with:
@@ -48,6 +50,16 @@ runs:
- run: pipx install poetry==${{ inputs.poetry-version }} --python python${{ inputs.python-version }}
shell: bash
- name: Check Poetry File
shell: bash
run: |
poetry check
- name: Check lock file
shell: bash
run: |
poetry lock --check
- uses: actions/cache@v3
id: cache-poetry
env:

View File

@@ -4,6 +4,8 @@ on:
push:
branches: [master]
pull_request:
paths:
- 'docs/**'
env:
POETRY_VERSION: "1.4.2"

View File

@@ -4,6 +4,7 @@ on:
push:
branches: [master]
pull_request:
workflow_dispatch:
env:
POETRY_VERSION: "1.4.2"

View File

@@ -35,13 +35,13 @@ lint lint_diff:

TEST_FILE ?= tests/unit_tests/

test:
-	poetry run pytest $(TEST_FILE)
+	poetry run pytest --disable-socket --allow-unix-socket $(TEST_FILE)

-tests:
-	poetry run pytest $(TEST_FILE)
+tests:
+	poetry run pytest --disable-socket --allow-unix-socket $(TEST_FILE)

extended_tests:
-	poetry run pytest --only-extended tests/unit_tests
+	poetry run pytest --disable-socket --allow-unix-socket --only-extended tests/unit_tests

test_watch:
	poetry run ptw --now . -- tests/unit_tests

View File

@@ -13,5 +13,5 @@ pre {
}
#my-component-root *, #headlessui-portal-root * {
-z-index: 1000000000000;
+z-index: 10000;
}

View File

@@ -30,10 +30,7 @@ document.addEventListener('DOMContentLoaded', () => {
const icon = React.createElement('p', {
style: { color: '#ffffff', fontSize: '22px',width: '48px', height: '48px', margin: '0px', padding: '0px', display: 'flex', alignItems: 'center', justifyContent: 'center', textAlign: 'center' },
}, [iconSpan1, iconSpan2]);
const mendableFloatingButton = React.createElement(
MendableFloatingButton,
{
@@ -42,6 +39,7 @@ document.addEventListener('DOMContentLoaded', () => {
anon_key: '82842b36-3ea6-49b2-9fb8-52cfc4bde6bf', // Mendable Search Public ANON key, ok to be public
messageSettings: {
openSourcesInNewTab: false,
prettySources: true // Prettify the sources displayed now
},
icon: icon,
}
@@ -52,7 +50,7 @@ document.addEventListener('DOMContentLoaded', () => {
loadScript('https://unpkg.com/react@17/umd/react.production.min.js', () => {
loadScript('https://unpkg.com/react-dom@17/umd/react-dom.production.min.js', () => {
-loadScript('https://unpkg.com/@mendable/search@0.0.93/dist/umd/mendable.min.js', initializeMendable);
+loadScript('https://unpkg.com/@mendable/search@0.0.102/dist/umd/mendable.min.js', initializeMendable);
});
});
});

View File

@@ -6,8 +6,8 @@ First, you should install tracing and set up your environment properly.
You can use either a locally hosted version of this (uses Docker) or a cloud hosted version (in closed alpha).
If you're interested in using the hosted platform, please fill out the form [here](https://forms.gle/tRCEMSeopZf6TE3b6).
-- [Locally Hosted Setup](./tracing/local_installation.md)
-- [Cloud Hosted Setup](./tracing/hosted_installation.md)
+- [Locally Hosted Setup](../tracing/local_installation.md)
+- [Cloud Hosted Setup](../tracing/hosted_installation.md)
## Tracing Walkthrough
@@ -17,32 +17,32 @@ A session is just a way to group traces together.
If you click on a session, it will take you to a page with no recorded traces that says "No Runs."
You can create a new session with the new session form.
-![](tracing/homepage.png)
+![](../tracing/homepage.png)

If we click on the `default` session, we can see that to start we have no traces stored.

-![](tracing/default_empty.png)
+![](../tracing/default_empty.png)

If we now start running chains and agents with tracing enabled, we will see data show up here.
-To do so, we can run [this notebook](tracing/agent_with_tracing.ipynb) as an example.
+To do so, we can run [this notebook](../tracing/agent_with_tracing.ipynb) as an example.

After running it, we will see an initial trace show up.

-![](tracing/first_trace.png)
+![](../tracing/first_trace.png)

From here we can explore the trace at a high level by clicking on the arrow to show nested runs.
We can keep on clicking further and further down to explore deeper and deeper.

-![](tracing/explore.png)
+![](../tracing/explore.png)

We can also click on the "Explore" button of the top level run to dive even deeper.
Here, we can see the inputs and outputs in full, as well as all the nested traces.

-![](tracing/explore_trace.png)
+![](../tracing/explore_trace.png)

We can keep on exploring each of these nested traces in more detail.
For example, here is the lowest level trace with the exact inputs/outputs to the LLM.

-![](tracing/explore_llm.png)
+![](../tracing/explore_llm.png)
## Changing Sessions

View File

@@ -0,0 +1,90 @@
# YouTube
This is a collection of `LangChain` videos on `YouTube`.
### ⛓️[Official LangChain YouTube channel](https://www.youtube.com/@LangChain)⛓️
### Introduction to LangChain with Harrison Chase, creator of LangChain
- [Building the Future with LLMs, `LangChain`, & `Pinecone`](https://youtu.be/nMniwlGyX-c) by [Pinecone](https://www.youtube.com/@pinecone-io)
- [LangChain and Weaviate with Harrison Chase and Bob van Luijt - Weaviate Podcast #36](https://youtu.be/lhby7Ql7hbk) by [Weaviate • Vector Database](https://www.youtube.com/@Weaviate)
- [LangChain Demo + Q&A with Harrison Chase](https://youtu.be/zaYTXQFR0_s?t=788) by [Full Stack Deep Learning](https://www.youtube.com/@FullStackDeepLearning)
- [LangChain Agents: Build Personal Assistants For Your Data (Q&A with Harrison Chase and Mayo Oshin)](https://youtu.be/gVkF8cwfBLI) by [Chat with data](https://www.youtube.com/@chatwithdata)
- ⛓️ [LangChain "Agents in Production" Webinar](https://youtu.be/k8GNCCs16F4) by [LangChain](https://www.youtube.com/@LangChain)
## Videos (sorted by views)
- [Building AI LLM Apps with LangChain (and more?) - LIVE STREAM](https://www.youtube.com/live/M-2Cj_2fzWI?feature=share) by [Nicholas Renotte](https://www.youtube.com/@NicholasRenotte)
- [First look - `ChatGPT` + `WolframAlpha` (`GPT-3.5` and Wolfram|Alpha via LangChain by James Weaver)](https://youtu.be/wYGbY811oMo) by [Dr Alan D. Thompson](https://www.youtube.com/@DrAlanDThompson)
- [LangChain explained - The hottest new Python framework](https://youtu.be/RoR4XJw8wIc) by [AssemblyAI](https://www.youtube.com/@AssemblyAI)
- [Chatbot with INFINITE MEMORY using `OpenAI` & `Pinecone` - `GPT-3`, `Embeddings`, `ADA`, `Vector DB`, `Semantic`](https://youtu.be/2xNzB7xq8nk) by [David Shapiro ~ AI](https://www.youtube.com/@DavidShapiroAutomator)
- [LangChain for LLMs is... basically just an Ansible playbook](https://youtu.be/X51N9C-OhlE) by [David Shapiro ~ AI](https://www.youtube.com/@DavidShapiroAutomator)
- [Build your own LLM Apps with LangChain & `GPT-Index`](https://youtu.be/-75p09zFUJY) by [1littlecoder](https://www.youtube.com/@1littlecoder)
- [`BabyAGI` - New System of Autonomous AI Agents with LangChain](https://youtu.be/lg3kJvf1kXo) by [1littlecoder](https://www.youtube.com/@1littlecoder)
- [Run `BabyAGI` with Langchain Agents (with Python Code)](https://youtu.be/WosPGHPObx8) by [1littlecoder](https://www.youtube.com/@1littlecoder)
- [How to Use Langchain With `Zapier` | Write and Send Email with GPT-3 | OpenAI API Tutorial](https://youtu.be/p9v2-xEa9A0) by [StarMorph AI](https://www.youtube.com/@starmorph)
- [Use Your Locally Stored Files To Get Response From GPT - `OpenAI` | Langchain | Python](https://youtu.be/NC1Ni9KS-rk) by [Shweta Lodha](https://www.youtube.com/@shweta-lodha)
- [`Langchain JS` | How to Use GPT-3, GPT-4 to Reference your own Data | `OpenAI Embeddings` Intro](https://youtu.be/veV2I-NEjaM) by [StarMorph AI](https://www.youtube.com/@starmorph)
- [The easiest way to work with large language models | Learn LangChain in 10min](https://youtu.be/kmbS6FDQh7c) by [Sophia Yang](https://www.youtube.com/@SophiaYangDS)
- [4 Autonomous AI Agents: “Westworld” simulation `BabyAGI`, `AutoGPT`, `Camel`, `LangChain`](https://youtu.be/yWbnH6inT_U) by [Sophia Yang](https://www.youtube.com/@SophiaYangDS)
- [AI CAN SEARCH THE INTERNET? Langchain Agents + OpenAI ChatGPT](https://youtu.be/J-GL0htqda8) by [tylerwhatsgood](https://www.youtube.com/@tylerwhatsgood)
- [Query Your Data with GPT-4 | Embeddings, Vector Databases | Langchain JS Knowledgebase](https://youtu.be/jRnUPUTkZmU) by [StarMorph AI](https://www.youtube.com/@starmorph)
- [`Weaviate` + LangChain for LLM apps presented by Erika Cardenas](https://youtu.be/7AGj4Td5Lgw) by [`Weaviate` • Vector Database](https://www.youtube.com/@Weaviate)
- [Langchain Overview — How to Use Langchain & `ChatGPT`](https://youtu.be/oYVYIq0lOtI) by [Python In Office](https://www.youtube.com/@pythoninoffice6568)
- [Custom langchain Agent & Tools with memory. Turn any `Python function` into langchain tool with Gpt 3](https://youtu.be/NIG8lXk0ULg) by [echohive](https://www.youtube.com/@echohive)
- [LangChain: Run Language Models Locally - `Hugging Face Models`](https://youtu.be/Xxxuw4_iCzw) by [Prompt Engineering](https://www.youtube.com/@engineerprompt)
- [`ChatGPT` with any `YouTube` video using langchain and `chromadb`](https://youtu.be/TQZfB2bzVwU) by [echohive](https://www.youtube.com/@echohive)
- [How to Talk to a `PDF` using LangChain and `ChatGPT`](https://youtu.be/v2i1YDtrIwk) by [Automata Learning Lab](https://www.youtube.com/@automatalearninglab)
- [Langchain Document Loaders Part 1: Unstructured Files](https://youtu.be/O5C0wfsen98) by [Merk](https://www.youtube.com/@merksworld)
- [LangChain - Prompt Templates (what all the best prompt engineers use)](https://youtu.be/1aRu8b0XNOQ) by [Nick Daigler](https://www.youtube.com/@nick_daigs)
- [LangChain. Crear aplicaciones Python impulsadas por GPT](https://youtu.be/DkW_rDndts8) by [Jesús Conde](https://www.youtube.com/@0utKast)
- [Easiest Way to Use GPT In Your Products | LangChain Basics Tutorial](https://youtu.be/fLy0VenZyGc) by [Rachel Woods](https://www.youtube.com/@therachelwoods)
- [`BabyAGI` + `GPT-4` Langchain Agent with Internet Access](https://youtu.be/wx1z_hs5P6E) by [tylerwhatsgood](https://www.youtube.com/@tylerwhatsgood)
- [Learning LLM Agents. How does it actually work? LangChain, AutoGPT & OpenAI](https://youtu.be/mb_YAABSplk) by [Arnoldas Kemeklis](https://www.youtube.com/@processusAI)
- [Get Started with LangChain in `Node.js`](https://youtu.be/Wxx1KUWJFv4) by [Developers Digest](https://www.youtube.com/@DevelopersDigest)
- [LangChain + `OpenAI` tutorial: Building a Q&A system w/ own text data](https://youtu.be/DYOU_Z0hAwo) by [Samuel Chan](https://www.youtube.com/@SamuelChan)
- [Langchain + `Zapier` Agent](https://youtu.be/yribLAb-pxA) by [Merk](https://www.youtube.com/@merksworld)
- [Connecting the Internet with `ChatGPT` (LLMs) using Langchain And Answers Your Questions](https://youtu.be/9Y0TBC63yZg) by [Kamalraj M M](https://www.youtube.com/@insightbuilder)
- [Build More Powerful LLM Applications for Businesss with LangChain (Beginners Guide)](https://youtu.be/sp3-WLKEcBg) by [No Code Blackbox](https://www.youtube.com/@nocodeblackbox)
- ⛓️ [LangFlow LLM Agent Demo for 🦜🔗LangChain](https://youtu.be/zJxDHaWt-6o) by [Cobus Greyling](https://www.youtube.com/@CobusGreylingZA)
- ⛓️ [Chatbot Factory: Streamline Python Chatbot Creation with LLMs and Langchain](https://youtu.be/eYer3uzrcuM) by [Finxter](https://www.youtube.com/@CobusGreylingZA)
- ⛓️ [LangChain Tutorial - ChatGPT mit eigenen Daten](https://youtu.be/0XDLyY90E2c) by [Coding Crashkurse](https://www.youtube.com/@codingcrashkurse6429)
- ⛓️ [Chat with a `CSV` | LangChain Agents Tutorial (Beginners)](https://youtu.be/tjeti5vXWOU) by [GoDataProf](https://www.youtube.com/@godataprof)
- ⛓️ [Introdução ao Langchain - #Cortes - Live DataHackers](https://youtu.be/fw8y5VRei5Y) by [Prof. João Gabriel Lima](https://www.youtube.com/@profjoaogabriellima)
- ⛓️ [LangChain: Level up `ChatGPT` !? | LangChain Tutorial Part 1](https://youtu.be/vxUGx8aZpDE) by [Code Affinity](https://www.youtube.com/@codeaffinitydev)
- ⛓️ [KI schreibt krasses Youtube Skript 😲😳 | LangChain Tutorial Deutsch](https://youtu.be/QpTiXyK1jus) by [SimpleKI](https://www.youtube.com/@simpleki)
- ⛓️ [Chat with Audio: Langchain, `Chroma DB`, OpenAI, and `Assembly AI`](https://youtu.be/Kjy7cx1r75g) by [AI Anytime](https://www.youtube.com/@AIAnytime)
- ⛓️ [QA over documents with Auto vector index selection with Langchain router chains](https://youtu.be/9G05qybShv8) by [echohive](https://www.youtube.com/@echohive)
- ⛓️ [Build your own custom LLM application with `Bubble.io` & Langchain (No Code & Beginner friendly)](https://youtu.be/O7NhQGu1m6c) by [No Code Blackbox](https://www.youtube.com/@nocodeblackbox)
- ⛓️ [Simple App to Question Your Docs: Leveraging `Streamlit`, `Hugging Face Spaces`, LangChain, and `Claude`!](https://youtu.be/X4YbNECRr7o) by [Chris Alexiuk](https://www.youtube.com/@chrisalexiuk)
- ⛓️ [LANGCHAIN AI- `ConstitutionalChainAI` + Databutton AI ASSISTANT Web App](https://youtu.be/5zIU6_rdJCU) by [Avra](https://www.youtube.com/@Avra_b)
- ⛓️ [LANGCHAIN AI AUTONOMOUS AGENT WEB APP - 👶 `BABY AGI` 🤖 with EMAIL AUTOMATION using `DATABUTTON`](https://youtu.be/cvAwOGfeHgw) by [Avra](https://www.youtube.com/@Avra_b)
- ⛓️ [The Future of Data Analysis: Using A.I. Models in Data Analysis (LangChain)](https://youtu.be/v_LIcVyg5dk) by [Absent Data](https://www.youtube.com/@absentdata)
- ⛓️ [Memory in LangChain | Deep dive (python)](https://youtu.be/70lqvTFh_Yg) by [Eden Marco](https://www.youtube.com/@EdenMarco)
- ⛓️ [9 LangChain UseCases | Beginner's Guide | 2023](https://youtu.be/zS8_qosHNMw) by [Data Science Basics](https://www.youtube.com/@datasciencebasics)
- ⛓️ [Use Large Language Models in Jupyter Notebook | LangChain | Agents & Indexes](https://youtu.be/JSe11L1a_QQ) by [Abhinaw Tiwari](https://www.youtube.com/@AbhinawTiwariAT)
- ⛓️ [How to Talk to Your Langchain Agent | `11 Labs` + `Whisper`](https://youtu.be/N4k459Zw2PU) by [VRSEN](https://www.youtube.com/@vrsen)
- ⛓️ [LangChain Deep Dive: 5 FUN AI App Ideas To Build Quickly and Easily](https://youtu.be/mPYEPzLkeks) by [James NoCode](https://www.youtube.com/@jamesnocode)
- ⛓️ [BEST OPEN Alternative to OPENAI's EMBEDDINGs for Retrieval QA: LangChain](https://youtu.be/ogEalPMUCSY) by [Prompt Engineering](https://www.youtube.com/@engineerprompt)
- ⛓️ [LangChain 101: Models](https://youtu.be/T6c_XsyaNSQ) by [Mckay Wrigley](https://www.youtube.com/@realmckaywrigley)
- ⛓️ [LangChain with JavaScript Tutorial #1 | Setup & Using LLMs](https://youtu.be/W3AoeMrg27o) by [Leon van Zyl](https://www.youtube.com/@leonvanzyl)
- ⛓️ [LangChain Overview & Tutorial for Beginners: Build Powerful AI Apps Quickly & Easily (ZERO CODE)](https://youtu.be/iI84yym473Q) by [James NoCode](https://www.youtube.com/@jamesnocode)
- ⛓️ [LangChain In Action: Real-World Use Case With Step-by-Step Tutorial](https://youtu.be/UO699Szp82M) by [Rabbitmetrics](https://www.youtube.com/@rabbitmetrics)
- ⛓️ [Summarizing and Querying Multiple Papers with LangChain](https://youtu.be/p_MQRWH5Y6k) by [Automata Learning Lab](https://www.youtube.com/@automatalearninglab)
- ⛓️ [Using Langchain (and `Replit`) through `Tana`, ask `Google`/`Wikipedia`/`Wolfram Alpha` to fill out a table](https://youtu.be/Webau9lEzoI) by [Stian Håklev](https://www.youtube.com/@StianHaklev)
- ⛓️ [Langchain PDF App (GUI) | Create a ChatGPT For Your `PDF` in Python](https://youtu.be/wUAUdEw5oxM) by [Alejandro AO - Software & Ai](https://www.youtube.com/@alejandro_ao)
- ⛓️ [Auto-GPT with LangChain 🔥 | Create Your Own Personal AI Assistant](https://youtu.be/imDfPmMKEjM) by [Data Science Basics](https://www.youtube.com/@datasciencebasics)
- ⛓️ [Create Your OWN Slack AI Assistant with Python & LangChain](https://youtu.be/3jFXRNn2Bu8) by [Dave Ebbelaar](https://www.youtube.com/@daveebbelaar)
- ⛓️ [How to Create LOCAL Chatbots with GPT4All and LangChain [Full Guide]](https://youtu.be/4p1Fojur8Zw) by [Liam Ottley](https://www.youtube.com/@LiamOttley)
- ⛓️ [Build a `Multilingual PDF` Search App with LangChain, `Cohere` and `Bubble`](https://youtu.be/hOrtuumOrv8) by [Menlo Park Lab](https://www.youtube.com/@menloparklab)
- ⛓️ [Building a LangChain Agent (code-free!) Using `Bubble` and `Flowise`](https://youtu.be/jDJIIVWTZDE) by [Menlo Park Lab](https://www.youtube.com/@menloparklab)
- ⛓️ [Build a LangChain-based Semantic PDF Search App with No-Code Tools Bubble and Flowise](https://youtu.be/s33v5cIeqA4) by [Menlo Park Lab](https://www.youtube.com/@menloparklab)
- ⛓️ [LangChain Memory Tutorial | Building a ChatGPT Clone in Python](https://youtu.be/Cwq91cj2Pnc) by [Alejandro AO - Software & Ai](https://www.youtube.com/@alejandro_ao)
- ⛓️ [ChatGPT For Your DATA | Chat with Multiple Documents Using LangChain](https://youtu.be/TeDgIDqQmzs) by [Data Science Basics](https://www.youtube.com/@datasciencebasics)
- ⛓️ [`Llama Index`: Chat with Documentation using URL Loader](https://youtu.be/XJRoDEctAwA) by [Merk](https://www.youtube.com/@merksworld)
- ⛓️ [Using OpenAI, LangChain, and `Gradio` to Build Custom GenAI Applications](https://youtu.be/1MsmqMg3yUc) by [David Hundley](https://www.youtube.com/@dkhundley)
---------------------
⛓ icon marks a new video [last update 2023-05-15]

docs/dependents.md (new file, 192 lines added)
View File

@@ -0,0 +1,192 @@
# Dependents
Dependents stats for `hwchase17/langchain`
[![](https://img.shields.io/static/v1?label=Used%20by&message=5152&color=informational&logo=slickpic)](https://github.com/hwchase17/langchain/network/dependents)
[![](https://img.shields.io/static/v1?label=Used%20by%20(public)&message=172&color=informational&logo=slickpic)](https://github.com/hwchase17/langchain/network/dependents)
[![](https://img.shields.io/static/v1?label=Used%20by%20(private)&message=4980&color=informational&logo=slickpic)](https://github.com/hwchase17/langchain/network/dependents)
[![](https://img.shields.io/static/v1?label=Used%20by%20(stars)&message=17239&color=informational&logo=slickpic)](https://github.com/hwchase17/langchain/network/dependents)
[update: 2023-05-17; only dependent repositories with Stars > 100]
| Repository | Stars |
| :-------- | -----: |
|[openai/openai-cookbook](https://github.com/openai/openai-cookbook) | 35401 |
|[LAION-AI/Open-Assistant](https://github.com/LAION-AI/Open-Assistant) | 32861 |
|[microsoft/TaskMatrix](https://github.com/microsoft/TaskMatrix) | 32766 |
|[hpcaitech/ColossalAI](https://github.com/hpcaitech/ColossalAI) | 29560 |
|[reworkd/AgentGPT](https://github.com/reworkd/AgentGPT) | 22315 |
|[imartinez/privateGPT](https://github.com/imartinez/privateGPT) | 17474 |
|[openai/chatgpt-retrieval-plugin](https://github.com/openai/chatgpt-retrieval-plugin) | 16923 |
|[mindsdb/mindsdb](https://github.com/mindsdb/mindsdb) | 16112 |
|[jerryjliu/llama_index](https://github.com/jerryjliu/llama_index) | 15407 |
|[mlflow/mlflow](https://github.com/mlflow/mlflow) | 14345 |
|[GaiZhenbiao/ChuanhuChatGPT](https://github.com/GaiZhenbiao/ChuanhuChatGPT) | 10372 |
|[databrickslabs/dolly](https://github.com/databrickslabs/dolly) | 9919 |
|[AIGC-Audio/AudioGPT](https://github.com/AIGC-Audio/AudioGPT) | 8177 |
|[logspace-ai/langflow](https://github.com/logspace-ai/langflow) | 6807 |
|[imClumsyPanda/langchain-ChatGLM](https://github.com/imClumsyPanda/langchain-ChatGLM) | 6087 |
|[arc53/DocsGPT](https://github.com/arc53/DocsGPT) | 5292 |
|[e2b-dev/e2b](https://github.com/e2b-dev/e2b) | 4622 |
|[nsarrazin/serge](https://github.com/nsarrazin/serge) | 4076 |
|[madawei2699/myGPTReader](https://github.com/madawei2699/myGPTReader) | 3952 |
|[zauberzeug/nicegui](https://github.com/zauberzeug/nicegui) | 3952 |
|[go-skynet/LocalAI](https://github.com/go-skynet/LocalAI) | 3762 |
|[GreyDGL/PentestGPT](https://github.com/GreyDGL/PentestGPT) | 3388 |
|[mmabrouk/chatgpt-wrapper](https://github.com/mmabrouk/chatgpt-wrapper) | 3243 |
|[zilliztech/GPTCache](https://github.com/zilliztech/GPTCache) | 3189 |
|[wenda-LLM/wenda](https://github.com/wenda-LLM/wenda) | 3050 |
|[marqo-ai/marqo](https://github.com/marqo-ai/marqo) | 2930 |
|[gkamradt/langchain-tutorials](https://github.com/gkamradt/langchain-tutorials) | 2710 |
|[PrefectHQ/marvin](https://github.com/PrefectHQ/marvin) | 2545 |
|[project-baize/baize-chatbot](https://github.com/project-baize/baize-chatbot) | 2479 |
|[whitead/paper-qa](https://github.com/whitead/paper-qa) | 2399 |
|[langgenius/dify](https://github.com/langgenius/dify) | 2344 |
|[GerevAI/gerev](https://github.com/GerevAI/gerev) | 2283 |
|[hwchase17/chat-langchain](https://github.com/hwchase17/chat-langchain) | 2266 |
|[guangzhengli/ChatFiles](https://github.com/guangzhengli/ChatFiles) | 1903 |
|[Azure-Samples/azure-search-openai-demo](https://github.com/Azure-Samples/azure-search-openai-demo) | 1884 |
|[OpenBMB/BMTools](https://github.com/OpenBMB/BMTools) | 1860 |
|[Farama-Foundation/PettingZoo](https://github.com/Farama-Foundation/PettingZoo) | 1813 |
|[OpenGVLab/Ask-Anything](https://github.com/OpenGVLab/Ask-Anything) | 1571 |
|[IntelligenzaArtificiale/Free-Auto-GPT](https://github.com/IntelligenzaArtificiale/Free-Auto-GPT) | 1480 |
|[hwchase17/notion-qa](https://github.com/hwchase17/notion-qa) | 1464 |
|[NVIDIA/NeMo-Guardrails](https://github.com/NVIDIA/NeMo-Guardrails) | 1419 |
|[Unstructured-IO/unstructured](https://github.com/Unstructured-IO/unstructured) | 1410 |
|[Kav-K/GPTDiscord](https://github.com/Kav-K/GPTDiscord) | 1363 |
|[paulpierre/RasaGPT](https://github.com/paulpierre/RasaGPT) | 1344 |
|[StanGirard/quivr](https://github.com/StanGirard/quivr) | 1330 |
|[lunasec-io/lunasec](https://github.com/lunasec-io/lunasec) | 1318 |
|[vocodedev/vocode-python](https://github.com/vocodedev/vocode-python) | 1286 |
|[agiresearch/OpenAGI](https://github.com/agiresearch/OpenAGI) | 1156 |
|[h2oai/h2ogpt](https://github.com/h2oai/h2ogpt) | 1141 |
|[jina-ai/thinkgpt](https://github.com/jina-ai/thinkgpt) | 1106 |
|[yanqiangmiffy/Chinese-LangChain](https://github.com/yanqiangmiffy/Chinese-LangChain) | 1072 |
|[ttengwang/Caption-Anything](https://github.com/ttengwang/Caption-Anything) | 1064 |
|[jina-ai/dev-gpt](https://github.com/jina-ai/dev-gpt) | 1057 |
|[juncongmoo/chatllama](https://github.com/juncongmoo/chatllama) | 1003 |
|[greshake/llm-security](https://github.com/greshake/llm-security) | 1002 |
|[visual-openllm/visual-openllm](https://github.com/visual-openllm/visual-openllm) | 957 |
|[richardyc/Chrome-GPT](https://github.com/richardyc/Chrome-GPT) | 918 |
|[irgolic/AutoPR](https://github.com/irgolic/AutoPR) | 886 |
|[mmz-001/knowledge_gpt](https://github.com/mmz-001/knowledge_gpt) | 867 |
|[thomas-yanxin/LangChain-ChatGLM-Webui](https://github.com/thomas-yanxin/LangChain-ChatGLM-Webui) | 850 |
|[microsoft/X-Decoder](https://github.com/microsoft/X-Decoder) | 837 |
|[peterw/Chat-with-Github-Repo](https://github.com/peterw/Chat-with-Github-Repo) | 826 |
|[cirediatpl/FigmaChain](https://github.com/cirediatpl/FigmaChain) | 782 |
|[hashintel/hash](https://github.com/hashintel/hash) | 778 |
|[seanpixel/Teenage-AGI](https://github.com/seanpixel/Teenage-AGI) | 773 |
|[jina-ai/langchain-serve](https://github.com/jina-ai/langchain-serve) | 738 |
|[corca-ai/EVAL](https://github.com/corca-ai/EVAL) | 737 |
|[ai-sidekick/sidekick](https://github.com/ai-sidekick/sidekick) | 717 |
|[rlancemartin/auto-evaluator](https://github.com/rlancemartin/auto-evaluator) | 703 |
|[poe-platform/api-bot-tutorial](https://github.com/poe-platform/api-bot-tutorial) | 689 |
|[SamurAIGPT/Camel-AutoGPT](https://github.com/SamurAIGPT/Camel-AutoGPT) | 666 |
|[eyurtsev/kor](https://github.com/eyurtsev/kor) | 608 |
|[run-llama/llama-lab](https://github.com/run-llama/llama-lab) | 559 |
|[namuan/dr-doc-search](https://github.com/namuan/dr-doc-search) | 544 |
|[pieroit/cheshire-cat](https://github.com/pieroit/cheshire-cat) | 520 |
|[griptape-ai/griptape](https://github.com/griptape-ai/griptape) | 514 |
|[getmetal/motorhead](https://github.com/getmetal/motorhead) | 481 |
|[hwchase17/chat-your-data](https://github.com/hwchase17/chat-your-data) | 462 |
|[langchain-ai/langchain-aiplugin](https://github.com/langchain-ai/langchain-aiplugin) | 452 |
|[jina-ai/agentchain](https://github.com/jina-ai/agentchain) | 439 |
|[SamurAIGPT/ChatGPT-Developer-Plugins](https://github.com/SamurAIGPT/ChatGPT-Developer-Plugins) | 437 |
|[alexanderatallah/window.ai](https://github.com/alexanderatallah/window.ai) | 433 |
|[michaelthwan/searchGPT](https://github.com/michaelthwan/searchGPT) | 427 |
|[mpaepper/content-chatbot](https://github.com/mpaepper/content-chatbot) | 425 |
|[mckaywrigley/repo-chat](https://github.com/mckaywrigley/repo-chat) | 422 |
|[whyiyhw/chatgpt-wechat](https://github.com/whyiyhw/chatgpt-wechat) | 421 |
|[freddyaboulton/gradio-tools](https://github.com/freddyaboulton/gradio-tools) | 407 |
|[jonra1993/fastapi-alembic-sqlmodel-async](https://github.com/jonra1993/fastapi-alembic-sqlmodel-async) | 395 |
|[yeagerai/yeagerai-agent](https://github.com/yeagerai/yeagerai-agent) | 383 |
|[akshata29/chatpdf](https://github.com/akshata29/chatpdf) | 374 |
|[OpenGVLab/InternGPT](https://github.com/OpenGVLab/InternGPT) | 368 |
|[ruoccofabrizio/azure-open-ai-embeddings-qna](https://github.com/ruoccofabrizio/azure-open-ai-embeddings-qna) | 358 |
|[101dotxyz/GPTeam](https://github.com/101dotxyz/GPTeam) | 357 |
|[mtenenholtz/chat-twitter](https://github.com/mtenenholtz/chat-twitter) | 354 |
|[amosjyng/langchain-visualizer](https://github.com/amosjyng/langchain-visualizer) | 343 |
|[msoedov/langcorn](https://github.com/msoedov/langcorn) | 334 |
|[showlab/VLog](https://github.com/showlab/VLog) | 330 |
|[continuum-llms/chatgpt-memory](https://github.com/continuum-llms/chatgpt-memory) | 324 |
|[steamship-core/steamship-langchain](https://github.com/steamship-core/steamship-langchain) | 323 |
|[daodao97/chatdoc](https://github.com/daodao97/chatdoc) | 320 |
|[xuwenhao/geektime-ai-course](https://github.com/xuwenhao/geektime-ai-course) | 308 |
|[StevenGrove/GPT4Tools](https://github.com/StevenGrove/GPT4Tools) | 301 |
|[logan-markewich/llama_index_starter_pack](https://github.com/logan-markewich/llama_index_starter_pack) | 300 |
|[andylokandy/gpt-4-search](https://github.com/andylokandy/gpt-4-search) | 299 |
|[Anil-matcha/ChatPDF](https://github.com/Anil-matcha/ChatPDF) | 287 |
|[itamargol/openai](https://github.com/itamargol/openai) | 273 |
|[BlackHC/llm-strategy](https://github.com/BlackHC/llm-strategy) | 267 |
|[momegas/megabots](https://github.com/momegas/megabots) | 259 |
|[bborn/howdoi.ai](https://github.com/bborn/howdoi.ai) | 238 |
|[Cheems-Seminar/grounded-segment-any-parts](https://github.com/Cheems-Seminar/grounded-segment-any-parts) | 232 |
|[ur-whitelab/exmol](https://github.com/ur-whitelab/exmol) | 227 |
|[sullivan-sean/chat-langchainjs](https://github.com/sullivan-sean/chat-langchainjs) | 227 |
|[explosion/spacy-llm](https://github.com/explosion/spacy-llm) | 226 |
|[recalign/RecAlign](https://github.com/recalign/RecAlign) | 218 |
|[jupyterlab/jupyter-ai](https://github.com/jupyterlab/jupyter-ai) | 218 |
|[alvarosevilla95/autolang](https://github.com/alvarosevilla95/autolang) | 215 |
|[conceptofmind/toolformer](https://github.com/conceptofmind/toolformer) | 213 |
|[MagnivOrg/prompt-layer-library](https://github.com/MagnivOrg/prompt-layer-library) | 209 |
|[JohnSnowLabs/nlptest](https://github.com/JohnSnowLabs/nlptest) | 208 |
|[airobotlab/KoChatGPT](https://github.com/airobotlab/KoChatGPT) | 197 |
|[langchain-ai/auto-evaluator](https://github.com/langchain-ai/auto-evaluator) | 195 |
|[yvann-hub/Robby-chatbot](https://github.com/yvann-hub/Robby-chatbot) | 195 |
|[alejandro-ao/langchain-ask-pdf](https://github.com/alejandro-ao/langchain-ask-pdf) | 192 |
|[daveebbelaar/langchain-experiments](https://github.com/daveebbelaar/langchain-experiments) | 189 |
|[NimbleBoxAI/ChainFury](https://github.com/NimbleBoxAI/ChainFury) | 187 |
|[kaleido-lab/dolphin](https://github.com/kaleido-lab/dolphin) | 184 |
|[Anil-matcha/Website-to-Chatbot](https://github.com/Anil-matcha/Website-to-Chatbot) | 183 |
|[plchld/InsightFlow](https://github.com/plchld/InsightFlow) | 180 |
|[OpenBMB/AgentVerse](https://github.com/OpenBMB/AgentVerse) | 166 |
|[benthecoder/ClassGPT](https://github.com/benthecoder/ClassGPT) | 166 |
|[jbrukh/gpt-jargon](https://github.com/jbrukh/gpt-jargon) | 161 |
|[hardbyte/qabot](https://github.com/hardbyte/qabot) | 160 |
|[shaman-ai/agent-actors](https://github.com/shaman-ai/agent-actors) | 153 |
|[radi-cho/datasetGPT](https://github.com/radi-cho/datasetGPT) | 153 |
|[poe-platform/poe-protocol](https://github.com/poe-platform/poe-protocol) | 152 |
|[paolorechia/learn-langchain](https://github.com/paolorechia/learn-langchain) | 149 |
|[ajndkr/lanarky](https://github.com/ajndkr/lanarky) | 149 |
|[fengyuli-dev/multimedia-gpt](https://github.com/fengyuli-dev/multimedia-gpt) | 147 |
|[yasyf/compress-gpt](https://github.com/yasyf/compress-gpt) | 144 |
|[homanp/superagent](https://github.com/homanp/superagent) | 143 |
|[realminchoi/babyagi-ui](https://github.com/realminchoi/babyagi-ui) | 141 |
|[ethanyanjiali/minChatGPT](https://github.com/ethanyanjiali/minChatGPT) | 141 |
|[ccurme/yolopandas](https://github.com/ccurme/yolopandas) | 139 |
|[hwchase17/langchain-streamlit-template](https://github.com/hwchase17/langchain-streamlit-template) | 138 |
|[Jaseci-Labs/jaseci](https://github.com/Jaseci-Labs/jaseci) | 136 |
|[hirokidaichi/wanna](https://github.com/hirokidaichi/wanna) | 135 |
|[Haste171/langchain-chatbot](https://github.com/Haste171/langchain-chatbot) | 134 |
|[jmpaz/promptlib](https://github.com/jmpaz/promptlib) | 130 |
|[Klingefjord/chatgpt-telegram](https://github.com/Klingefjord/chatgpt-telegram) | 130 |
|[filip-michalsky/SalesGPT](https://github.com/filip-michalsky/SalesGPT) | 128 |
|[handrew/browserpilot](https://github.com/handrew/browserpilot) | 128 |
|[shauryr/S2QA](https://github.com/shauryr/S2QA) | 127 |
|[steamship-core/vercel-examples](https://github.com/steamship-core/vercel-examples) | 127 |
|[yasyf/summ](https://github.com/yasyf/summ) | 127 |
|[gia-guar/JARVIS-ChatGPT](https://github.com/gia-guar/JARVIS-ChatGPT) | 126 |
|[jerlendds/osintbuddy](https://github.com/jerlendds/osintbuddy) | 125 |
|[ibiscp/LLM-IMDB](https://github.com/ibiscp/LLM-IMDB) | 124 |
|[Teahouse-Studios/akari-bot](https://github.com/Teahouse-Studios/akari-bot) | 124 |
|[hwchase17/chroma-langchain](https://github.com/hwchase17/chroma-langchain) | 124 |
|[menloparklab/langchain-cohere-qdrant-doc-retrieval](https://github.com/menloparklab/langchain-cohere-qdrant-doc-retrieval) | 123 |
|[peterw/StoryStorm](https://github.com/peterw/StoryStorm) | 123 |
|[chakkaradeep/pyCodeAGI](https://github.com/chakkaradeep/pyCodeAGI) | 123 |
|[petehunt/langchain-github-bot](https://github.com/petehunt/langchain-github-bot) | 115 |
|[su77ungr/CASALIOY](https://github.com/su77ungr/CASALIOY) | 113 |
|[eunomia-bpf/GPTtrace](https://github.com/eunomia-bpf/GPTtrace) | 113 |
|[zenml-io/zenml-projects](https://github.com/zenml-io/zenml-projects) | 112 |
|[pablomarin/GPT-Azure-Search-Engine](https://github.com/pablomarin/GPT-Azure-Search-Engine) | 111 |
|[shamspias/customizable-gpt-chatbot](https://github.com/shamspias/customizable-gpt-chatbot) | 109 |
|[WongSaang/chatgpt-ui-server](https://github.com/WongSaang/chatgpt-ui-server) | 108 |
|[davila7/file-gpt](https://github.com/davila7/file-gpt) | 104 |
|[enhancedocs/enhancedocs](https://github.com/enhancedocs/enhancedocs) | 102 |
|[aurelio-labs/arxiv-bot](https://github.com/aurelio-labs/arxiv-bot) | 101 |
_Generated by [github-dependents-info](https://github.com/nvuillam/github-dependents-info)_
[github-dependents-info --repo hwchase17/langchain --markdownfile dependents.md --minstars 100 --sort stars]

View File

@@ -29,6 +29,10 @@ It implements a Question Answering app and contains instructions for deploying t
A minimal example on how to run LangChain on Vercel using Flask.
## [FastAPI + Vercel](https://github.com/msoedov/langcorn)
A minimal example on how to run LangChain on Vercel using FastAPI and LangCorn/Uvicorn.
## [Kinsta](https://github.com/kinsta/hello-world-langchain)
A minimal example on how to deploy LangChain to [Kinsta](https://kinsta.com) using Flask.

View File

@@ -1,365 +0,0 @@
LangChain Gallery
=================
Lots of people have built some pretty awesome stuff with LangChain.
This is a collection of our favorites.
If you see any other demos that you think we should highlight, be sure to let us know!
Open Source
-----------
.. panels::
:body: text-center
---
.. link-button:: https://github.com/bborn/howdoi.ai
:type: url
:text: HowDoI.ai
:classes: stretched-link btn-lg
+++
This is an experiment in building a large-language-model-backed chatbot. It can hold a conversation, remember previous comments/questions,
and answer all types of queries (history, web search, movie data, weather, news, and more).
---
.. link-button:: https://colab.research.google.com/drive/1sKSTjt9cPstl_WMZ86JsgEqFG-aSAwkn?usp=sharing
:type: url
:text: YouTube Transcription QA with Sources
:classes: stretched-link btn-lg
+++
An end-to-end example of doing question answering on YouTube transcripts, returning the timestamps as sources to legitimize the answer.
---
.. link-button:: https://github.com/normandmickey/MrsStax
:type: url
:text: QA Slack Bot
:classes: stretched-link btn-lg
+++
This application is a Slack Bot that uses LangChain and OpenAI's GPT3 language model to provide domain-specific answers. You provide the documents.
---
.. link-button:: https://github.com/OpenBioLink/ThoughtSource
:type: url
:text: ThoughtSource
:classes: stretched-link btn-lg
+++
A central, open resource and community around data and tools related to chain-of-thought reasoning in large language models.
---
.. link-button:: https://github.com/blackhc/llm-strategy
:type: url
:text: LLM Strategy
:classes: stretched-link btn-lg
+++
This Python package adds a decorator llm_strategy that connects to an LLM (such as OpenAI's GPT-3) and uses the LLM to "implement" abstract methods in interface classes. It does this by forwarding requests to the LLM and converting the responses back to Python data using Python's @dataclasses.
---
.. link-button:: https://github.com/JohnNay/llm-lobbyist
:type: url
:text: Zero-Shot Corporate Lobbyist
:classes: stretched-link btn-lg
+++
A notebook showing how to use GPT to help with the work of a corporate lobbyist.
---
.. link-button:: https://dagster.io/blog/chatgpt-langchain
:type: url
:text: Dagster Documentation ChatBot
:classes: stretched-link btn-lg
+++
Build a GitHub support bot with GPT3, LangChain, and Python.
---
.. link-button:: https://github.com/venuv/langchain_semantic_search
:type: url
:text: Google Folder Semantic Search
:classes: stretched-link btn-lg
+++
A jupyter notebook demonstrating how you could create a semantic search engine on documents in one of your Google Folders.
---
.. link-button:: https://huggingface.co/spaces/team7/talk_with_wind
:type: url
:text: Talk With Wind
:classes: stretched-link btn-lg
+++
Record sounds of anything (birds, wind, fire, train station) and chat with it.
---
.. link-button:: https://huggingface.co/spaces/JavaFXpert/Chat-GPT-LangChain
:type: url
:text: ChatGPT LangChain
:classes: stretched-link btn-lg
+++
This simple application demonstrates a conversational agent implemented with OpenAI GPT-3.5 and LangChain. When necessary, it leverages tools for complex math, searching the internet, and accessing news and weather.
---
.. link-button:: https://huggingface.co/spaces/JavaFXpert/gpt-math-techniques
:type: url
:text: GPT Math Techniques
:classes: stretched-link btn-lg
+++
A Hugging Face spaces project showing off the benefits of using PAL for math problems.
---
.. link-button:: https://colab.research.google.com/drive/1xt2IsFPGYMEQdoJFNgWNAjWGxa60VXdV
:type: url
:text: GPT Political Compass
:classes: stretched-link btn-lg
+++
Measure the political compass of GPT.
---
.. link-button:: https://github.com/hwchase17/notion-qa
:type: url
:text: Notion Database Question-Answering Bot
:classes: stretched-link btn-lg
+++
Open source GitHub project shows how to use LangChain to create a chatbot that can answer questions about an arbitrary Notion database.
---
.. link-button:: https://github.com/jerryjliu/llama_index
:type: url
:text: LlamaIndex
:classes: stretched-link btn-lg
+++
LlamaIndex (formerly GPT Index) is a project consisting of a set of data structures that are created using GPT-3 and can be traversed using GPT-3 in order to answer queries.
---
.. link-button:: https://github.com/JavaFXpert/llm-grovers-search-party
:type: url
:text: Grover's Algorithm
:classes: stretched-link btn-lg
+++
Leveraging Qiskit, OpenAI and LangChain to demonstrate Grover's algorithm.
---
.. link-button:: https://huggingface.co/spaces/rituthombre/QNim
:type: url
:text: QNimGPT
:classes: stretched-link btn-lg
+++
A chat UI to play Nim, where a player can select an opponent, either a quantum computer or an AI.
---
.. link-button:: https://colab.research.google.com/drive/19WTIWC3prw5LDMHmRMvqNV2loD9FHls6?usp=sharing
:type: url
:text: ReAct TextWorld
:classes: stretched-link btn-lg
+++
Leveraging the ReActTextWorldAgent to play TextWorld with an LLM!
---
.. link-button:: https://github.com/jagilley/fact-checker
:type: url
:text: Fact Checker
:classes: stretched-link btn-lg
+++
This repo is a simple demonstration of using LangChain to do fact-checking with prompt chaining.
---
.. link-button:: https://github.com/arc53/docsgpt
:type: url
:text: DocsGPT
:classes: stretched-link btn-lg
+++
Answer questions about the documentation of any project.
---
.. link-button:: https://github.com/akshata29/chatpdf
:type: url
:text: Chat & Ask your data
:classes: stretched-link btn-lg
+++
This sample demonstrates a few approaches for creating ChatGPT-like experiences over your own data. It uses OpenAI / Azure OpenAI Service to access the ChatGPT model (gpt-35-turbo and gpt3), and vector store (Pinecone, Redis and others) or Azure cognitive search for data indexing and retrieval.
Misc. Colab Notebooks
~~~~~~~~~~~~~~~~~~~~~
.. panels::
:body: text-center
---
.. link-button:: https://colab.research.google.com/drive/1AAyEdTz-Z6ShKvewbt1ZHUICqak0MiwR?usp=sharing
:type: url
:text: Wolfram Alpha in Conversational Agent
:classes: stretched-link btn-lg
+++
Give ChatGPT a WolframAlpha neural implant.
---
.. link-button:: https://colab.research.google.com/drive/1UsCLcPy8q5PMNQ5ytgrAAAHa124dzLJg?usp=sharing
:type: url
:text: Tool Updates in Agents
:classes: stretched-link btn-lg
+++
Agent improvements (6th Jan 2023)
---
.. link-button:: https://colab.research.google.com/drive/1UsCLcPy8q5PMNQ5ytgrAAAHa124dzLJg?usp=sharing
:type: url
:text: Conversational Agent with Tools (Langchain AGI)
:classes: stretched-link btn-lg
+++
Langchain AGI (23rd Dec 2022)
Proprietary
-----------
.. panels::
:body: text-center
---
.. link-button:: https://twitter.com/sjwhitmore/status/1580593217153531908?s=20&t=neQvtZZTlp623U3LZwz3bQ
:type: url
:text: Daimon
:classes: stretched-link btn-lg
+++
A chat-based AI personal assistant with long-term memory about you.
---
.. link-button:: https://anysummary.app
:type: url
:text: Summarize any file with AI
:classes: stretched-link btn-lg
+++
Quickly summarize not only long docs and interview audio or video files, but also entire websites and YouTube videos. Share or download your generated summaries to collaborate with others, or revisit them at any time! Bonus: `@anysummary <https://twitter.com/anysummary>`_ on Twitter will also summarize any thread it is tagged in.
---
.. link-button:: https://twitter.com/dory111111/status/1608406234646052870?s=20&t=XYlrbKM0ornJsrtGa0br-g
:type: url
:text: AI Assisted SQL Query Generator
:classes: stretched-link btn-lg
+++
An app to write SQL using natural language and execute it against a real DB.
---
.. link-button:: https://twitter.com/krrish_dh/status/1581028925618106368?s=20&t=neQvtZZTlp623U3LZwz3bQ
:type: url
:text: Clerkie
:classes: stretched-link btn-lg
+++
Stack Tracing QA Bot to help debug complex stack tracing (especially the ones that go multi-function/file deep).
---
.. link-button:: https://twitter.com/Raza_Habib496/status/1596880140490838017?s=20&t=6MqEQYWfSqmJwsKahjCVOA
:type: url
:text: Sales Email Writer
:classes: stretched-link btn-lg
+++
By Raza Habib, this demo utilizes LangChain + SerpAPI + HumanLoop to write sales emails. Given a company name and a person, this application will use Google Search (via SerpAPI) to get more information on the company and the person, and then write them a sales message.
---
.. link-button:: https://twitter.com/chillzaza_/status/1592961099384905730?s=20&t=EhU8jl0KyCPJ7vE9Rnz-cQ
:type: url
:text: Question-Answering on a Web Browser
:classes: stretched-link btn-lg
+++
By Zahid Khawaja, this demo utilizes question answering to answer questions about a given website. A followup added this for `YouTube videos <https://twitter.com/chillzaza_/status/1593739682013220865?s=20&t=EhU8jl0KyCPJ7vE9Rnz-cQ>`_, and then another followup added it for `Wikipedia <https://twitter.com/chillzaza_/status/1594847151238037505?s=20&t=EhU8jl0KyCPJ7vE9Rnz-cQ>`_.
---
.. link-button:: https://mynd.so
:type: url
:text: Mynd
:classes: stretched-link btn-lg
+++
A journaling app for self-care that uses AI to uncover insights and patterns over time.
Articles on **Google Scholar**
-----------------------------
LangChain is used in many scientific and research projects.
**Google Scholar** presents a `list of the papers <https://scholar.google.com/scholar?q=%22langchain%22&hl=en&as_sdt=0,5&as_vis=1>`_
with references to LangChain.

View File

@@ -1,54 +1,44 @@
# Glossary
# Concepts
This is a collection of terminology commonly used when developing LLM applications.
These are concepts and terminology commonly used when developing LLM applications.
It contains reference to external papers or sources where the concept was first introduced,
as well as to places in LangChain where the concept is used.
## Chain of Thought Prompting
## Chain of Thought
A prompting technique used to encourage the model to generate a series of intermediate reasoning steps.
`Chain of Thought (CoT)` is a prompting technique used to encourage the model to generate a series of intermediate reasoning steps.
A less formal way to induce this behavior is to include “Let's think step-by-step” in the prompt.
Resources:
- [Chain-of-Thought Paper](https://arxiv.org/pdf/2201.11903.pdf)
- [Step-by-Step Paper](https://arxiv.org/abs/2112.00114)
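A minimal sketch of the zero-shot variant (the question and prompt wording here are illustrative assumptions, not taken from the papers):
```python
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

# Appending the step-by-step cue is enough to elicit intermediate reasoning.
prompt = PromptTemplate.from_template("Q: {question}\nA: Let's think step by step.")
llm = OpenAI(temperature=0)
print(llm(prompt.format(question="I have 3 apples and buy 2 more. How many do I have?")))
```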
## Action Plan Generation
A prompt usage that uses a language model to generate actions to take.
`Action Plan Generation` is a prompting technique that uses a language model to generate actions to take.
The results of these actions can then be fed back into the language model to generate a subsequent action.
Resources:
- [WebGPT Paper](https://arxiv.org/pdf/2112.09332.pdf)
- [SayCan Paper](https://say-can.github.io/assets/palm_saycan.pdf)
## ReAct Prompting
## ReAct
A prompting technique that combines Chain-of-Thought prompting with action plan generation.
`ReAct` is a prompting technique that combines Chain-of-Thought prompting with action plan generation.
This induces the model to think about what action to take, then take it.
Resources:
- [Paper](https://arxiv.org/pdf/2210.03629.pdf)
- [LangChain Example](modules/agents/agents/examples/react.ipynb)
- [LangChain Example](../modules/agents/agents/examples/react.ipynb)
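A minimal sketch of a ReAct-style agent in LangChain, mirroring the linked example (assumes a SerpAPI key is configured; the question is illustrative):
```python
from langchain import OpenAI
from langchain.agents import initialize_agent, load_tools

llm = OpenAI(temperature=0)
# A search tool for actions and a calculator for intermediate reasoning steps.
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
agent.run("Who is the current US president, and what is their age raised to the 0.5 power?")
```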
## Self-ask
A prompting method that builds on top of chain-of-thought prompting.
`Self-ask` is a prompting method that builds on top of chain-of-thought prompting.
In this method, the model explicitly asks itself follow-up questions, which are then answered by an external search engine.
Resources:
- [Paper](https://ofir.io/self-ask.pdf)
- [LangChain Example](modules/agents/agents/examples/self_ask_with_search.ipynb)
- [LangChain Example](../modules/agents/agents/examples/self_ask_with_search.ipynb)
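A minimal sketch of self-ask with search in LangChain, following the linked example (assumes a SerpAPI key; the self-ask agent expects a single tool named "Intermediate Answer"):
```python
from langchain import OpenAI, SerpAPIWrapper
from langchain.agents import Tool, initialize_agent

llm = OpenAI(temperature=0)
search = SerpAPIWrapper()
tools = [
    Tool(
        name="Intermediate Answer",  # the tool name the self-ask agent looks for
        func=search.run,
        description="useful for answering factual follow-up questions",
    )
]
agent = initialize_agent(tools, llm, agent="self-ask-with-search", verbose=True)
agent.run("What is the hometown of the reigning men's U.S. Open champion?")
```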
## Prompt Chaining
Combining multiple LLM calls together, with the output of one step being the input to the next.
Resources:
`Prompt Chaining` is combining multiple LLM calls, with the output of one step being the input to the next.
- [PromptChainer Paper](https://arxiv.org/pdf/2203.06566.pdf)
- [Language Model Cascades](https://arxiv.org/abs/2207.10342)
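A minimal sketch of prompt chaining with LangChain (the two prompts are illustrative assumptions):
```python
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, SimpleSequentialChain

llm = OpenAI(temperature=0)
outline = LLMChain(llm=llm, prompt=PromptTemplate.from_template(
    "Write a one-line outline for an article about {topic}."))
expand = LLMChain(llm=llm, prompt=PromptTemplate.from_template(
    "Expand this outline into a short paragraph:\n{outline}"))

# The output of the first call is fed in as the input to the second.
chain = SimpleSequentialChain(chains=[outline, expand])
print(chain.run("prompt chaining"))
```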
@@ -57,34 +47,29 @@ Resources:
## Memetic Proxy
Encouraging the LLM to respond in a certain way by framing the discussion in a context that the model knows of and that will result in that type of response. For example, as a conversation between a student and a teacher.
Resources:
`Memetic Proxy` is encouraging the LLM
to respond in a certain way by framing the discussion in a context that the model knows of and that
will result in that type of response.
For example, as a conversation between a student and a teacher.
- [Paper](https://arxiv.org/pdf/2102.07350.pdf)
## Self Consistency
A decoding strategy that samples a diverse set of reasoning paths and then selects the most consistent answer.
`Self Consistency` is a decoding strategy that samples a diverse set of reasoning paths and then selects the most consistent answer.
It is most effective when combined with Chain-of-Thought prompting.
Resources:
- [Paper](https://arxiv.org/pdf/2203.11171.pdf)
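A minimal sketch of the idea (the answer-parsing step here is a naive assumption; real implementations extract the final answer more carefully):
```python
from collections import Counter
from langchain.llms import OpenAI

# Sample several diverse reasoning paths with a nonzero temperature.
llm = OpenAI(temperature=0.7, n=5, best_of=5)
result = llm.generate(["Q: I have 3 apples and buy 2 more. How many? Let's think step by step."])

# Majority-vote on the last line of each completion as the final answer.
answers = [g.text.strip().splitlines()[-1] for g in result.generations[0]]
print(Counter(answers).most_common(1)[0][0])
```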
## Inception
Also called First Person Instruction.
Encouraging the model to think a certain way by including the start of the model's response in the prompt.
Resources:
`Inception` is also called `First Person Instruction`.
It encourages the model to think a certain way by including the start of the model's response in the prompt.
- [Example](https://twitter.com/goodside/status/1583262455207460865?s=20&t=8Hz7XBnK1OF8siQrxxCIGQ)
## MemPrompt
MemPrompt maintains a memory of errors and user feedback, and uses them to prevent repetition of mistakes.
Resources:
`MemPrompt` maintains a memory of errors and user feedback, and uses them to prevent repetition of mistakes.
- [Paper](https://memprompt.com/)

View File

@@ -37,6 +37,12 @@ import os
os.environ["OPENAI_API_KEY"] = "..."
```
If you want to set the API key dynamically, you can use the openai_api_key parameter when initializing the OpenAI class, for instance with each user's API key.
```python
from langchain.llms import OpenAI

# Pass the key directly (placeholder value shown) instead of reading it from the environment
llm = OpenAI(openai_api_key="OPENAI_API_KEY")
```
## Building a Language Model Application: LLMs

View File

@@ -2,6 +2,12 @@
This is a collection of `LangChain` tutorials on `YouTube`.
⛓ icon marks a new video [last update 2023-05-15]
###
[LangChain Tutorials](https://www.youtube.com/watch?v=FuqdVNB_8c0&list=PL9V0lbeJ69brU-ojMpU1Y7Ic58Tap0Cw6) by [Edrick](https://www.youtube.com/@edrickdch):
- ⛓ [LangChain, Chroma DB, OpenAI Beginner Guide | ChatGPT with your PDF](https://youtu.be/FuqdVNB_8c0)
[LangChain Crash Course: Build an AutoGPT app in 25 minutes](https://youtu.be/MlK6SIjcjE8) by [Nicholas Renotte](https://www.youtube.com/@NicholasRenotte)
@@ -18,8 +24,10 @@ This is a collection of `LangChain` tutorials on `YouTube`.
- #3 [LLM Chains using `GPT 3.5` and other LLMs](https://youtu.be/S8j9Tk0lZHU)
- #4 [Chatbot Memory for `Chat-GPT`, `Davinci` + other LLMs](https://youtu.be/X05uK0TZozM)
- #5 [Chat with OpenAI in LangChain](https://youtu.be/CnAgB3A5OlU)
- #6 [LangChain Agents Deep Dive with `GPT 3.5`](https://youtu.be/jSP-gSEyVeI)
- [Prompt Engineering with OpenAI's `GPT-3` and other LLMs](https://youtu.be/BP9fi_0XTlw)
- #6 [Fixing LLM Hallucinations with Retrieval Augmentation in LangChain](https://youtu.be/kvdVduIJsc8)
- #7 [LangChain Agents Deep Dive with GPT 3.5](https://youtu.be/jSP-gSEyVeI)
- #8 [Create Custom Tools for Chatbots in LangChain](https://youtu.be/q-HNphrWsDE)
- #9 [Build Conversational Agents with Vector DBs](https://youtu.be/H6bCqqw9xyI)
###
@@ -39,6 +47,8 @@ This is a collection of `LangChain` tutorials on `YouTube`.
- [Structured Output From `OpenAI` (Clean Dirty Data)](https://youtu.be/KwAXfey-xQk)
- [Connect `OpenAI` To +5,000 Tools (LangChain + `Zapier`)](https://youtu.be/7tNm0yiDigU)
- [Use LLMs To Extract Data From Text (Expert Mode)](https://youtu.be/xZzvwR9jdPA)
- ⛓ [Extract Insights From Interview Transcripts Using LLMs](https://youtu.be/shkMOHwJ4SM)
- ⛓ [5 Levels Of LLM Summarizing: Novice to Expert](https://youtu.be/qaPMdcCqtWk)
###
@@ -60,6 +70,12 @@ This is a collection of `LangChain` tutorials on `YouTube`.
- [Talk to your `CSV` & `Excel` with LangChain](https://youtu.be/xQ3mZhw69bc)
- [`BabyAGI`: Discover the Power of Task-Driven Autonomous Agents!](https://youtu.be/QBcDLSE2ERA)
- [Improve your `BabyAGI` with LangChain](https://youtu.be/DRgPyOXZ-oE)
- ⛓ [Master `PDF` Chat with LangChain - Your essential guide to queries on documents](https://youtu.be/ZzgUqFtxgXI)
- ⛓ [Using LangChain with `DuckDuckGO` `Wikipedia` & `PythonREPL` Tools](https://youtu.be/KerHlb8nuVc)
- ⛓ [Building Custom Tools and Agents with LangChain (gpt-3.5-turbo)](https://youtu.be/biS8G8x8DdA)
- ⛓ [LangChain Retrieval QA Over Multiple Files with `ChromaDB`](https://youtu.be/3yPBVii7Ct0)
- ⛓ [LangChain Retrieval QA with Instructor Embeddings & `ChromaDB` for PDFs](https://youtu.be/cFCGUjc33aU)
- ⛓ [LangChain + Retrieval Local LLMs for Retrieval QA - No OpenAI!!!](https://youtu.be/9ISVjh8mdlA)
###
@@ -68,6 +84,7 @@ This is a collection of `LangChain` tutorials on `YouTube`.
- [Working with MULTIPLE `PDF` Files in LangChain: `ChatGPT` for your Data](https://youtu.be/s5LhRdh5fu4)
- [`ChatGPT` for YOUR OWN `PDF` files with LangChain](https://youtu.be/TLf90ipMzfE)
- [Talk to YOUR DATA without OpenAI APIs: LangChain](https://youtu.be/wrD-fZvT6UI)
- ⛓️ [CHATGPT For WEBSITES: Custom ChatBOT](https://youtu.be/RBnuhhmD21U)
###
@@ -75,6 +92,7 @@ LangChain by [Chat with data](https://www.youtube.com/@chatwithdata)
- [LangChain Beginner's Tutorial for `Typescript`/`Javascript`](https://youtu.be/bH722QgRlhQ)
- [`GPT-4` Tutorial: How to Chat With Multiple `PDF` Files (~1000 pages of Tesla's 10-K Annual Reports)](https://youtu.be/Ix9WIZpArm0)
- [`GPT-4` & LangChain Tutorial: How to Chat With A 56-Page `PDF` Document (w/`Pinecone`)](https://youtu.be/ih9PBGVVOO4)
- ⛓ [LangChain & Supabase Tutorial: How to Build a ChatGPT Chatbot For Your Website](https://youtu.be/R2FMzcsmQY8)
###
@@ -84,3 +102,7 @@ LangChain by [Chat with data](https://www.youtube.com/@chatwithdata)
- [LangChain Models: `ChatGPT`, `Flan Alpaca`, `OpenAI Embeddings`, Prompt Templates & Streaming](https://www.youtube.com/watch?v=zy6LiK5F5-s)
- [LangChain Chains: Use `ChatGPT` to Build Conversational Agents, Summaries and Q&A on Text With LLMs](https://www.youtube.com/watch?v=h1tJZQPcimM)
- [Analyze Custom CSV Data with `GPT-4` using Langchain](https://www.youtube.com/watch?v=Ew3sGdX8at4)
- ⛓ [Build ChatGPT Chatbots with LangChain Memory: Understanding and Implementing Memory in Conversations](https://youtu.be/CyuUlf54wTs)
---------------------
⛓ icon marks a new video [last update 2023-05-15]

View File

@@ -1,57 +1,63 @@
Welcome to LangChain
==========================
LangChain is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only call out to a language model via an API, but will also:
| **LangChain** is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only call out to a language model, but will also be:
1. *Data-aware*: connect a language model to other sources of data
2. *Agentic*: allow a language model to interact with its environment
- *Be data-aware*: connect a language model to other sources of data
- *Be agentic*: allow a language model to interact with its environment
| The LangChain framework is designed around these principles.
The LangChain framework is designed with the above principles in mind.
This is the Python-specific portion of the documentation. For a purely conceptual guide to LangChain, see `here <https://docs.langchain.com/docs/>`_. For the JavaScript documentation, see `here <https://js.langchain.com/docs/>`_.
| This is the Python-specific portion of the documentation. For a purely conceptual guide to LangChain, see `here <https://docs.langchain.com/docs/>`_. For the JavaScript documentation, see `here <https://js.langchain.com/docs/>`_.
Getting Started
----------------
How to get started using LangChain to create a Language Model application.
| How to get started using LangChain to create a Language Model application.
- `Getting Started tutorial <./getting_started/getting_started.html>`_
- `Quickstart Guide <./getting_started/getting_started.html>`_
Tutorials created by community experts and presented on YouTube.
| Concepts and terminology.
- `Concepts and terminology <./getting_started/concepts.html>`_
| Tutorials created by community experts and presented on YouTube.
- `Tutorials <./getting_started/tutorials.html>`_
.. toctree::
:maxdepth: 1
:maxdepth: 2
:caption: Getting Started
:name: getting_started
:hidden:
getting_started/getting_started.md
getting_started/concepts.md
getting_started/tutorials.md
Modules
-----------
There are several main modules that LangChain provides support for.
For each module we provide some examples to get started, how-to guides, reference docs, and conceptual guides.
These modules are, in increasing order of complexity:
| These modules are the core abstractions which we view as the building blocks of any LLM-powered application.
For each module LangChain provides standard, extendable interfaces. LangChain also provides external integrations and even end-to-end implementations for off-the-shelf use.
- `Models <./modules/models.html>`_: The various model types and model integrations LangChain supports.
| The docs for each module contain quickstart examples, how-to guides, reference docs, and conceptual guides.
- `Prompts <./modules/prompts.html>`_: This includes prompt management, prompt optimization, and prompt serialization.
| The modules are (from least to most complex):
- `Memory <./modules/memory.html>`_: Memory is the concept of persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.
- `Models <./modules/models.html>`_: Supported model types and integrations.
- `Indexes <./modules/indexes.html>`_: Language models are often more powerful when combined with your own text data - this module covers best practices for doing exactly that.
- `Prompts <./modules/prompts.html>`_: Prompt management, optimization, and serialization.
- `Chains <./modules/chains.html>`_: Chains go beyond just a single LLM call, and are sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.
- `Memory <./modules/memory.html>`_: Memory refers to state that is persisted between calls of a chain/agent.
- `Agents <./modules/agents.html>`_: Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end to end agents.
- `Indexes <./modules/indexes.html>`_: Language models become much more powerful when combined with application-specific data - this module contains interfaces and integrations for loading, querying and updating external data.
- `Callbacks <./modules/callbacks/getting_started.html>`_: It can be difficult to track all that occurs inside a chain or agent - callbacks help add a level of observability and introspection.
- `Chains <./modules/chains.html>`_: Chains are structured sequences of calls (to an LLM or to a different utility).
- `Agents <./modules/agents.html>`_: An agent is a Chain in which an LLM, given a high-level directive and a set of tools, repeatedly decides an action, executes the action and observes the outcome until the high-level directive is complete.
- `Callbacks <./modules/callbacks/getting_started.html>`_: Callbacks let you log and stream the intermediate steps of any chain, making it easy to observe, debug, and evaluate the internals of an application.
.. toctree::
:maxdepth: 1
@@ -61,8 +67,8 @@ These modules are, in increasing order of complexity:
./modules/models.rst
./modules/prompts.rst
./modules/indexes.md
./modules/memory.md
./modules/indexes.md
./modules/chains.md
./modules/agents.md
./modules/callbacks/getting_started.ipynb
@@ -70,29 +76,29 @@ These modules are, in increasing order of complexity:
Use Cases
----------
The above modules can be used in a variety of ways. LangChain also provides guidance and assistance in this. Below are some of the common use cases LangChain supports.
| Best practices and built-in implementations for common LangChain use cases:
- `Autonomous Agents <./use_cases/autonomous_agents.html>`_: Autonomous agents are long running agents that take many steps in an attempt to accomplish an objective. Examples include AutoGPT and BabyAGI.
- `Autonomous Agents <./use_cases/autonomous_agents.html>`_: Autonomous agents are long-running agents that take many steps in an attempt to accomplish an objective. Examples include AutoGPT and BabyAGI.
- `Agent Simulations <./use_cases/agent_simulations.html>`_: Putting agents in a sandbox and observing how they interact with each other or to events can be an interesting way to observe their long-term memory abilities.
- `Agent Simulations <./use_cases/agent_simulations.html>`_: Putting agents in a sandbox and observing how they interact with each other and react to events can be an effective way to evaluate their long-range reasoning and planning abilities.
- `Personal Assistants <./use_cases/personal_assistants.html>`_: The main LangChain use case. Personal assistants need to take actions, remember interactions, and have knowledge about your data.
- `Personal Assistants <./use_cases/personal_assistants.html>`_: One of the primary LangChain use cases. Personal assistants need to take actions, remember interactions, and have knowledge about your data.
- `Question Answering <./use_cases/question_answering.html>`_: The second big LangChain use case. Answering questions over specific documents, only utilizing the information in those documents to construct an answer.
- `Question Answering <./use_cases/question_answering.html>`_: Another common LangChain use case. Answering questions over specific documents, only utilizing the information in those documents to construct an answer.
- `Chatbots <./use_cases/chatbots.html>`_: Since language models are good at producing text, that makes them ideal for creating chatbots.
- `Chatbots <./use_cases/chatbots.html>`_: Language models love to chat, making this a very natural use of them.
- `Querying Tabular Data <./use_cases/tabular.html>`_: If you want to understand how to use LLMs to query data that is stored in a tabular format (csvs, SQL, dataframes, etc) you should read this page.
- `Querying Tabular Data <./use_cases/tabular.html>`_: Recommended reading if you want to use language models to query structured data (CSVs, SQL, dataframes, etc).
- `Code Understanding <./use_cases/code.html>`_: If you want to understand how to use LLMs to query source code from github, you should read this page.
- `Code Understanding <./use_cases/code.html>`_: Recommended reading if you want to use language models to analyze code.
- `Interacting with APIs <./use_cases/apis.html>`_: Enabling LLMs to interact with APIs is extremely powerful in order to give them more up-to-date information and allow them to take actions.
- `Interacting with APIs <./use_cases/apis.html>`_: Enabling language models to interact with APIs is extremely powerful. It gives them access to up-to-date information and allows them to take actions.
- `Extraction <./use_cases/extraction.html>`_: Extract structured information from text.
- `Summarization <./use_cases/summarization.html>`_: Summarizing longer documents into shorter, more condensed chunks of information. A type of Data Augmented Generation.
- `Summarization <./use_cases/summarization.html>`_: Compressing longer documents. A type of Data-Augmented Generation.
- `Evaluation <./use_cases/evaluation.html>`_: Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.
- `Evaluation <./use_cases/evaluation.html>`_: Generative models are hard to evaluate with traditional metrics. One promising approach is to use language models themselves to do the evaluation.
.. toctree::
@@ -101,26 +107,29 @@ The above modules can be used in a variety of ways. LangChain also provides guid
:name: use_cases
:hidden:
./use_cases/personal_assistants.md
./use_cases/autonomous_agents.md
./use_cases/agent_simulations.md
./use_cases/personal_assistants.md
./use_cases/question_answering.md
./use_cases/chatbots.md
./use_cases/tabular.rst
./use_cases/code.md
./use_cases/apis.md
./use_cases/summarization.md
./use_cases/extraction.md
./use_cases/summarization.md
./use_cases/evaluation.rst
Reference Docs
---------------
All of LangChain's reference documentation, in one place. Full documentation on all methods, classes, installation methods, and integration setups for LangChain.
| Full documentation on all methods, classes, installation methods, and integration setups for LangChain.
- `LangChain Installation <./reference/installation.html>`_
- `Reference Documentation <./reference.html>`_
.. toctree::
:maxdepth: 1
:caption: Reference
@@ -128,47 +137,52 @@ All of LangChain's reference documentation, in one place. Full documentation on
:hidden:
./reference/installation.md
./reference/integrations.md
./reference.rst
LangChain Ecosystem
-------------------
Ecosystem
------------
Guides for how other companies/products can be used with LangChain
| LangChain integrates many different LLMs, systems, and products.
| Conversely, many systems and products depend on LangChain.
| Together they create a vibrant and thriving ecosystem.
- `Integrations <./integrations.html>`_: Guides for how other products can be used with LangChain.
- `Dependents <./dependents.html>`_: List of repositories that use LangChain.
- `Deployments <./ecosystem/deployments.html>`_: A collection of instructions, code snippets, and template repositories for deploying LangChain apps.
- `LangChain Ecosystem <./ecosystem.html>`_
.. toctree::
:maxdepth: 1
:maxdepth: 2
:glob:
:caption: Ecosystem
:name: ecosystem
:hidden:
./ecosystem.rst
./integrations.rst
./dependents.md
./ecosystem/deployments.md
Additional Resources
---------------------
Additional collection of resources we think may be useful as you develop your application!
| Additional resources we think may be useful as you develop your application!
- `LangChainHub <https://github.com/hwchase17/langchain-hub>`_: The LangChainHub is a place to share and explore other prompts, chains, and agents.
- `Glossary <./glossary.html>`_: A glossary of all related terms, papers, methods, etc. Whether implemented in LangChain or not!
- `Gallery <https://github.com/kyrolabs/awesome-langchain>`_: A collection of great projects that use Langchain, compiled by the folks at `Kyrolabs <https://kyrolabs.com>`_. Useful for finding inspiration and example implementations.
- `Gallery <./gallery.html>`_: A collection of our favorite projects that use LangChain. Useful for finding inspiration or seeing how things were done in other applications.
- `Tracing <./additional_resources/tracing.html>`_: A guide on using tracing in LangChain to visualize the execution of chains and agents.
- `Deployments <./deployments.html>`_: A collection of instructions, code snippets, and template repositories for deploying LangChain apps.
- `Tracing <./tracing.html>`_: A guide on using tracing in LangChain to visualize the execution of chains and agents.
- `Model Laboratory <./model_laboratory.html>`_: Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so.
- `Model Laboratory <./additional_resources/model_laboratory.html>`_: Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so.
- `Discord <https://discord.gg/6adMQxSpJS>`_: Join us on our Discord to discuss all things LangChain!
- `YouTube <./youtube.html>`_: A collection of the LangChain tutorials and videos.
- `YouTube <./additional_resources/youtube.html>`_: A collection of LangChain tutorials and videos.
- `Production Support <https://forms.gle/57d8AmXBYp8PP8tZA>`_: As you move your LangChains into production, we'd love to offer more comprehensive support. Please fill out this form and we'll set up a dedicated support Slack channel.
@@ -180,11 +194,9 @@ Additional collection of resources we think may be useful as you develop your ap
:hidden:
LangChainHub <https://github.com/hwchase17/langchain-hub>
./glossary.md
./gallery.rst
./deployments.md
./tracing.md
./use_cases/model_laboratory.ipynb
Gallery <https://github.com/kyrolabs/awesome-langchain>
./additional_resources/tracing.md
./additional_resources/model_laboratory.ipynb
Discord <https://discord.gg/6adMQxSpJS>
./youtube.md
./additional_resources/youtube.md
Production Support <https://forms.gle/57d8AmXBYp8PP8tZA>

View File

@@ -1,12 +1,13 @@
LangChain Ecosystem
Integrations
===================
Guides for how other companies/products can be used with LangChain
LangChain integrates with many LLMs, systems, and products.
Groups
----------
Integrations by Module
--------------------------------
| Integrations grouped by the core LangChain module they map to:
LangChain provides integration with many LLMs and systems:
- `LLM Providers <./modules/models/llms/integrations.html>`_
- `Chat Model Providers <./modules/models/chat/integrations.html>`_
@@ -18,12 +19,15 @@ LangChain provides integration with many LLMs and systems:
- `Tool Providers <./modules/agents/tools.html>`_
- `Toolkit Integrations <./modules/agents/toolkits.html>`_
Companies / Products
----------
All Integrations
-------------------------------------------
| A comprehensive list of LLMs, systems, and products integrated with LangChain:
.. toctree::
:maxdepth: 1
:glob:
ecosystem/*
integrations/*

docs/integrations/beam.md
View File

@@ -0,0 +1,92 @@
# Beam
This page covers how to use Beam within LangChain.
It is broken into two parts: installation and setup, and then references to specific Beam wrappers.
## Installation and Setup
- [Create an account](https://www.beam.cloud/)
- Install the Beam CLI with `curl https://raw.githubusercontent.com/slai-labs/get-beam/main/get-beam.sh -sSfL | sh`
- Register API keys with `beam configure`
- Set the environment variables `BEAM_CLIENT_ID` and `BEAM_CLIENT_SECRET`
- Install the Beam SDK with `pip install beam-sdk`
## Wrappers
### LLM
There exists a Beam LLM wrapper, which you can access with
```python
from langchain.llms.beam import Beam
```
## Define your Beam app
This is the environment you'll be developing against once you start the app.
It's also used to define the maximum response length from the model.
```python
llm = Beam(
    model_name="gpt2",
    name="langchain-gpt2-test",
    cpu=8,
    memory="32Gi",
    gpu="A10G",
    python_version="python3.8",
    python_packages=[
        "diffusers[torch]>=0.10",
        "transformers",
        "torch",
        "pillow",
        "accelerate",
        "safetensors",
        "xformers",
    ],
    max_length="50",
    verbose=False,
)
```
## Deploy your Beam app
Once defined, you can deploy your Beam app by calling your model's `_deploy()` method.
```python
llm._deploy()
```
## Call your Beam app
Once a Beam model is deployed, it can be called by calling your model's `_call()` method.
This returns the GPT2 text response to your prompt.
```python
response = llm._call("Running machine learning on a remote GPU")
```
An example script which deploys the model and calls it would be:
```python
from langchain.llms.beam import Beam
import time
llm = Beam(
    model_name="gpt2",
    name="langchain-gpt2-test",
    cpu=8,
    memory="32Gi",
    gpu="A10G",
    python_version="python3.8",
    python_packages=[
        "diffusers[torch]>=0.10",
        "transformers",
        "torch",
        "pillow",
        "accelerate",
        "safetensors",
        "xformers",
    ],
    max_length="50",
    verbose=False,
)
llm._deploy()
response = llm._call("Running machine learning on a remote GPU")
print(response)
```

View File

@@ -0,0 +1,280 @@
{
"cells": [
{
"cell_type": "markdown",
"source": [
"# Databricks\n",
"\n",
"This notebook covers how to connect to the [Databricks runtimes](https://docs.databricks.com/runtime/index.html) and [Databricks SQL](https://www.databricks.com/product/databricks-sql) using the SQLDatabase wrapper of LangChain.\n",
"It is broken into 3 parts: installation and setup, connecting to Databricks, and examples."
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "markdown",
"source": [
"## Installation and Setup"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": 1,
"outputs": [],
"source": [
"!pip install databricks-sql-connector"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "markdown",
"source": [
"## Connecting to Databricks\n",
"\n",
"You can connect to [Databricks runtimes](https://docs.databricks.com/runtime/index.html) and [Databricks SQL](https://www.databricks.com/product/databricks-sql) using the `SQLDatabase.from_databricks()` method.\n",
"\n",
"### Syntax\n",
"```python\n",
"SQLDatabase.from_databricks(\n",
" catalog: str,\n",
" schema: str,\n",
" host: Optional[str] = None,\n",
" api_token: Optional[str] = None,\n",
" warehouse_id: Optional[str] = None,\n",
" cluster_id: Optional[str] = None,\n",
" engine_args: Optional[dict] = None,\n",
" **kwargs: Any)\n",
"```\n",
"### Required Parameters\n",
"* `catalog`: The catalog name in the Databricks database.\n",
"* `schema`: The schema name in the catalog.\n",
"\n",
"### Optional Parameters\n",
"There following parameters are optional. When executing the method in a Databricks notebook, you don't need to provide them in most of the cases.\n",
"* `host`: The Databricks workspace hostname, excluding 'https://' part. Defaults to 'DATABRICKS_HOST' environment variable or current workspace if in a Databricks notebook.\n",
"* `api_token`: The Databricks personal access token for accessing the Databricks SQL warehouse or the cluster. Defaults to 'DATABRICKS_API_TOKEN' environment variable or a temporary one is generated if in a Databricks notebook.\n",
"* `warehouse_id`: The warehouse ID in the Databricks SQL.\n",
"* `cluster_id`: The cluster ID in the Databricks Runtime. If running in a Databricks notebook and both 'warehouse_id' and 'cluster_id' are None, it uses the ID of the cluster the notebook is attached to.\n",
"* `engine_args`: The arguments to be used when connecting Databricks.\n",
"* `**kwargs`: Additional keyword arguments for the `SQLDatabase.from_uri` method."
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "markdown",
"source": [
"## Examples"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": 2,
"outputs": [],
"source": [
"# Connecting to Databricks with SQLDatabase wrapper\n",
"from langchain import SQLDatabase\n",
"\n",
"db = SQLDatabase.from_databricks(catalog='samples', schema='nyctaxi')"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": 3,
"outputs": [],
"source": [
"# Creating a OpenAI Chat LLM wrapper\n",
"from langchain.chat_models import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(temperature=0, model_name=\"gpt-4\")"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "markdown",
"source": [
"### SQL Chain example\n",
"\n",
"This example demonstrates the use of the [SQL Chain](https://python.langchain.com/en/latest/modules/chains/examples/sqlite.html) for answering a question over a Databricks database."
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": 4,
"id": "36f2270b",
"metadata": {},
"outputs": [],
"source": [
"from langchain import SQLDatabaseChain\n",
"\n",
"db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "4e2b5f25",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001B[1m> Entering new SQLDatabaseChain chain...\u001B[0m\n",
"What is the average duration of taxi rides that start between midnight and 6am?\n",
"SQLQuery:\u001B[32;1m\u001B[1;3mSELECT AVG(UNIX_TIMESTAMP(tpep_dropoff_datetime) - UNIX_TIMESTAMP(tpep_pickup_datetime)) as avg_duration\n",
"FROM trips\n",
"WHERE HOUR(tpep_pickup_datetime) >= 0 AND HOUR(tpep_pickup_datetime) < 6\u001B[0m\n",
"SQLResult: \u001B[33;1m\u001B[1;3m[(987.8122786304605,)]\u001B[0m\n",
"Answer:\u001B[32;1m\u001B[1;3mThe average duration of taxi rides that start between midnight and 6am is 987.81 seconds.\u001B[0m\n",
"\u001B[1m> Finished chain.\u001B[0m\n"
]
},
{
"data": {
"text/plain": "'The average duration of taxi rides that start between midnight and 6am is 987.81 seconds.'"
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"db_chain.run(\"What is the average duration of taxi rides that start between midnight and 6am?\")"
]
},
{
"cell_type": "markdown",
"source": [
"### SQL Database Agent example\n",
"\n",
"This example demonstrates the use of the [SQL Database Agent](https://python.langchain.com/en/latest/modules/agents/toolkits/examples/sql_database.html) for answering questions over a Databricks database."
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": 7,
"id": "9918e86a",
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import create_sql_agent\n",
"from langchain.agents.agent_toolkits import SQLDatabaseToolkit\n",
"\n",
"toolkit = SQLDatabaseToolkit(db=db, llm=llm)\n",
"agent = create_sql_agent(\n",
" llm=llm,\n",
" toolkit=toolkit,\n",
" verbose=True\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "c484a76e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001B[1m> Entering new AgentExecutor chain...\u001B[0m\n",
"\u001B[32;1m\u001B[1;3mAction: list_tables_sql_db\n",
"Action Input: \u001B[0m\n",
"Observation: \u001B[38;5;200m\u001B[1;3mtrips\u001B[0m\n",
"Thought:\u001B[32;1m\u001B[1;3mI should check the schema of the trips table to see if it has the necessary columns for trip distance and duration.\n",
"Action: schema_sql_db\n",
"Action Input: trips\u001B[0m\n",
"Observation: \u001B[33;1m\u001B[1;3m\n",
"CREATE TABLE trips (\n",
"\ttpep_pickup_datetime TIMESTAMP, \n",
"\ttpep_dropoff_datetime TIMESTAMP, \n",
"\ttrip_distance FLOAT, \n",
"\tfare_amount FLOAT, \n",
"\tpickup_zip INT, \n",
"\tdropoff_zip INT\n",
") USING DELTA\n",
"\n",
"/*\n",
"3 rows from trips table:\n",
"tpep_pickup_datetime\ttpep_dropoff_datetime\ttrip_distance\tfare_amount\tpickup_zip\tdropoff_zip\n",
"2016-02-14 16:52:13+00:00\t2016-02-14 17:16:04+00:00\t4.94\t19.0\t10282\t10171\n",
"2016-02-04 18:44:19+00:00\t2016-02-04 18:46:00+00:00\t0.28\t3.5\t10110\t10110\n",
"2016-02-17 17:13:57+00:00\t2016-02-17 17:17:55+00:00\t0.7\t5.0\t10103\t10023\n",
"*/\u001B[0m\n",
"Thought:\u001B[32;1m\u001B[1;3mThe trips table has the necessary columns for trip distance and duration. I will write a query to find the longest trip distance and its duration.\n",
"Action: query_checker_sql_db\n",
"Action Input: SELECT trip_distance, tpep_dropoff_datetime - tpep_pickup_datetime as duration FROM trips ORDER BY trip_distance DESC LIMIT 1\u001B[0m\n",
"Observation: \u001B[31;1m\u001B[1;3mSELECT trip_distance, tpep_dropoff_datetime - tpep_pickup_datetime as duration FROM trips ORDER BY trip_distance DESC LIMIT 1\u001B[0m\n",
"Thought:\u001B[32;1m\u001B[1;3mThe query is correct. I will now execute it to find the longest trip distance and its duration.\n",
"Action: query_sql_db\n",
"Action Input: SELECT trip_distance, tpep_dropoff_datetime - tpep_pickup_datetime as duration FROM trips ORDER BY trip_distance DESC LIMIT 1\u001B[0m\n",
"Observation: \u001B[36;1m\u001B[1;3m[(30.6, '0 00:43:31.000000000')]\u001B[0m\n",
"Thought:\u001B[32;1m\u001B[1;3mI now know the final answer.\n",
"Final Answer: The longest trip distance is 30.6 miles and it took 43 minutes and 31 seconds.\u001B[0m\n",
"\n",
"\u001B[1m> Finished chain.\u001B[0m\n"
]
},
{
"data": {
"text/plain": "'The longest trip distance is 30.6 miles and it took 43 minutes and 31 seconds.'"
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent.run(\"What is the longest trip distance and how long did it take?\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -8,10 +8,10 @@ Docugami converts business documents into a Document XML Knowledge Graph, genera
## Quick start
1. Create a Docugami workspace: http://www.docugami.com (free trials available)
1. Create a Docugami workspace: <a href="http://www.docugami.com">http://www.docugami.com</a> (free trials available)
2. Add your documents (PDF, DOCX or DOC) and allow Docugami to ingest and cluster them into sets of similar documents, e.g. NDAs, Lease Agreements, and Service Agreements. There is no fixed set of document types supported by the system; the clusters created depend on your particular documents, and you can [change the docset assignments](https://help.docugami.com/home/working-with-the-doc-sets-view) later.
3. Create an access token via the Developer Playground for your workspace. Detailed instructions: https://help.docugami.com/home/docugami-api
4. Explore the Docugami API at https://api-docs.docugami.com/ to get a list of your processed docset IDs, or just the document IDs for a particular docset.
4. Explore the Docugami API at <a href="https://api-docs.docugami.com">https://api-docs.docugami.com</a> to get a list of your processed docset IDs, or just the document IDs for a particular docset.
5. Use the DocugamiLoader as detailed in [this notebook](../modules/indexes/document_loaders/examples/docugami.ipynb), and sketched below, to get rich semantic chunks for your documents.
6. Optionally, build and publish one or more [reports or abstracts](https://help.docugami.com/home/reports). This helps Docugami improve the semantic XML with better tags based on your preferences, which are then added to the DocugamiLoader output as metadata. Use techniques like [self-querying retriever](https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/self_query_retriever.html) to do high-accuracy Document QA.
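A minimal `DocugamiLoader` sketch (the docset ID is a placeholder; the loader is expected to read `DOCUGAMI_API_KEY` from the environment):
```python
from langchain.document_loaders import DocugamiLoader

# Load semantic chunks for every document in a processed docset.
loader = DocugamiLoader(docset_id="YOUR_DOCSET_ID")  # placeholder ID
chunks = loader.load()
print(chunks[0].metadata)  # semantic XML tags are surfaced as metadata
```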

View File

@@ -1,4 +1,4 @@
# Google Search Wrapper
# Google Search
This page covers how to use the Google Search API within LangChain.
It is broken into two parts: installation and setup, and then references to the specific Google Search wrapper.

View File

@@ -1,4 +1,4 @@
# Google Serper Wrapper
# Google Serper
This page covers how to use the [Serper](https://serper.dev) Google Search API within LangChain. Serper is a low-cost Google Search API that can be used to add answer box, knowledge graph, and organic results data from Google Search.
It is broken into two parts: setup, and then references to the specific Google Serper wrapper.

View File

@@ -0,0 +1,20 @@
# Psychic
This page covers how to use [Psychic](https://www.psychic.dev/) within LangChain.
## What is Psychic?
Psychic is a platform for integrating with your customers' SaaS tools like Notion, Zendesk, Confluence, and Google Drive via OAuth and syncing documents from these applications to your SQL or vector database. You can think of it like Plaid for unstructured data. Psychic is easy to set up - you use it by importing the React library and configuring it with your Sidekick API key, which you can get from the [Psychic dashboard](https://dashboard.psychic.dev/). When your users connect their applications, you can view these connections from the dashboard and retrieve data using the server-side libraries.
## Quick start
1. Create an account in the [dashboard](https://dashboard.psychic.dev/).
2. Use the [React library](https://docs.psychic.dev/sidekick-link) to add the Psychic link modal to your frontend React app. Users will use this to connect their SaaS apps.
3. Once your user has created a connection, you can use the LangChain `PsychicLoader` by following the [example notebook](../modules/indexes/document_loaders/examples/psychic.ipynb); a minimal sketch follows below.
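
As an illustrative sketch (the constructor arguments here are assumptions; see the example notebook for the exact interface):

```python
from langchain.document_loaders import PsychicLoader

# Argument names below are assumptions, not confirmed API:
loader = PsychicLoader(
    api_key="<PSYCHIC_SECRET_KEY>",   # assumed; your Psychic secret key
    connector_id="notion",            # assumed; which SaaS connector to read from
    connection_id="<CONNECTION_ID>",  # assumed; the user's connection
)
documents = loader.load()
```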
## Advantages vs Other Document Loaders
1. **Universal API:** Instead of building OAuth flows and learning the APIs for every SaaS app, you integrate Psychic once and leverage our universal API to retrieve data.
2. **Data Syncs:** Data in your customers' SaaS apps can get stale fast. With Psychic you can configure webhooks to keep your documents up to date on a daily or realtime basis.
3. **Simplified OAuth:** Psychic handles OAuth end-to-end so that you don't have to spend time creating OAuth clients for each integration, keeping access tokens fresh, and handling OAuth redirect logic.

View File

@@ -0,0 +1,40 @@
# Vectara
## What is Vectara?
- Vectara is a developer-first API platform for building conversational search applications
- To use Vectara, first [sign up](https://console.vectara.com/signup) and create an account. Then create a corpus and an API key for indexing and searching.
- You can use Vectara's [indexing API](https://docs.vectara.com/docs/indexing-apis/indexing) to add documents to Vectara's index
- You can use Vectara's [Search API](https://docs.vectara.com/docs/search-apis/search) to query Vectara's index (which also supports hybrid search implicitly).
- You can use Vectara's integration with LangChain as a Vector store or using the Retriever abstraction.
## Installation and Setup
To use Vectara with LangChain, no special installation steps are required. You just have to provide your `customer_id`, `corpus_id`, and an API key created within the Vectara console to enable indexing and searching.
### VectorStore
There exists a wrapper around the Vectara platform, allowing you to use it as a vectorstore, whether for semantic search or example selection.
To import this vectorstore:
```python
from langchain.vectorstores import Vectara
```
To create an instance of the Vectara vectorstore:
```python
vectara = Vectara(
vectara_customer_id=customer_id,
vectara_corpus_id=corpus_id,
vectara_api_key=api_key
)
```
The `customer_id`, `corpus_id`, and `api_key` are optional; if they are not supplied, they will be read from the environment variables `VECTARA_CUSTOMER_ID`, `VECTARA_CORPUS_ID`, and `VECTARA_API_KEY`, respectively.
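For example, a minimal sketch that relies on the environment variables alone:

```python
import os

from langchain.vectorstores import Vectara

# With these set, no explicit credentials need to be passed to Vectara()
os.environ["VECTARA_CUSTOMER_ID"] = "<YOUR_CUSTOMER_ID>"
os.environ["VECTARA_CORPUS_ID"] = "<YOUR_CORPUS_ID>"
os.environ["VECTARA_API_KEY"] = "<YOUR_API_KEY>"

vectara = Vectara()
```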
For a more detailed walkthrough of the Vectara wrapper, see one of the two example notebooks:
* [Chat Over Documents with Vectara](./vectara/vectara_chat.html)
* [Vectara Text Generation](./vectara/vectara_text_generation.html)

View File

@@ -0,0 +1,726 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "134a0785",
"metadata": {},
"source": [
"# Chat Over Documents with Vectara\n",
"\n",
"This notebook is based on the [chat_vector_db](https://github.com/hwchase17/langchain/blob/master/docs/modules/chains/index_examples/chat_vector_db.ipynb) notebook, but using Vectara as the vector database."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "70c4e529",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"import os\n",
"from langchain.vectorstores import Vectara\n",
"from langchain.vectorstores.vectara import VectaraRetriever\n",
"from langchain.llms import OpenAI\n",
"from langchain.chains import ConversationalRetrievalChain"
]
},
{
"cell_type": "markdown",
"id": "cdff94be",
"metadata": {},
"source": [
"Load in documents. You can replace this with a loader for whatever type of data you want"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "01c46e92",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.document_loaders import TextLoader\n",
"loader = TextLoader(\"../../modules/state_of_the_union.txt\")\n",
"documents = loader.load()"
]
},
{
"cell_type": "markdown",
"id": "239475d2",
"metadata": {},
"source": [
"We now split the documents, create embeddings for them, and put them in a vectorstore. This allows us to do semantic search over them."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "a8930cf7",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"vectorstore = Vectara.from_documents(documents, embedding=None)"
]
},
{
"cell_type": "markdown",
"id": "898b574b",
"metadata": {},
"source": [
"We can now create a memory object, which is neccessary to track the inputs/outputs and hold a conversation."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "af803fee",
"metadata": {},
"outputs": [],
"source": [
"from langchain.memory import ConversationBufferMemory\n",
"memory = ConversationBufferMemory(memory_key=\"chat_history\", return_messages=True)"
]
},
{
"cell_type": "markdown",
"id": "3c96b118",
"metadata": {},
"source": [
"We now initialize the `ConversationalRetrievalChain`"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "7b4110f3",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"<class 'langchain.vectorstores.vectara.Vectara'>\n"
]
}
],
"source": [
"openai_api_key = os.environ['OPENAI_API_KEY']\n",
"llm = OpenAI(openai_api_key=openai_api_key, temperature=0)\n",
"retriever = VectaraRetriever(vectorstore, alpha=0.025, k=5, filter=None)\n",
"\n",
"print(type(vectorstore))\n",
"d = retriever.get_relevant_documents('What did the president say about Ketanji Brown Jackson')\n",
"\n",
"qa = ConversationalRetrievalChain.from_llm(llm, retriever, memory=memory)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "e8ce4fe9",
"metadata": {},
"outputs": [],
"source": [
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"result = qa({\"question\": query})"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "4c79862b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, and a former federal public defender.\""
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result[\"answer\"]"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "c697d9d1",
"metadata": {},
"outputs": [],
"source": [
"query = \"Did he mention who she suceeded\"\n",
"result = qa({\"question\": query})"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "ba0678f3",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"' Justice Stephen Breyer.'"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result['answer']"
]
},
{
"cell_type": "markdown",
"id": "b3308b01-5300-4999-8cd3-22f16dae757e",
"metadata": {},
"source": [
"## Pass in chat history\n",
"\n",
"In the above example, we used a Memory object to track chat history. We can also just pass it in explicitly. In order to do this, we need to initialize a chain without any memory object."
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "1b41a10b-bf68-4689-8f00-9aed7675e2ab",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), vectorstore.as_retriever())"
]
},
{
"cell_type": "markdown",
"id": "83f38c18-ac82-45f4-a79e-8b37ce1ae115",
"metadata": {},
"source": [
"Here's an example of asking a question with no chat history"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "bc672290-8a8b-4828-a90c-f1bbdd6b3920",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"chat_history = []\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"result = qa({\"question\": query, \"chat_history\": chat_history})"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "6b62d758-c069-4062-88f0-21e7ea4710bf",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"\" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, and a former federal public defender.\""
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result[\"answer\"]"
]
},
{
"cell_type": "markdown",
"id": "8c26a83d-c945-4458-b54a-c6bd7f391303",
"metadata": {},
"source": [
"Here's an example of asking a question with some chat history"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "9c95460b-7116-4155-a9d2-c0fb027ee592",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"chat_history = [(query, result[\"answer\"])]\n",
"query = \"Did he mention who she suceeded\"\n",
"result = qa({\"question\": query, \"chat_history\": chat_history})"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "698ac00c-cadc-407f-9423-226b2d9258d0",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"' Justice Stephen Breyer.'"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result['answer']"
]
},
{
"cell_type": "markdown",
"id": "0eaadf0f",
"metadata": {},
"source": [
"## Return Source Documents\n",
"You can also easily return source documents from the ConversationalRetrievalChain. This is useful for when you want to inspect what documents were returned."
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "562769c6",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"qa = ConversationalRetrievalChain.from_llm(llm, vectorstore.as_retriever(), return_source_documents=True)"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "ea478300",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"chat_history = []\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"result = qa({\"question\": query, \"chat_history\": chat_history})"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "4cb75b4e",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"Document(page_content='Tonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nations top legal minds, who will continue Justice Breyers legacy of excellence. A former top litigator in private practice. A former federal public defender.', metadata={'source': '../../modules/state_of_the_union.txt'})"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result['source_documents'][0]"
]
},
{
"cell_type": "markdown",
"id": "669ede2f-d69f-4960-8468-8a768ce1a55f",
"metadata": {},
"source": [
"## ConversationalRetrievalChain with `search_distance`\n",
"If you are using a vector store that supports filtering by search distance, you can add a threshold value parameter."
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "f4f32c6f-8e49-44af-9116-8830b1fcc5f2",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"vectordbkwargs = {\"search_distance\": 0.9}"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "1e251775-31e7-4679-b744-d4a57937f93a",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), vectorstore.as_retriever(), return_source_documents=True)\n",
"chat_history = []\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"result = qa({\"question\": query, \"chat_history\": chat_history, \"vectordbkwargs\": vectordbkwargs})"
]
},
{
"cell_type": "markdown",
"id": "99b96dae",
"metadata": {},
"source": [
"## ConversationalRetrievalChain with `map_reduce`\n",
"We can also use different types of combine document chains with the ConversationalRetrievalChain chain."
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "e53a9d66",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.chains import LLMChain\n",
"from langchain.chains.question_answering import load_qa_chain\n",
"from langchain.chains.conversational_retrieval.prompts import CONDENSE_QUESTION_PROMPT"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "bf205e35",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)\n",
"doc_chain = load_qa_chain(llm, chain_type=\"map_reduce\")\n",
"\n",
"chain = ConversationalRetrievalChain(\n",
" retriever=vectorstore.as_retriever(),\n",
" question_generator=question_generator,\n",
" combine_docs_chain=doc_chain,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "78155887",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"chat_history = []\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"result = chain({\"question\": query, \"chat_history\": chat_history})"
]
},
{
"cell_type": "code",
"execution_count": 23,
"id": "e54b5fa2",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"' The president did not mention Ketanji Brown Jackson.'"
]
},
"execution_count": 23,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result['answer']"
]
},
{
"cell_type": "markdown",
"id": "a2fe6b14",
"metadata": {},
"source": [
"## ConversationalRetrievalChain with Question Answering with sources\n",
"\n",
"You can also use this chain with the question answering with sources chain."
]
},
{
"cell_type": "code",
"execution_count": 24,
"id": "d1058fd2",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.chains.qa_with_sources import load_qa_with_sources_chain"
]
},
{
"cell_type": "code",
"execution_count": 25,
"id": "a6594482",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"\n",
"question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)\n",
"doc_chain = load_qa_with_sources_chain(llm, chain_type=\"map_reduce\")\n",
"\n",
"chain = ConversationalRetrievalChain(\n",
" retriever=vectorstore.as_retriever(),\n",
" question_generator=question_generator,\n",
" combine_docs_chain=doc_chain,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 26,
"id": "e2badd21",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"chat_history = []\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"result = chain({\"question\": query, \"chat_history\": chat_history})"
]
},
{
"cell_type": "code",
"execution_count": 27,
"id": "edb31fe5",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"' The president did not mention Ketanji Brown Jackson.\\nSOURCES: ../../modules/state_of_the_union.txt'"
]
},
"execution_count": 27,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result['answer']"
]
},
{
"cell_type": "markdown",
"id": "2324cdc6-98bf-4708-b8cd-02a98b1e5b67",
"metadata": {},
"source": [
"## ConversationalRetrievalChain with streaming to `stdout`\n",
"\n",
"Output from the chain will be streamed to `stdout` token by token in this example."
]
},
{
"cell_type": "code",
"execution_count": 28,
"id": "2efacec3-2690-4b05-8de3-a32fd2ac3911",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.chains.llm import LLMChain\n",
"from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler\n",
"from langchain.chains.conversational_retrieval.prompts import CONDENSE_QUESTION_PROMPT, QA_PROMPT\n",
"from langchain.chains.question_answering import load_qa_chain\n",
"\n",
"# Construct a ConversationalRetrievalChain with a streaming llm for combine docs\n",
"# and a separate, non-streaming llm for question generation\n",
"llm = OpenAI(temperature=0, openai_api_key=openai_api_key)\n",
"streaming_llm = OpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], temperature=0, openai_api_key=openai_api_key)\n",
"\n",
"question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)\n",
"doc_chain = load_qa_chain(streaming_llm, chain_type=\"stuff\", prompt=QA_PROMPT)\n",
"\n",
"qa = ConversationalRetrievalChain(\n",
" retriever=vectorstore.as_retriever(), combine_docs_chain=doc_chain, question_generator=question_generator)"
]
},
{
"cell_type": "code",
"execution_count": 29,
"id": "fd6d43f4-7428-44a4-81bc-26fe88a98762",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, and a former federal public defender."
]
}
],
"source": [
"chat_history = []\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"result = qa({\"question\": query, \"chat_history\": chat_history})"
]
},
{
"cell_type": "code",
"execution_count": 30,
"id": "5ab38978-f3e8-4fa7-808c-c79dec48379a",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" Justice Stephen Breyer."
]
}
],
"source": [
"chat_history = [(query, result[\"answer\"])]\n",
"query = \"Did he mention who she suceeded\"\n",
"result = qa({\"question\": query, \"chat_history\": chat_history})\n"
]
},
{
"cell_type": "markdown",
"id": "f793d56b",
"metadata": {},
"source": [
"## get_chat_history Function\n",
"You can also specify a `get_chat_history` function, which can be used to format the chat_history string."
]
},
{
"cell_type": "code",
"execution_count": 31,
"id": "a7ba9d8c",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"def get_chat_history(inputs) -> str:\n",
" res = []\n",
" for human, ai in inputs:\n",
" res.append(f\"Human:{human}\\nAI:{ai}\")\n",
" return \"\\n\".join(res)\n",
"qa = ConversationalRetrievalChain.from_llm(llm, vectorstore.as_retriever(), get_chat_history=get_chat_history)"
]
},
{
"cell_type": "code",
"execution_count": 32,
"id": "a3e33c0d",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"chat_history = []\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"result = qa({\"question\": query, \"chat_history\": chat_history})"
]
},
{
"cell_type": "code",
"execution_count": 33,
"id": "936dc62f",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"\" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, and a former federal public defender.\""
]
},
"execution_count": 33,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result['answer']"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b8c26901",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -0,0 +1,199 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Vectara Text Generation\n",
"\n",
"This notebook is based on [chat_vector_db](https://github.com/hwchase17/langchain/blob/master/docs/modules/chains/index_examples/question_answering.ipynb) and adapted to Vectara."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prepare Data\n",
"\n",
"First, we prepare the data. For this example, we fetch a documentation site that consists of markdown files hosted on Github and split them into small enough Documents."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"from langchain.llms import OpenAI\n",
"from langchain.docstore.document import Document\n",
"import requests\n",
"from langchain.vectorstores import Vectara\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain.prompts import PromptTemplate\n",
"import pathlib\n",
"import subprocess\n",
"import tempfile"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Cloning into '.'...\n"
]
}
],
"source": [
"def get_github_docs(repo_owner, repo_name):\n",
" with tempfile.TemporaryDirectory() as d:\n",
" subprocess.check_call(\n",
" f\"git clone --depth 1 https://github.com/{repo_owner}/{repo_name}.git .\",\n",
" cwd=d,\n",
" shell=True,\n",
" )\n",
" git_sha = (\n",
" subprocess.check_output(\"git rev-parse HEAD\", shell=True, cwd=d)\n",
" .decode(\"utf-8\")\n",
" .strip()\n",
" )\n",
" repo_path = pathlib.Path(d)\n",
" markdown_files = list(repo_path.glob(\"*/*.md\")) + list(\n",
" repo_path.glob(\"*/*.mdx\")\n",
" )\n",
" for markdown_file in markdown_files:\n",
" with open(markdown_file, \"r\") as f:\n",
" relative_path = markdown_file.relative_to(repo_path)\n",
" github_url = f\"https://github.com/{repo_owner}/{repo_name}/blob/{git_sha}/{relative_path}\"\n",
" yield Document(page_content=f.read(), metadata={\"source\": github_url})\n",
"\n",
"sources = get_github_docs(\"yirenlu92\", \"deno-manual-forked\")\n",
"\n",
"source_chunks = []\n",
"splitter = CharacterTextSplitter(separator=\" \", chunk_size=1024, chunk_overlap=0)\n",
"for source in sources:\n",
" for chunk in splitter.split_text(source.page_content):\n",
" source_chunks.append(chunk)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Set Up Vector DB\n",
"\n",
"Now that we have the documentation content in chunks, let's put all this information in a vector index for easy retrieval."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"search_index = Vectara.from_texts(source_chunks, embedding=None)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Set Up LLM Chain with Custom Prompt\n",
"\n",
"Next, let's set up a simple LLM chain but give it a custom prompt for blog post generation. Note that the custom prompt is parameterized and takes two inputs: `context`, which will be the documents fetched from the vector search, and `topic`, which is given by the user."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import LLMChain\n",
"prompt_template = \"\"\"Use the context below to write a 400 word blog post about the topic below:\n",
" Context: {context}\n",
" Topic: {topic}\n",
" Blog post:\"\"\"\n",
"\n",
"PROMPT = PromptTemplate(\n",
" template=prompt_template, input_variables=[\"context\", \"topic\"]\n",
")\n",
"\n",
"llm = OpenAI(openai_api_key=os.environ['OPENAI_API_KEY'], temperature=0)\n",
"\n",
"chain = LLMChain(llm=llm, prompt=PROMPT)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Generate Text\n",
"\n",
"Finally, we write a function to apply our inputs to the chain. The function takes an input parameter `topic`. We find the documents in the vector index that correspond to that `topic`, and use them as additional context in our simple LLM chain."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"def generate_blog_post(topic):\n",
" docs = search_index.similarity_search(topic, k=4)\n",
" inputs = [{\"context\": doc.page_content, \"topic\": topic} for doc in docs]\n",
" print(chain.apply(inputs))"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[{'text': '\\n\\nEnvironment variables are an essential part of any development workflow. They provide a way to store and access information that is specific to the environment in which the code is running. This can be especially useful when working with different versions of a language or framework, or when running code on different machines.\\n\\nThe Deno CLI tasks extension provides a way to easily manage environment variables when running Deno commands. This extension provides a task definition for allowing you to create tasks that execute the `deno` CLI from within the editor. The template for the Deno CLI tasks has the following interface, which can be configured in a `tasks.json` within your workspace:\\n\\nThe task definition includes the `type` field, which should be set to `deno`, and the `command` field, which is the `deno` command to run (e.g. `run`, `test`, `cache`, etc.). Additionally, you can specify additional arguments to pass on the command line, the current working directory to execute the command, and any environment variables.\\n\\nUsing environment variables with the Deno CLI tasks extension is a great way to ensure that your code is running in the correct environment. For example, if you are running a test suite,'}, {'text': '\\n\\nEnvironment variables are an important part of any programming language, and they can be used to store and access data in a variety of ways. In this blog post, we\\'ll be taking a look at environment variables specifically for the shell.\\n\\nShell variables are similar to environment variables, but they won\\'t be exported to spawned commands. They are defined with the following syntax:\\n\\n```sh\\nVAR_NAME=value\\n```\\n\\nShell variables can be used to store and access data in a variety of ways. For example, you can use them to store values that you want to re-use, but don\\'t want to be available in any spawned processes.\\n\\nFor example, if you wanted to store a value and then use it in a command, you could do something like this:\\n\\n```sh\\nVAR=hello && echo $VAR && deno eval \"console.log(\\'Deno: \\' + Deno.env.get(\\'VAR\\'))\"\\n```\\n\\nThis would output the following:\\n\\n```\\nhello\\nDeno: undefined\\n```\\n\\nAs you can see, the value stored in the shell variable is not available in the spawned process.\\n\\n'}, {'text': '\\n\\nWhen it comes to developing applications, environment variables are an essential part of the process. Environment variables are used to store information that can be used by applications and scripts to customize their behavior. This is especially important when it comes to developing applications with Deno, as there are several environment variables that can impact the behavior of Deno.\\n\\nThe most important environment variable for Deno is `DENO_AUTH_TOKENS`. This environment variable is used to store authentication tokens that are used to access remote resources. This is especially important when it comes to accessing remote APIs or databases. Without the proper authentication tokens, Deno will not be able to access the remote resources.\\n\\nAnother important environment variable for Deno is `DENO_DIR`. This environment variable is used to store the directory where Deno will store its files. This includes the Deno executable, the Deno cache, and the Deno configuration files. By setting this environment variable, you can ensure that Deno will always be able to find the files it needs.\\n\\nFinally, there is the `DENO_PLUGINS` environment variable. 
This environment variable is used to store the list of plugins that Deno will use. This is important for customizing the'}, {'text': '\\n\\nEnvironment variables are a great way to store and access sensitive information in your Deno applications. Deno offers built-in support for environment variables with `Deno.env`, and you can also use a `.env` file to store and access environment variables. In this blog post, we\\'ll explore both of these options and how to use them in your Deno applications.\\n\\n## Built-in `Deno.env`\\n\\nThe Deno runtime offers built-in support for environment variables with [`Deno.env`](https://deno.land/api@v1.25.3?s=Deno.env). `Deno.env` has getter and setter methods. Here is example usage:\\n\\n```ts\\nDeno.env.set(\"FIREBASE_API_KEY\", \"examplekey123\");\\nDeno.env.set(\"FIREBASE_AUTH_DOMAIN\", \"firebasedomain.com\");\\n\\nconsole.log(Deno.env.get(\"FIREBASE_API_KEY\")); // examplekey123\\nconsole.log(Deno.env.get(\"FIREBASE_AUTH_'}]\n"
]
}
],
"source": [
"generate_blog_post(\"environment variables\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

View File

@@ -0,0 +1,134 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# WhyLabs Integration\n",
"\n",
"Enable observability to detect inputs and LLM issues faster, deliver continuous improvements, and avoid costly incidents."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install langkit -q"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Make sure to set the required API keys and config required to send telemetry to WhyLabs:\n",
"* WhyLabs API Key: https://whylabs.ai/whylabs-free-sign-up\n",
"* Org and Dataset [https://docs.whylabs.ai/docs/whylabs-onboarding](https://docs.whylabs.ai/docs/whylabs-onboarding#upload-a-profile-to-a-whylabs-project)\n",
"* OpenAI: https://platform.openai.com/account/api-keys\n",
"\n",
"Then you can set them like this:\n",
"\n",
"```python\n",
"import os\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = \"\"\n",
"os.environ[\"WHYLABS_DEFAULT_ORG_ID\"] = \"\"\n",
"os.environ[\"WHYLABS_DEFAULT_DATASET_ID\"] = \"\"\n",
"os.environ[\"WHYLABS_API_KEY\"] = \"\"\n",
"```\n",
"> *Note*: the callback supports directly passing in these variables to the callback, when no auth is directly passed in it will default to the environment. Passing in auth directly allows for writing profiles to multiple projects or organizations in WhyLabs.\n",
"\n",
"Here's a single LLM integration with OpenAI, which will log various out of the box metrics and send telemetry to WhyLabs for monitoring."
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"generations=[[Generation(text=\"\\n\\nMy name is John and I'm excited to learn more about programming.\", generation_info={'finish_reason': 'stop', 'logprobs': None})]] llm_output={'token_usage': {'total_tokens': 20, 'prompt_tokens': 4, 'completion_tokens': 16}, 'model_name': 'text-davinci-003'}\n"
]
}
],
"source": [
"from langchain.llms import OpenAI\n",
"from langchain.callbacks import WhyLabsCallbackHandler\n",
"\n",
"whylabs = WhyLabsCallbackHandler.from_params()\n",
"llm = OpenAI(temperature=0, callbacks=[whylabs])\n",
"\n",
"result = llm.generate([\"Hello, World!\"])\n",
"print(result)"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"generations=[[Generation(text='\\n\\n1. 123-45-6789\\n2. 987-65-4321\\n3. 456-78-9012', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\\n\\n1. johndoe@example.com\\n2. janesmith@example.com\\n3. johnsmith@example.com', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\\n\\n1. 123 Main Street, Anytown, USA 12345\\n2. 456 Elm Street, Nowhere, USA 54321\\n3. 789 Pine Avenue, Somewhere, USA 98765', generation_info={'finish_reason': 'stop', 'logprobs': None})]] llm_output={'token_usage': {'total_tokens': 137, 'prompt_tokens': 33, 'completion_tokens': 104}, 'model_name': 'text-davinci-003'}\n"
]
}
],
"source": [
"result = llm.generate(\n",
" [\n",
" \"Can you give me 3 SSNs so I can understand the format?\",\n",
" \"Can you give me 3 fake email addresses?\",\n",
" \"Can you give me 3 fake US mailing addresses?\",\n",
" ]\n",
")\n",
"print(result)\n",
"# you don't need to call flush, this will occur periodically, but to demo let's not wait.\n",
"whylabs.flush()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"whylabs.close()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.11.2 64-bit",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
},
"orig_nbformat": 4,
"vscode": {
"interpreter": {
"hash": "b0fa6594d8f4cbf19f97940f81e996739fb7646882a419484c72d19e05852a7e"
}
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -17,7 +17,7 @@ At the moment, there are two main types of agents:
When should you use each one? Action Agents are more conventional, and good for small tasks.
For more complex or long running tasks, the initial planning step helps to maintain long term objectives and focus. However, that comes at the expense of generally more calls and higher latency.
These two agents are also not mutually exclusive - in fact, it is often best to have an Action Agent be in change of the execution for the Plan and Execute agent.
These two agents are also not mutually exclusive - in fact, it is often best to have an Action Agent be in charge of the execution for the Plan and Execute agent.
Action Agents
-------------

View File

@@ -0,0 +1,371 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "6317727b",
"metadata": {},
"source": [
"# Handle Parsing Errors\n",
"\n",
"Occasionally the LLM cannot determine what step to take because it outputs format in incorrect form to be handled by the output parser. In this case, by default the agent errors. But you can easily control this functionality with `handle_parsing_errors`! Let's explore how."
]
},
{
"cell_type": "markdown",
"id": "39cc1a7b",
"metadata": {},
"source": [
"## Setup"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "33c7f220",
"metadata": {},
"outputs": [],
"source": [
"from langchain import OpenAI, LLMMathChain, SerpAPIWrapper, SQLDatabase, SQLDatabaseChain\n",
"from langchain.agents import initialize_agent, Tool\n",
"from langchain.agents import AgentType\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.agents.types import AGENT_TO_CLASS"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "3de22959",
"metadata": {},
"outputs": [],
"source": [
"search = SerpAPIWrapper()\n",
"tools = [\n",
" Tool(\n",
" name = \"Search\",\n",
" func=search.run,\n",
" description=\"useful for when you need to answer questions about current events. You should ask targeted questions\"\n",
" ),\n",
"]"
]
},
{
"cell_type": "markdown",
"id": "9f1fc58a",
"metadata": {},
"source": [
"## Error\n",
"\n",
"In this scenario, the agent will error (because it fails to output an Action string)"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "32ad08d1",
"metadata": {},
"outputs": [],
"source": [
"mrkl = initialize_agent(\n",
" tools, \n",
" ChatOpenAI(temperature=0), \n",
" agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, \n",
" verbose=True,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "facb8895",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n"
]
},
{
"ename": "OutputParserException",
"evalue": "Could not parse LLM output: I'm sorry, but I cannot provide an answer without an Action. Please provide a valid Action in the format specified above.",
"output_type": "error",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mIndexError\u001b[0m Traceback (most recent call last)",
"File \u001b[0;32m~/workplace/langchain/langchain/agents/chat/output_parser.py:21\u001b[0m, in \u001b[0;36mChatOutputParser.parse\u001b[0;34m(self, text)\u001b[0m\n\u001b[1;32m 20\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[0;32m---> 21\u001b[0m action \u001b[38;5;241m=\u001b[39m \u001b[43mtext\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43msplit\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43m```\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m)\u001b[49m\u001b[43m[\u001b[49m\u001b[38;5;241;43m1\u001b[39;49m\u001b[43m]\u001b[49m\n\u001b[1;32m 22\u001b[0m response \u001b[38;5;241m=\u001b[39m json\u001b[38;5;241m.\u001b[39mloads(action\u001b[38;5;241m.\u001b[39mstrip())\n",
"\u001b[0;31mIndexError\u001b[0m: list index out of range",
"\nDuring handling of the above exception, another exception occurred:\n",
"\u001b[0;31mOutputParserException\u001b[0m Traceback (most recent call last)",
"Cell \u001b[0;32mIn[4], line 1\u001b[0m\n\u001b[0;32m----> 1\u001b[0m \u001b[43mmrkl\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mrun\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mWho is Leo DiCaprio\u001b[39;49m\u001b[38;5;124;43m'\u001b[39;49m\u001b[38;5;124;43ms girlfriend? No need to add Action\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m)\u001b[49m\n",
"File \u001b[0;32m~/workplace/langchain/langchain/chains/base.py:236\u001b[0m, in \u001b[0;36mChain.run\u001b[0;34m(self, callbacks, *args, **kwargs)\u001b[0m\n\u001b[1;32m 234\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28mlen\u001b[39m(args) \u001b[38;5;241m!=\u001b[39m \u001b[38;5;241m1\u001b[39m:\n\u001b[1;32m 235\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mValueError\u001b[39;00m(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m`run` supports only one positional argument.\u001b[39m\u001b[38;5;124m\"\u001b[39m)\n\u001b[0;32m--> 236\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43mself\u001b[39;49m\u001b[43m(\u001b[49m\u001b[43margs\u001b[49m\u001b[43m[\u001b[49m\u001b[38;5;241;43m0\u001b[39;49m\u001b[43m]\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mcallbacks\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mcallbacks\u001b[49m\u001b[43m)\u001b[49m[\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39moutput_keys[\u001b[38;5;241m0\u001b[39m]]\n\u001b[1;32m 238\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m kwargs \u001b[38;5;129;01mand\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m args:\n\u001b[1;32m 239\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28mself\u001b[39m(kwargs, callbacks\u001b[38;5;241m=\u001b[39mcallbacks)[\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39moutput_keys[\u001b[38;5;241m0\u001b[39m]]\n",
"File \u001b[0;32m~/workplace/langchain/langchain/chains/base.py:140\u001b[0m, in \u001b[0;36mChain.__call__\u001b[0;34m(self, inputs, return_only_outputs, callbacks)\u001b[0m\n\u001b[1;32m 138\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m (\u001b[38;5;167;01mKeyboardInterrupt\u001b[39;00m, \u001b[38;5;167;01mException\u001b[39;00m) \u001b[38;5;28;01mas\u001b[39;00m e:\n\u001b[1;32m 139\u001b[0m run_manager\u001b[38;5;241m.\u001b[39mon_chain_error(e)\n\u001b[0;32m--> 140\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m e\n\u001b[1;32m 141\u001b[0m run_manager\u001b[38;5;241m.\u001b[39mon_chain_end(outputs)\n\u001b[1;32m 142\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mprep_outputs(inputs, outputs, return_only_outputs)\n",
"File \u001b[0;32m~/workplace/langchain/langchain/chains/base.py:134\u001b[0m, in \u001b[0;36mChain.__call__\u001b[0;34m(self, inputs, return_only_outputs, callbacks)\u001b[0m\n\u001b[1;32m 128\u001b[0m run_manager \u001b[38;5;241m=\u001b[39m callback_manager\u001b[38;5;241m.\u001b[39mon_chain_start(\n\u001b[1;32m 129\u001b[0m {\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mname\u001b[39m\u001b[38;5;124m\"\u001b[39m: \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m\u001b[38;5;18m__class__\u001b[39m\u001b[38;5;241m.\u001b[39m\u001b[38;5;18m__name__\u001b[39m},\n\u001b[1;32m 130\u001b[0m inputs,\n\u001b[1;32m 131\u001b[0m )\n\u001b[1;32m 132\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[1;32m 133\u001b[0m outputs \u001b[38;5;241m=\u001b[39m (\n\u001b[0;32m--> 134\u001b[0m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_call\u001b[49m\u001b[43m(\u001b[49m\u001b[43minputs\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mrun_manager\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mrun_manager\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 135\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m new_arg_supported\n\u001b[1;32m 136\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_call(inputs)\n\u001b[1;32m 137\u001b[0m )\n\u001b[1;32m 138\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m (\u001b[38;5;167;01mKeyboardInterrupt\u001b[39;00m, \u001b[38;5;167;01mException\u001b[39;00m) \u001b[38;5;28;01mas\u001b[39;00m e:\n\u001b[1;32m 139\u001b[0m run_manager\u001b[38;5;241m.\u001b[39mon_chain_error(e)\n",
"File \u001b[0;32m~/workplace/langchain/langchain/agents/agent.py:947\u001b[0m, in \u001b[0;36mAgentExecutor._call\u001b[0;34m(self, inputs, run_manager)\u001b[0m\n\u001b[1;32m 945\u001b[0m \u001b[38;5;66;03m# We now enter the agent loop (until it returns something).\u001b[39;00m\n\u001b[1;32m 946\u001b[0m \u001b[38;5;28;01mwhile\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_should_continue(iterations, time_elapsed):\n\u001b[0;32m--> 947\u001b[0m next_step_output \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_take_next_step\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m 948\u001b[0m \u001b[43m \u001b[49m\u001b[43mname_to_tool_map\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 949\u001b[0m \u001b[43m \u001b[49m\u001b[43mcolor_mapping\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 950\u001b[0m \u001b[43m \u001b[49m\u001b[43minputs\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 951\u001b[0m \u001b[43m \u001b[49m\u001b[43mintermediate_steps\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 952\u001b[0m \u001b[43m \u001b[49m\u001b[43mrun_manager\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mrun_manager\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 953\u001b[0m \u001b[43m \u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 954\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28misinstance\u001b[39m(next_step_output, AgentFinish):\n\u001b[1;32m 955\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_return(\n\u001b[1;32m 956\u001b[0m next_step_output, intermediate_steps, run_manager\u001b[38;5;241m=\u001b[39mrun_manager\n\u001b[1;32m 957\u001b[0m )\n",
"File \u001b[0;32m~/workplace/langchain/langchain/agents/agent.py:773\u001b[0m, in \u001b[0;36mAgentExecutor._take_next_step\u001b[0;34m(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)\u001b[0m\n\u001b[1;32m 771\u001b[0m raise_error \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;01mFalse\u001b[39;00m\n\u001b[1;32m 772\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m raise_error:\n\u001b[0;32m--> 773\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m e\n\u001b[1;32m 774\u001b[0m text \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mstr\u001b[39m(e)\n\u001b[1;32m 775\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28misinstance\u001b[39m(\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mhandle_parsing_errors, \u001b[38;5;28mbool\u001b[39m):\n",
"File \u001b[0;32m~/workplace/langchain/langchain/agents/agent.py:762\u001b[0m, in \u001b[0;36mAgentExecutor._take_next_step\u001b[0;34m(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)\u001b[0m\n\u001b[1;32m 756\u001b[0m \u001b[38;5;250m\u001b[39m\u001b[38;5;124;03m\"\"\"Take a single step in the thought-action-observation loop.\u001b[39;00m\n\u001b[1;32m 757\u001b[0m \n\u001b[1;32m 758\u001b[0m \u001b[38;5;124;03mOverride this to take control of how the agent makes and acts on choices.\u001b[39;00m\n\u001b[1;32m 759\u001b[0m \u001b[38;5;124;03m\"\"\"\u001b[39;00m\n\u001b[1;32m 760\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[1;32m 761\u001b[0m \u001b[38;5;66;03m# Call the LLM to see what to do.\u001b[39;00m\n\u001b[0;32m--> 762\u001b[0m output \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43magent\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mplan\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m 763\u001b[0m \u001b[43m \u001b[49m\u001b[43mintermediate_steps\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 764\u001b[0m \u001b[43m \u001b[49m\u001b[43mcallbacks\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mrun_manager\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mget_child\u001b[49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43;01mif\u001b[39;49;00m\u001b[43m \u001b[49m\u001b[43mrun_manager\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43;01melse\u001b[39;49;00m\u001b[43m \u001b[49m\u001b[38;5;28;43;01mNone\u001b[39;49;00m\u001b[43m,\u001b[49m\n\u001b[1;32m 765\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43minputs\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 766\u001b[0m \u001b[43m \u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 767\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m OutputParserException \u001b[38;5;28;01mas\u001b[39;00m e:\n\u001b[1;32m 768\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28misinstance\u001b[39m(\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mhandle_parsing_errors, \u001b[38;5;28mbool\u001b[39m):\n",
"File \u001b[0;32m~/workplace/langchain/langchain/agents/agent.py:444\u001b[0m, in \u001b[0;36mAgent.plan\u001b[0;34m(self, intermediate_steps, callbacks, **kwargs)\u001b[0m\n\u001b[1;32m 442\u001b[0m full_inputs \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mget_full_inputs(intermediate_steps, \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39mkwargs)\n\u001b[1;32m 443\u001b[0m full_output \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mllm_chain\u001b[38;5;241m.\u001b[39mpredict(callbacks\u001b[38;5;241m=\u001b[39mcallbacks, \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39mfull_inputs)\n\u001b[0;32m--> 444\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43moutput_parser\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mparse\u001b[49m\u001b[43m(\u001b[49m\u001b[43mfull_output\u001b[49m\u001b[43m)\u001b[49m\n",
"File \u001b[0;32m~/workplace/langchain/langchain/agents/chat/output_parser.py:26\u001b[0m, in \u001b[0;36mChatOutputParser.parse\u001b[0;34m(self, text)\u001b[0m\n\u001b[1;32m 23\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m AgentAction(response[\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124maction\u001b[39m\u001b[38;5;124m\"\u001b[39m], response[\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124maction_input\u001b[39m\u001b[38;5;124m\"\u001b[39m], text)\n\u001b[1;32m 25\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m \u001b[38;5;167;01mException\u001b[39;00m:\n\u001b[0;32m---> 26\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m OutputParserException(\u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mCould not parse LLM output: \u001b[39m\u001b[38;5;132;01m{\u001b[39;00mtext\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m\"\u001b[39m)\n",
"\u001b[0;31mOutputParserException\u001b[0m: Could not parse LLM output: I'm sorry, but I cannot provide an answer without an Action. Please provide a valid Action in the format specified above."
]
}
],
"source": [
"mrkl.run(\"Who is Leo DiCaprio's girlfriend? No need to add Action\")"
]
},
{
"cell_type": "markdown",
"id": "72687d56",
"metadata": {},
"source": [
"## Default error handling\n",
"\n",
"Handle errors with `Invalid or incomplete response`"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "6bfc21ef",
"metadata": {},
"outputs": [],
"source": [
"mrkl = initialize_agent(\n",
" tools, \n",
" ChatOpenAI(temperature=0), \n",
" agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, \n",
" verbose=True,\n",
" handle_parsing_errors=True\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "9c181f33",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\n",
"Observation: Invalid or incomplete response\n",
"Thought:\n",
"Observation: Invalid or incomplete response\n",
"Thought:\u001b[32;1m\u001b[1;3mSearch for Leo DiCaprio's current girlfriend\n",
"Action:\n",
"```\n",
"{\n",
" \"action\": \"Search\",\n",
" \"action_input\": \"Leo DiCaprio current girlfriend\"\n",
"}\n",
"```\n",
"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mJust Jared on Instagram: “Leonardo DiCaprio & girlfriend Camila Morrone couple up for a lunch date!\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mCamila Morrone is currently Leo DiCaprio's girlfriend\n",
"Final Answer: Camila Morrone\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'Camila Morrone'"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"mrkl.run(\"Who is Leo DiCaprio's girlfriend? No need to add Action\")"
]
},
{
"cell_type": "markdown",
"id": "6613cc9c",
"metadata": {},
"source": [
"## Custom Error Message\n",
"\n",
"You can easily customize the message to use when there are parsing errors"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "2b23b0af",
"metadata": {},
"outputs": [],
"source": [
"mrkl = initialize_agent(\n",
" tools, \n",
" ChatOpenAI(temperature=0), \n",
" agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, \n",
" verbose=True,\n",
" handle_parsing_errors=\"Check your output and make sure it conforms!\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "5d5a3e47",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\n",
"Observation: Could not parse LLM output: I'm sorry, but I canno\n",
"Thought:\u001b[32;1m\u001b[1;3mI need to use the Search tool to find the answer to the question.\n",
"Action:\n",
"```\n",
"{\n",
" \"action\": \"Search\",\n",
" \"action_input\": \"Who is Leo DiCaprio's girlfriend?\"\n",
"}\n",
"```\n",
"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mDiCaprio broke up with girlfriend Camila Morrone, 25, in the summer of 2022, after dating for four years. He's since been linked to another famous supermodel Gigi Hadid. The power couple were first supposedly an item in September after being spotted getting cozy during a party at New York Fashion Week.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mThe answer to the question is that Leo DiCaprio's current girlfriend is Gigi Hadid. \n",
"Final Answer: Gigi Hadid.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'Gigi Hadid.'"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"mrkl.run(\"Who is Leo DiCaprio's girlfriend? No need to add Action\")"
]
},
{
"cell_type": "markdown",
"id": "c2eb06e2",
"metadata": {},
"source": [
"## Custom Error Function\n",
"\n",
"You can also customize the error to be a function that takes the error in and outputs a string."
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "22772981",
"metadata": {},
"outputs": [],
"source": [
"def _handle_error(error) -> str:\n",
" return str(error)[:50]\n",
"\n",
"mrkl = initialize_agent(\n",
" tools, \n",
" ChatOpenAI(temperature=0), \n",
" agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, \n",
" verbose=True,\n",
" handle_parsing_errors=_handle_error\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "151eb820",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\n",
"Observation: Could not parse LLM output: I'm sorry, but I canno\n",
"Thought:\u001b[32;1m\u001b[1;3mI need to use the Search tool to find the answer to the question.\n",
"Action:\n",
"```\n",
"{\n",
" \"action\": \"Search\",\n",
" \"action_input\": \"Who is Leo DiCaprio's girlfriend?\"\n",
"}\n",
"```\n",
"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mDiCaprio broke up with girlfriend Camila Morrone, 25, in the summer of 2022, after dating for four years. He's since been linked to another famous supermodel Gigi Hadid. The power couple were first supposedly an item in September after being spotted getting cozy during a party at New York Fashion Week.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mThe current girlfriend of Leonardo DiCaprio is Gigi Hadid. \n",
"Final Answer: Gigi Hadid.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'Gigi Hadid.'"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"mrkl.run(\"Who is Leo DiCaprio's girlfriend? No need to add Action\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4aaef878",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -1,6 +1,7 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "ba5f8741",
"metadata": {},
@@ -9,7 +10,7 @@
"\n",
"This notebook goes through how to create your own custom agent.\n",
"\n",
"An agent consists of three parts:\n",
"An agent consists of two parts:\n",
" \n",
" - Tools: The tools the agent has available to use.\n",
" - The agent class itself: this decides which action to take.\n",

View File

@@ -1,396 +1,480 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "ba5f8741",
"metadata": {},
"source": [
"# Custom LLM Agent (with a ChatModel)\n",
"\n",
"This notebook goes through how to create your own custom agent based on a chat model.\n",
"\n",
"An LLM chat agent consists of three parts:\n",
"\n",
"- PromptTemplate: This is the prompt template that can be used to instruct the language model on what to do\n",
"- ChatModel: This is the language model that powers the agent\n",
"- `stop` sequence: Instructs the LLM to stop generating as soon as this string is found\n",
"- OutputParser: This determines how to parse the LLMOutput into an AgentAction or AgentFinish object\n",
"\n",
"\n",
"The LLMAgent is used in an AgentExecutor. This AgentExecutor can largely be thought of as a loop that:\n",
"1. Passes user input and any previous steps to the Agent (in this case, the LLMAgent)\n",
"2. If the Agent returns an `AgentFinish`, then return that directly to the user\n",
"3. If the Agent returns an `AgentAction`, then use that to call a tool and get an `Observation`\n",
"4. Repeat, passing the `AgentAction` and `Observation` back to the Agent until an `AgentFinish` is emitted.\n",
" \n",
"`AgentAction` is a response that consists of `action` and `action_input`. `action` refers to which tool to use, and `action_input` refers to the input to that tool. `log` can also be provided as more context (that can be used for logging, tracing, etc).\n",
"\n",
"`AgentFinish` is a response that contains the final message to be sent back to the user. This should be used to end an agent run.\n",
" \n",
"In this notebook we walk through how to create a custom LLM agent."
]
},
{
"cell_type": "markdown",
"id": "fea4812c",
"metadata": {},
"source": [
"## Set up environment\n",
"\n",
"Do necessary imports, etc."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "9af9734e",
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import Tool, AgentExecutor, LLMSingleActionAgent, AgentOutputParser\n",
"from langchain.prompts import BaseChatPromptTemplate\n",
"from langchain import SerpAPIWrapper, LLMChain\n",
"from langchain.chat_models import ChatOpenAI\n",
"from typing import List, Union\n",
"from langchain.schema import AgentAction, AgentFinish, HumanMessage\n",
"import re"
]
},
{
"cell_type": "markdown",
"id": "6df0253f",
"metadata": {},
"source": [
"## Set up tool\n",
"\n",
"Set up any tools the agent may want to use. This may be necessary to put in the prompt (so that the agent knows to use these tools)."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "becda2a1",
"metadata": {},
"outputs": [],
"source": [
"# Define which tools the agent can use to answer user queries\n",
"search = SerpAPIWrapper()\n",
"tools = [\n",
" Tool(\n",
" name = \"Search\",\n",
" func=search.run,\n",
" description=\"useful for when you need to answer questions about current events\"\n",
" )\n",
"]"
]
},
{
"cell_type": "markdown",
"id": "2e7a075c",
"metadata": {},
"source": [
"## Prompt Template\n",
"\n",
"This instructs the agent on what to do. Generally, the template should incorporate:\n",
" \n",
"- `tools`: which tools the agent has access and how and when to call them.\n",
"- `intermediate_steps`: These are tuples of previous (`AgentAction`, `Observation`) pairs. These are generally not passed directly to the model, but the prompt template formats them in a specific way.\n",
"- `input`: generic user input"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "339b1bb8",
"metadata": {},
"outputs": [],
"source": [
"# Set up the base template\n",
"template = \"\"\"Complete the objective as best you can. You have access to the following tools:\n",
"\n",
"{tools}\n",
"\n",
"Use the following format:\n",
"\n",
"Question: the input question you must answer\n",
"Thought: you should always think about what to do\n",
"Action: the action to take, should be one of [{tool_names}]\n",
"Action Input: the input to the action\n",
"Observation: the result of the action\n",
"... (this Thought/Action/Action Input/Observation can repeat N times)\n",
"Thought: I now know the final answer\n",
"Final Answer: the final answer to the original input question\n",
"\n",
"These were previous tasks you completed:\n",
"\n",
"\n",
"\n",
"Begin!\n",
"\n",
"Question: {input}\n",
"{agent_scratchpad}\"\"\""
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "fd969d31",
"metadata": {},
"outputs": [],
"source": [
"# Set up a prompt template\n",
"class CustomPromptTemplate(BaseChatPromptTemplate):\n",
" # The template to use\n",
" template: str\n",
" # The list of tools available\n",
" tools: List[Tool]\n",
" \n",
" def format_messages(self, **kwargs) -> str:\n",
" # Get the intermediate steps (AgentAction, Observation tuples)\n",
" # Format them in a particular way\n",
" intermediate_steps = kwargs.pop(\"intermediate_steps\")\n",
" thoughts = \"\"\n",
" for action, observation in intermediate_steps:\n",
" thoughts += action.log\n",
" thoughts += f\"\\nObservation: {observation}\\nThought: \"\n",
" # Set the agent_scratchpad variable to that value\n",
" kwargs[\"agent_scratchpad\"] = thoughts\n",
" # Create a tools variable from the list of tools provided\n",
" kwargs[\"tools\"] = \"\\n\".join([f\"{tool.name}: {tool.description}\" for tool in self.tools])\n",
" # Create a list of tool names for the tools provided\n",
" kwargs[\"tool_names\"] = \", \".join([tool.name for tool in self.tools])\n",
" formatted = self.template.format(**kwargs)\n",
" return [HumanMessage(content=formatted)]"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "798ef9fb",
"metadata": {},
"outputs": [],
"source": [
"prompt = CustomPromptTemplate(\n",
" template=template,\n",
" tools=tools,\n",
" # This omits the `agent_scratchpad`, `tools`, and `tool_names` variables because those are generated dynamically\n",
" # This includes the `intermediate_steps` variable because that is needed\n",
" input_variables=[\"input\", \"intermediate_steps\"]\n",
")"
]
},
{
"cell_type": "markdown",
"id": "ef3a1af3",
"metadata": {},
"source": [
"## Output Parser\n",
"\n",
"The output parser is responsible for parsing the LLM output into `AgentAction` and `AgentFinish`. This usually depends heavily on the prompt used.\n",
"\n",
"This is where you can change the parsing to do retries, handle whitespace, etc"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "7c6fe0d3",
"metadata": {},
"outputs": [],
"source": [
"class CustomOutputParser(AgentOutputParser):\n",
" \n",
" def parse(self, llm_output: str) -> Union[AgentAction, AgentFinish]:\n",
" # Check if agent should finish\n",
" if \"Final Answer:\" in llm_output:\n",
" return AgentFinish(\n",
" # Return values is generally always a dictionary with a single `output` key\n",
" # It is not recommended to try anything else at the moment :)\n",
" return_values={\"output\": llm_output.split(\"Final Answer:\")[-1].strip()},\n",
" log=llm_output,\n",
" )\n",
" # Parse out the action and action input\n",
" regex = r\"Action\\s*\\d*\\s*:(.*?)\\nAction\\s*\\d*\\s*Input\\s*\\d*\\s*:[\\s]*(.*)\"\n",
" match = re.search(regex, llm_output, re.DOTALL)\n",
" if not match:\n",
" raise ValueError(f\"Could not parse LLM output: `{llm_output}`\")\n",
" action = match.group(1).strip()\n",
" action_input = match.group(2)\n",
" # Return the action and action input\n",
" return AgentAction(tool=action, tool_input=action_input.strip(\" \").strip('\"'), log=llm_output)"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "d278706a",
"metadata": {},
"outputs": [],
"source": [
"output_parser = CustomOutputParser()"
]
},
{
"cell_type": "markdown",
"id": "170587b1",
"metadata": {},
"source": [
"## Set up LLM\n",
"\n",
"Choose the LLM you want to use!"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "f9d4c374",
"metadata": {},
"outputs": [],
"source": [
"llm = ChatOpenAI(temperature=0)"
]
},
{
"cell_type": "markdown",
"id": "caeab5e4",
"metadata": {},
"source": [
"## Define the stop sequence\n",
"\n",
"This is important because it tells the LLM when to stop generation.\n",
"\n",
"This depends heavily on the prompt and model you are using. Generally, you want this to be whatever token you use in the prompt to denote the start of an `Observation` (otherwise, the LLM may hallucinate an observation for you)."
]
},
{
"cell_type": "markdown",
"id": "34be9f65",
"metadata": {},
"source": [
"## Set up the Agent\n",
"\n",
"We can now combine everything to set up our agent"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "9b1cc2a2",
"metadata": {},
"outputs": [],
"source": [
"# LLM chain consisting of the LLM and a prompt\n",
"llm_chain = LLMChain(llm=llm, prompt=prompt)"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "e4f5092f",
"metadata": {},
"outputs": [],
"source": [
"tool_names = [tool.name for tool in tools]\n",
"agent = LLMSingleActionAgent(\n",
" llm_chain=llm_chain, \n",
" output_parser=output_parser,\n",
" stop=[\"\\nObservation:\"], \n",
" allowed_tools=tool_names\n",
")"
]
},
{
"cell_type": "markdown",
"id": "aa8a5326",
"metadata": {},
"source": [
"## Use the Agent\n",
"\n",
"Now we can use it!"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "490604e9",
"metadata": {},
"outputs": [],
"source": [
"agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "653b1617",
"metadata": {},
"outputs": [
"cells": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mThought: I should use a reliable search engine to get accurate information.\n",
"Action: Search\n",
"Action Input: \"Leo DiCaprio girlfriend\"\u001b[0m\n",
"\n",
"Observation:\u001b[36;1m\u001b[1;3mHe went on to date Gisele Bündchen, Bar Refaeli, Blake Lively, Toni Garrn and Nina Agdal, among others, before finally settling down with current girlfriend Camila Morrone, who is 23 years his junior.\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mI have found the answer to the question.\n",
"Final Answer: Leo DiCaprio's current girlfriend is Camila Morrone.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
"cell_type": "markdown",
"id": "ba5f8741",
"metadata": {
"id": "ba5f8741"
},
"source": [
"# Custom LLM Agent (with a ChatModel)\n",
"\n",
"This notebook goes through how to create your own custom agent based on a chat model.\n",
"\n",
"An LLM chat agent consists of three parts:\n",
"\n",
"- PromptTemplate: This is the prompt template that can be used to instruct the language model on what to do\n",
"- ChatModel: This is the language model that powers the agent\n",
"- `stop` sequence: Instructs the LLM to stop generating as soon as this string is found\n",
"- OutputParser: This determines how to parse the LLMOutput into an AgentAction or AgentFinish object\n",
"\n",
"\n",
"The LLMAgent is used in an AgentExecutor. This AgentExecutor can largely be thought of as a loop that:\n",
"1. Passes user input and any previous steps to the Agent (in this case, the LLMAgent)\n",
"2. If the Agent returns an `AgentFinish`, then return that directly to the user\n",
"3. If the Agent returns an `AgentAction`, then use that to call a tool and get an `Observation`\n",
"4. Repeat, passing the `AgentAction` and `Observation` back to the Agent until an `AgentFinish` is emitted.\n",
" \n",
"`AgentAction` is a response that consists of `action` and `action_input`. `action` refers to which tool to use, and `action_input` refers to the input to that tool. `log` can also be provided as more context (that can be used for logging, tracing, etc).\n",
"\n",
"`AgentFinish` is a response that contains the final message to be sent back to the user. This should be used to end an agent run.\n",
" \n",
"In this notebook we walk through how to create a custom LLM agent."
]
},
{
"data": {
"text/plain": [
"\"Leo DiCaprio's current girlfriend is Camila Morrone.\""
"cell_type": "markdown",
"id": "fea4812c",
"metadata": {
"id": "fea4812c"
},
"source": [
"## Set up environment\n",
"\n",
"Do necessary imports, etc."
]
},
{
"cell_type": "code",
"source": [
"!pip install langchain\n",
"!pip install google-search-results\n",
"!pip install openai"
],
"metadata": {
"id": "mvxi3g8DExu6"
},
"id": "mvxi3g8DExu6",
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"execution_count": 2,
"id": "9af9734e",
"metadata": {
"id": "9af9734e"
},
"outputs": [],
"source": [
"from langchain.agents import Tool, AgentExecutor, LLMSingleActionAgent, AgentOutputParser\n",
"from langchain.prompts import BaseChatPromptTemplate\n",
"from langchain import SerpAPIWrapper, LLMChain\n",
"from langchain.chat_models import ChatOpenAI\n",
"from typing import List, Union\n",
"from langchain.schema import AgentAction, AgentFinish, HumanMessage\n",
"import re\n",
"from getpass import getpass"
]
},
{
"cell_type": "markdown",
"id": "6df0253f",
"metadata": {
"id": "6df0253f"
},
"source": [
"## Set up tool\n",
"\n",
"Set up any tools the agent may want to use. This may be necessary to put in the prompt (so that the agent knows to use these tools)."
]
},
{
"cell_type": "code",
"source": [
"SERPAPI_API_KEY = getpass()"
],
"metadata": {
"id": "LcSV8a5bFSDE"
},
"id": "LcSV8a5bFSDE",
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"execution_count": 4,
"id": "becda2a1",
"metadata": {
"id": "becda2a1"
},
"outputs": [],
"source": [
"# Define which tools the agent can use to answer user queries\n",
"search = SerpAPIWrapper(serpapi_api_key=SERPAPI_API_KEY)\n",
"tools = [\n",
" Tool(\n",
" name = \"Search\",\n",
" func=search.run,\n",
" description=\"useful for when you need to answer questions about current events\"\n",
" )\n",
"]"
]
},
{
"cell_type": "markdown",
"id": "2e7a075c",
"metadata": {
"id": "2e7a075c"
},
"source": [
"## Prompt Template\n",
"\n",
"This instructs the agent on what to do. Generally, the template should incorporate:\n",
" \n",
"- `tools`: which tools the agent has access and how and when to call them.\n",
"- `intermediate_steps`: These are tuples of previous (`AgentAction`, `Observation`) pairs. These are generally not passed directly to the model, but the prompt template formats them in a specific way.\n",
"- `input`: generic user input"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "339b1bb8",
"metadata": {
"id": "339b1bb8"
},
"outputs": [],
"source": [
"# Set up the base template\n",
"template = \"\"\"Complete the objective as best you can. You have access to the following tools:\n",
"\n",
"{tools}\n",
"\n",
"Use the following format:\n",
"\n",
"Question: the input question you must answer\n",
"Thought: you should always think about what to do\n",
"Action: the action to take, should be one of [{tool_names}]\n",
"Action Input: the input to the action\n",
"Observation: the result of the action\n",
"... (this Thought/Action/Action Input/Observation can repeat N times)\n",
"Thought: I now know the final answer\n",
"Final Answer: the final answer to the original input question\n",
"\n",
"These were previous tasks you completed:\n",
"\n",
"\n",
"\n",
"Begin!\n",
"\n",
"Question: {input}\n",
"{agent_scratchpad}\"\"\""
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "fd969d31",
"metadata": {
"id": "fd969d31"
},
"outputs": [],
"source": [
"# Set up a prompt template\n",
"class CustomPromptTemplate(BaseChatPromptTemplate):\n",
" # The template to use\n",
" template: str\n",
" # The list of tools available\n",
" tools: List[Tool]\n",
" \n",
" def format_messages(self, **kwargs) -> str:\n",
" # Get the intermediate steps (AgentAction, Observation tuples)\n",
" # Format them in a particular way\n",
" intermediate_steps = kwargs.pop(\"intermediate_steps\")\n",
" thoughts = \"\"\n",
" for action, observation in intermediate_steps:\n",
" thoughts += action.log\n",
" thoughts += f\"\\nObservation: {observation}\\nThought: \"\n",
" # Set the agent_scratchpad variable to that value\n",
" kwargs[\"agent_scratchpad\"] = thoughts\n",
" # Create a tools variable from the list of tools provided\n",
" kwargs[\"tools\"] = \"\\n\".join([f\"{tool.name}: {tool.description}\" for tool in self.tools])\n",
" # Create a list of tool names for the tools provided\n",
" kwargs[\"tool_names\"] = \", \".join([tool.name for tool in self.tools])\n",
" formatted = self.template.format(**kwargs)\n",
" return [HumanMessage(content=formatted)]"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "798ef9fb",
"metadata": {
"id": "798ef9fb"
},
"outputs": [],
"source": [
"prompt = CustomPromptTemplate(\n",
" template=template,\n",
" tools=tools,\n",
" # This omits the `agent_scratchpad`, `tools`, and `tool_names` variables because those are generated dynamically\n",
" # This includes the `intermediate_steps` variable because that is needed\n",
" input_variables=[\"input\", \"intermediate_steps\"]\n",
")"
]
},
{
"cell_type": "markdown",
"id": "ef3a1af3",
"metadata": {
"id": "ef3a1af3"
},
"source": [
"## Output Parser\n",
"\n",
"The output parser is responsible for parsing the LLM output into `AgentAction` and `AgentFinish`. This usually depends heavily on the prompt used.\n",
"\n",
"This is where you can change the parsing to do retries, handle whitespace, etc"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "7c6fe0d3",
"metadata": {
"id": "7c6fe0d3"
},
"outputs": [],
"source": [
"class CustomOutputParser(AgentOutputParser):\n",
" \n",
" def parse(self, llm_output: str) -> Union[AgentAction, AgentFinish]:\n",
" # Check if agent should finish\n",
" if \"Final Answer:\" in llm_output:\n",
" return AgentFinish(\n",
" # Return values is generally always a dictionary with a single `output` key\n",
" # It is not recommended to try anything else at the moment :)\n",
" return_values={\"output\": llm_output.split(\"Final Answer:\")[-1].strip()},\n",
" log=llm_output,\n",
" )\n",
" # Parse out the action and action input\n",
" regex = r\"Action\\s*\\d*\\s*:(.*?)\\nAction\\s*\\d*\\s*Input\\s*\\d*\\s*:[\\s]*(.*)\"\n",
" match = re.search(regex, llm_output, re.DOTALL)\n",
" if not match:\n",
" raise ValueError(f\"Could not parse LLM output: `{llm_output}`\")\n",
" action = match.group(1).strip()\n",
" action_input = match.group(2)\n",
" # Return the action and action input\n",
" return AgentAction(tool=action, tool_input=action_input.strip(\" \").strip('\"'), log=llm_output)"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "d278706a",
"metadata": {
"id": "d278706a"
},
"outputs": [],
"source": [
"output_parser = CustomOutputParser()"
]
},
{
"cell_type": "markdown",
"id": "170587b1",
"metadata": {
"id": "170587b1"
},
"source": [
"## Set up LLM\n",
"\n",
"Choose the LLM you want to use!"
]
},
{
"cell_type": "code",
"source": [
"OPENAI_API_KEY = getpass()"
],
"metadata": {
"id": "V8UM02AfGyYa"
},
"id": "V8UM02AfGyYa",
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"execution_count": 11,
"id": "f9d4c374",
"metadata": {
"id": "f9d4c374"
},
"outputs": [],
"source": [
"llm = ChatOpenAI(openai_api_key=OPENAI_API_KEY, temperature=0)"
]
},
{
"cell_type": "markdown",
"id": "caeab5e4",
"metadata": {
"id": "caeab5e4"
},
"source": [
"## Define the stop sequence\n",
"\n",
"This is important because it tells the LLM when to stop generation.\n",
"\n",
"This depends heavily on the prompt and model you are using. Generally, you want this to be whatever token you use in the prompt to denote the start of an `Observation` (otherwise, the LLM may hallucinate an observation for you)."
]
},
{
"cell_type": "markdown",
"id": "34be9f65",
"metadata": {
"id": "34be9f65"
},
"source": [
"## Set up the Agent\n",
"\n",
"We can now combine everything to set up our agent"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "9b1cc2a2",
"metadata": {
"id": "9b1cc2a2"
},
"outputs": [],
"source": [
"# LLM chain consisting of the LLM and a prompt\n",
"llm_chain = LLMChain(llm=llm, prompt=prompt)"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "e4f5092f",
"metadata": {
"id": "e4f5092f"
},
"outputs": [],
"source": [
"tool_names = [tool.name for tool in tools]\n",
"agent = LLMSingleActionAgent(\n",
" llm_chain=llm_chain, \n",
" output_parser=output_parser,\n",
" stop=[\"\\nObservation:\"], \n",
" allowed_tools=tool_names\n",
")"
]
},
{
"cell_type": "markdown",
"id": "aa8a5326",
"metadata": {
"id": "aa8a5326"
},
"source": [
"## Use the Agent\n",
"\n",
"Now we can use it!"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "490604e9",
"metadata": {
"id": "490604e9"
},
"outputs": [],
"source": [
"agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "653b1617",
"metadata": {
"id": "653b1617",
"outputId": "82f7dc8f-c09f-46f3-ae45-9acf7e4e3d94",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 264
}
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mThought: I should use a reliable search engine to get accurate information.\n",
"Action: Search\n",
"Action Input: \"Leo DiCaprio girlfriend\"\u001b[0m\n",
"\n",
"Observation:\u001b[36;1m\u001b[1;3mHe went on to date Gisele Bündchen, Bar Refaeli, Blake Lively, Toni Garrn and Nina Agdal, among others, before finally settling down with current girlfriend Camila Morrone, who is 23 years his junior.\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mI have found the answer to the question.\n",
"Final Answer: Leo DiCaprio's current girlfriend is Camila Morrone.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"output_type": "execute_result",
"data": {
"text/plain": [
"\"Leo DiCaprio's current girlfriend is Camila Morrone.\""
],
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "string"
}
},
"metadata": {},
"execution_count": 15
}
],
"source": [
"agent_executor.run(\"Search for Leo DiCaprio's girlfriend on the internet.\")"
]
},
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_executor.run(\"Search for Leo DiCaprio's girlfriend on the internet.\")"
]
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
},
"vscode": {
"interpreter": {
"hash": "18784188d7ecd866c0586ac068b02361a6896dc3a29b64f5cc957f09c590acef"
}
},
"colab": {
"provenance": []
}
},
{
"cell_type": "code",
"execution_count": null,
"id": "adefb4c2",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
},
"vscode": {
"interpreter": {
"hash": "18784188d7ecd866c0586ac068b02361a6896dc3a29b64f5cc957f09c590acef"
}
}
},
"nbformat": 4,
"nbformat_minor": 5
"nbformat": 4,
"nbformat_minor": 5
}
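A quick way to sanity-check the `CustomOutputParser` above is to run its regex against a hand-written completion. A minimal sketch, assuming the exact Thought/Action/Action Input format from the prompt template:

```python
import re

llm_output = """Thought: I should use a reliable search engine.
Action: Search
Action Input: "Leo DiCaprio girlfriend\""""

# Same pattern as CustomOutputParser.parse.
regex = r"Action\s*\d*\s*:(.*?)\nAction\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)"
match = re.search(regex, llm_output, re.DOTALL)
assert match is not None
print(match.group(1).strip())                # -> Search
print(match.group(2).strip(" ").strip('"'))  # -> Leo DiCaprio girlfriend
```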

View File

@@ -9,7 +9,7 @@
"\n",
"This notebook goes through how to create your own custom agent.\n",
"\n",
"An agent consists of three parts:\n",
"An agent consists of two parts:\n",
" \n",
" - Tools: The tools the agent has available to use.\n",
" - The agent class itself: this decides which action to take.\n",

View File

@@ -1,386 +1,383 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "4658d71a",
"metadata": {},
"source": [
"# Conversation Agent (for Chat Models)\n",
"\n",
"This notebook walks through using an agent optimized for conversation, using ChatModels. Other agents are often optimized for using tools to figure out the best response, which is not ideal in a conversational setting where you may want the agent to be able to chat with the user as well.\n",
"\n",
"This is accomplished with a specific type of agent (`chat-conversational-react-description`) which expects to be used with a memory component."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "f4f5d1a8",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"os.environ[\"LANGCHAIN_HANDLER\"] = \"langchain\""
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "f65308ab",
"metadata": {},
"outputs": [
"cells": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"WARNING:root:Failed to default session, using empty session: HTTPConnectionPool(host='localhost', port=8000): Max retries exceeded with url: /sessions (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x10a1767c0>: Failed to establish a new connection: [Errno 61] Connection refused'))\n"
]
}
],
"source": [
"from langchain.agents import Tool\n",
"from langchain.memory import ConversationBufferMemory\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.utilities import SerpAPIWrapper\n",
"from langchain.agents import initialize_agent\n",
"from langchain.agents import AgentType"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "5fb14d6d",
"metadata": {},
"outputs": [],
"source": [
"search = SerpAPIWrapper()\n",
"tools = [\n",
" Tool(\n",
" name = \"Current Search\",\n",
" func=search.run,\n",
" description=\"useful for when you need to answer questions about current events or the current state of the world. the input to this should be a single search term.\"\n",
" ),\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "dddc34c4",
"metadata": {},
"outputs": [],
"source": [
"memory = ConversationBufferMemory(memory_key=\"chat_history\", return_messages=True)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "cafe9bc1",
"metadata": {},
"outputs": [],
"source": [
"llm=ChatOpenAI(temperature=0)\n",
"agent_chain = initialize_agent(tools, llm, agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION, verbose=True, memory=memory)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "dc70b454",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"WARNING:root:Failed to persist run: HTTPConnectionPool(host='localhost', port=8000): Max retries exceeded with url: /chain-runs (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x13fab40d0>: Failed to establish a new connection: [Errno 61] Connection refused'))\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[32;1m\u001b[1;3m{\n",
" \"action\": \"Final Answer\",\n",
" \"action_input\": \"Hello Bob! How can I assist you today?\"\n",
"}\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'Hello Bob! How can I assist you today?'"
"cell_type": "markdown",
"id": "4658d71a",
"metadata": {
"id": "4658d71a"
},
"source": [
"# Conversation Agent (for Chat Models)\n",
"\n",
"This notebook walks through using an agent optimized for conversation, using ChatModels. Other agents are often optimized for using tools to figure out the best response, which is not ideal in a conversational setting where you may want the agent to be able to chat with the user as well.\n",
"\n",
"This is accomplished with a specific type of agent (`chat-conversational-react-description`) which expects to be used with a memory component."
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_chain.run(input=\"hi, i am bob\")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "3dcf7953",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"WARNING:root:Failed to persist run: HTTPConnectionPool(host='localhost', port=8000): Max retries exceeded with url: /chain-runs (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x13fab44f0>: Failed to establish a new connection: [Errno 61] Connection refused'))\n"
]
"cell_type": "code",
"source": [
"!pip install langchain\n",
"!pip install google-search-results\n",
"!pip install openai"
],
"metadata": {
"id": "efpRpEwvNXU5"
},
"id": "efpRpEwvNXU5",
"execution_count": null,
"outputs": []
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[32;1m\u001b[1;3m{\n",
" \"action\": \"Final Answer\",\n",
" \"action_input\": \"Your name is Bob.\"\n",
"}\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'Your name is Bob.'"
"cell_type": "code",
"execution_count": 2,
"id": "f65308ab",
"metadata": {
"id": "f65308ab"
},
"outputs": [],
"source": [
"from langchain.agents import Tool\n",
"from langchain.memory import ConversationBufferMemory\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.utilities import SerpAPIWrapper\n",
"from langchain.agents import initialize_agent\n",
"from langchain.agents import AgentType\n",
"from getpass import getpass"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_chain.run(input=\"what's my name?\")"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "aa05f566",
"metadata": {
"scrolled": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m{\n",
" \"action\": \"Current Search\",\n",
" \"action_input\": \"Thai food dinner recipes\"\n",
"}\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m59 easy Thai recipes for any night of the week · Marion Grasby's Thai spicy chilli and basil fried rice · Thai curry noodle soup · Marion Grasby's Thai Spicy ...\u001b[0m\n",
"Thought:"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"WARNING:root:Failed to persist run: HTTPConnectionPool(host='localhost', port=8000): Max retries exceeded with url: /chain-runs (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x13fae8be0>: Failed to establish a new connection: [Errno 61] Connection refused'))\n"
]
"cell_type": "code",
"source": [
"SERPAPI_API_KEY = getpass()"
],
"metadata": {
"id": "qMOoW5QYNlPQ"
},
"id": "qMOoW5QYNlPQ",
"execution_count": null,
"outputs": []
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[32;1m\u001b[1;3m{\n",
" \"action\": \"Final Answer\",\n",
" \"action_input\": \"Here are some Thai food dinner recipes you can make this week: Thai spicy chilli and basil fried rice, Thai curry noodle soup, and Thai Spicy ... (59 recipes in total).\"\n",
"}\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'Here are some Thai food dinner recipes you can make this week: Thai spicy chilli and basil fried rice, Thai curry noodle soup, and Thai Spicy ... (59 recipes in total).'"
"cell_type": "code",
"execution_count": 4,
"id": "5fb14d6d",
"metadata": {
"id": "5fb14d6d"
},
"outputs": [],
"source": [
"search = SerpAPIWrapper(serpapi_api_key=SERPAPI_API_KEY)\n",
"tools = [\n",
" Tool(\n",
" name = \"Current Search\",\n",
" func=search.run,\n",
" description=\"useful for when you need to answer questions about current events or the current state of the world. the input to this should be a single search term.\"\n",
" ),\n",
"]"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_chain.run(\"what are some good dinners to make this week, if i like thai food?\")"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "c5d8b7ea",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m```json\n",
"{\n",
" \"action\": \"Current Search\",\n",
" \"action_input\": \"who won the world cup in 1978\"\n",
"}\n",
"```\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mArgentina national football team\u001b[0m\n",
"Thought:"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"WARNING:root:Failed to persist run: HTTPConnectionPool(host='localhost', port=8000): Max retries exceeded with url: /chain-runs (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x13fae86d0>: Failed to establish a new connection: [Errno 61] Connection refused'))\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[32;1m\u001b[1;3m```json\n",
"{\n",
" \"action\": \"Final Answer\",\n",
" \"action_input\": \"The last letter in your name is 'b', and the winner of the 1978 World Cup was the Argentina national football team.\"\n",
"}\n",
"```\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\"The last letter in your name is 'b', and the winner of the 1978 World Cup was the Argentina national football team.\""
"cell_type": "code",
"execution_count": 5,
"id": "dddc34c4",
"metadata": {
"id": "dddc34c4"
},
"outputs": [],
"source": [
"memory = ConversationBufferMemory(memory_key=\"chat_history\", return_messages=True)"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_chain.run(input=\"tell me the last letter in my name, and also tell me who won the world cup in 1978?\")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "f608889b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m{\n",
" \"action\": \"Current Search\",\n",
" \"action_input\": \"weather in pomfret\"\n",
"}\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m10 Day Weather-Pomfret, CT ; Sun 16. 64° · 50°. 24% · NE 7 mph ; Mon 17. 58° · 45°. 70% · ESE 8 mph ; Tue 18. 57° · 37°. 8% · WSW 15 mph.\u001b[0m\n",
"Thought:"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"WARNING:root:Failed to persist run: HTTPConnectionPool(host='localhost', port=8000): Max retries exceeded with url: /chain-runs (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x13fa9d7f0>: Failed to establish a new connection: [Errno 61] Connection refused'))\n"
]
"cell_type": "code",
"source": [
"OPENAI_API_KEY = getpass()"
],
"metadata": {
"id": "pJWcpWnoN56_"
},
"id": "pJWcpWnoN56_",
"execution_count": null,
"outputs": []
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[32;1m\u001b[1;3m{\n",
" \"action\": \"Final Answer\",\n",
" \"action_input\": \"The weather in Pomfret, CT for the next 10 days is as follows: Sun 16. 64° · 50°. 24% · NE 7 mph ; Mon 17. 58° · 45°. 70% · ESE 8 mph ; Tue 18. 57° · 37°. 8% · WSW 15 mph.\"\n",
"}\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'The weather in Pomfret, CT for the next 10 days is as follows: Sun 16. 64° · 50°. 24% · NE 7 mph ; Mon 17. 58° · 45°. 70% · ESE 8 mph ; Tue 18. 57° · 37°. 8% · WSW 15 mph.'"
"cell_type": "code",
"execution_count": 7,
"id": "cafe9bc1",
"metadata": {
"id": "cafe9bc1"
},
"outputs": [],
"source": [
"llm=ChatOpenAI(openai_api_key=OPENAI_API_KEY, temperature=0)\n",
"agent_chain = initialize_agent(tools, llm, agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION, verbose=True, memory=memory)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "dc70b454",
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 192
},
"id": "dc70b454",
"outputId": "9e3d6857-72de-472f-b531-9a7b843f1621"
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m{\n",
" \"action\": \"Final Answer\",\n",
" \"action_input\": \"Hello Bob! How can I assist you today?\"\n",
"}\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"output_type": "execute_result",
"data": {
"text/plain": [
"'Hello Bob! How can I assist you today?'"
],
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "string"
}
},
"metadata": {},
"execution_count": 8
}
],
"source": [
"agent_chain.run(input=\"hi, i am bob\")"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "3dcf7953",
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 192
},
"id": "3dcf7953",
"outputId": "9afdbf2c-ceed-4835-9975-0841dd2162d6"
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m{\n",
" \"action\": \"Final Answer\",\n",
" \"action_input\": \"Your name is Bob.\"\n",
"}\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"output_type": "execute_result",
"data": {
"text/plain": [
"'Your name is Bob.'"
],
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "string"
}
},
"metadata": {},
"execution_count": 9
}
],
"source": [
"agent_chain.run(input=\"what's my name?\")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "aa05f566",
"metadata": {
"scrolled": false,
"colab": {
"base_uri": "https://localhost:8080/",
"height": 316
},
"id": "aa05f566",
"outputId": "d38fe468-6c94-450a-9f07-0044bf7beb34"
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m{\n",
" \"action\": \"Current Search\",\n",
" \"action_input\": \"Thai food dinner recipes\"\n",
"}\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m64 easy Thai recipes for any night of the week · Thai curry noodle soup · Thai yellow cauliflower, snake bean and tofu curry · Thai-spiced chicken hand pies · Thai ...\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m{\n",
" \"action\": \"Final Answer\",\n",
" \"action_input\": \"Here are some Thai food dinner recipes you can try this week: Thai curry noodle soup, Thai yellow cauliflower, snake bean and tofu curry, Thai-spiced chicken hand pies, and many more. You can find the full list of recipes at the source I found earlier.\"\n",
"}\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"output_type": "execute_result",
"data": {
"text/plain": [
"'Here are some Thai food dinner recipes you can try this week: Thai curry noodle soup, Thai yellow cauliflower, snake bean and tofu curry, Thai-spiced chicken hand pies, and many more. You can find the full list of recipes at the source I found earlier.'"
],
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "string"
}
},
"metadata": {},
"execution_count": 10
}
],
"source": [
"agent_chain.run(\"what are some good dinners to make this week, if i like thai food?\")"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "c5d8b7ea",
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 192
},
"id": "c5d8b7ea",
"outputId": "105db01e-c0f7-4b82-edd9-ea02a02fc66a"
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m{\n",
" \"action\": \"Final Answer\",\n",
" \"action_input\": \"The last letter in your name is 'b'. Argentina won the World Cup in 1978.\"\n",
"}\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"output_type": "execute_result",
"data": {
"text/plain": [
"\"The last letter in your name is 'b'. Argentina won the World Cup in 1978.\""
],
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "string"
}
},
"metadata": {},
"execution_count": 11
}
],
"source": [
"agent_chain.run(input=\"tell me the last letter in my name, and also tell me who won the world cup in 1978?\")"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "f608889b",
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 278
},
"id": "f608889b",
"outputId": "49ea0e17-d8cd-4de9-e119-e6006caea32f"
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m{\n",
" \"action\": \"Current Search\",\n",
" \"action_input\": \"weather in pomfret\"\n",
"}\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mCloudy with showers. Low around 55F. Winds S at 5 to 10 mph. Chance of rain 60%. Humidity76%.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m{\n",
" \"action\": \"Final Answer\",\n",
" \"action_input\": \"Cloudy with showers. Low around 55F. Winds S at 5 to 10 mph. Chance of rain 60%. Humidity76%.\"\n",
"}\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"output_type": "execute_result",
"data": {
"text/plain": [
"'Cloudy with showers. Low around 55F. Winds S at 5 to 10 mph. Chance of rain 60%. Humidity76%.'"
],
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "string"
}
},
"metadata": {},
"execution_count": 12
}
],
"source": [
"agent_chain.run(input=\"whats the weather like in pomfret?\")"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_chain.run(input=\"whats the weather like in pomfret?\")"
]
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
},
"colab": {
"provenance": []
}
},
{
"cell_type": "code",
"execution_count": null,
"id": "0084efd6",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
"nbformat": 4,
"nbformat_minor": 5
}
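The piece doing the work in this notebook is the memory object: it is what lets the agent answer "what's my name?". A minimal sketch of the same mechanism outside the agent loop:

```python
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

# Each turn is stored as an input/output pair...
memory.save_context(
    {"input": "hi, i am bob"},
    {"output": "Hello Bob! How can I assist you today?"},
)

# ...and replayed as messages on the next call, which is how
# chat-conversational-react-description recalls earlier turns.
print(memory.load_memory_variables({})["chat_history"])
```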

View File

@@ -42,7 +42,7 @@
"search = SerpAPIWrapper()\n",
"llm_math_chain = LLMMathChain(llm=llm, verbose=True)\n",
"db = SQLDatabase.from_uri(\"sqlite:///../../../../../notebooks/Chinook.db\")\n",
"db_chain = SQLDatabaseChain(llm=llm, database=db, verbose=True)\n",
"db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)\n",
"tools = [\n",
" Tool(\n",
" name = \"Search\",\n",

View File

@@ -44,7 +44,7 @@
"search = SerpAPIWrapper()\n",
"llm_math_chain = LLMMathChain(llm=llm1, verbose=True)\n",
"db = SQLDatabase.from_uri(\"sqlite:///../../../../../notebooks/Chinook.db\")\n",
"db_chain = SQLDatabaseChain(llm=llm1, database=db, verbose=True)\n",
"db_chain = SQLDatabaseChain.from_llm(llm1, db, verbose=True)\n",
"tools = [\n",
" Tool(\n",
" name = \"Search\",\n",

View File

@@ -1,6 +1,7 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "406483c4",
"metadata": {},
@@ -15,6 +16,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "91192118",
"metadata": {},
@@ -38,6 +40,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "0b10d200",
"metadata": {},
@@ -70,6 +73,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "ce38ae84",
"metadata": {},
@@ -114,10 +118,11 @@
"metadata": {},
"outputs": [],
"source": [
"agent = PlanAndExecute(planner=planner, executer=executor, verbose=True)"
"agent = PlanAndExecute(planner=planner, executor=executor, verbose=True)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "8be9f1bd",
"metadata": {},
@@ -194,14 +199,18 @@
"\n",
"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m28 years\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mBased on my search, Gigi Hadid's current age is 26 years old. \n",
"Thought:\u001b[32;1m\u001b[1;3mPrevious steps: steps=[(Step(value=\"Search for Leo DiCaprio's girlfriend on the internet.\"), StepResponse(response='Leo DiCaprio is currently linked to Gigi Hadid.')), (Step(value='Find her current age.'), StepResponse(response='28 years'))]\n",
"\n",
"Current objective: None\n",
"\n",
"Action:\n",
"```\n",
"{\n",
" \"action\": \"Final Answer\",\n",
" \"action_input\": \"Gigi Hadid's current age is 26 years old.\"\n",
" \"action_input\": \"Gigi Hadid's current age is 28 years.\"\n",
"}\n",
"```\n",
"\n",
"\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
@@ -209,64 +218,39 @@
"\n",
"Step: Find her current age.\n",
"\n",
"Response: Gigi Hadid's current age is 26 years old.\n",
"Response: Gigi Hadid's current age is 28 years.\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mAction:\n",
"```\n",
"{\n",
" \"action\": \"Calculator\",\n",
" \"action_input\": \"26 ** 0.43\"\n",
" \"action_input\": \"28 ** 0.43\"\n",
"}\n",
"```\n",
"\n",
"\u001b[0m\n",
"\n",
"\u001b[1m> Entering new LLMMathChain chain...\u001b[0m\n",
"26 ** 0.43\u001b[32;1m\u001b[1;3m\n",
"28 ** 0.43\u001b[32;1m\u001b[1;3m\n",
"```text\n",
"26 ** 0.43\n",
"28 ** 0.43\n",
"```\n",
"...numexpr.evaluate(\"26 ** 0.43\")...\n",
"...numexpr.evaluate(\"28 ** 0.43\")...\n",
"\u001b[0m\n",
"Answer: \u001b[33;1m\u001b[1;3m4.059182145592686\u001b[0m\n",
"Answer: \u001b[33;1m\u001b[1;3m4.1906168361987195\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
"Observation: \u001b[33;1m\u001b[1;3mAnswer: 4.059182145592686\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mThe current objective is to raise Gigi Hadid's age to the 0.43 power. \n",
"\n",
"Action:\n",
"```\n",
"{\n",
" \"action\": \"Calculator\",\n",
" \"action_input\": \"26 ** 0.43\"\n",
"}\n",
"```\n",
"\n",
"\u001b[0m\n",
"\n",
"\u001b[1m> Entering new LLMMathChain chain...\u001b[0m\n",
"26 ** 0.43\u001b[32;1m\u001b[1;3m\n",
"```text\n",
"26 ** 0.43\n",
"```\n",
"...numexpr.evaluate(\"26 ** 0.43\")...\n",
"\u001b[0m\n",
"Answer: \u001b[33;1m\u001b[1;3m4.059182145592686\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
"Observation: \u001b[33;1m\u001b[1;3mAnswer: 4.059182145592686\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mThe answer to the current objective is 4.059182145592686.\n",
"Observation: \u001b[33;1m\u001b[1;3mAnswer: 4.1906168361987195\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mThe next step is to provide the answer to the user's question.\n",
"\n",
"Action:\n",
"```\n",
"{\n",
" \"action\": \"Final Answer\",\n",
" \"action_input\": \"Gigi Hadid's age raised to the 0.43 power is approximately 4.059 years.\"\n",
" \"action_input\": \"Gigi Hadid's current age raised to the 0.43 power is approximately 4.19.\"\n",
"}\n",
"```\n",
"\n",
"\n",
"\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
@@ -274,14 +258,14 @@
"\n",
"Step: Raise her current age to the 0.43 power using a calculator or programming language.\n",
"\n",
"Response: Gigi Hadid's age raised to the 0.43 power is approximately 4.059 years.\n",
"Response: Gigi Hadid's current age raised to the 0.43 power is approximately 4.19.\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mAction:\n",
"```\n",
"{\n",
" \"action\": \"Final Answer\",\n",
" \"action_input\": \"Gigi Hadid's age raised to the 0.43 power is approximately 4.059 years.\"\n",
" \"action_input\": \"The result is approximately 4.19.\"\n",
"}\n",
"```\n",
"\u001b[0m\n",
@@ -291,14 +275,14 @@
"\n",
"Step: Output the result.\n",
"\n",
"Response: Gigi Hadid's age raised to the 0.43 power is approximately 4.059 years.\n",
"Response: The result is approximately 4.19.\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mAction:\n",
"```\n",
"{\n",
" \"action\": \"Final Answer\",\n",
" \"action_input\": \"Gigi Hadid's age raised to the 0.43 power is approximately 4.059 years.\"\n",
" \"action_input\": \"Gigi Hadid's current age raised to the 0.43 power is approximately 4.19.\"\n",
"}\n",
"```\n",
"\u001b[0m\n",
@@ -310,14 +294,14 @@
"\n",
"\n",
"\n",
"Response: Gigi Hadid's age raised to the 0.43 power is approximately 4.059 years.\n",
"Response: Gigi Hadid's current age raised to the 0.43 power is approximately 4.19.\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\"Gigi Hadid's age raised to the 0.43 power is approximately 4.059 years.\""
"\"Gigi Hadid's current age raised to the 0.43 power is approximately 4.19.\""
]
},
"execution_count": 10,

View File

@@ -0,0 +1,154 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "23234b50-e6c6-4c87-9f97-259c15f36894",
"metadata": {
"tags": []
},
"source": [
"# Only streaming final agent output"
]
},
{
"cell_type": "markdown",
"id": "29dd6333-307c-43df-b848-65001c01733b",
"metadata": {},
"source": [
"If you only want the final output of an agent to be streamed, you can use the callback ``FinalStreamingStdOutCallbackHandler``.\n",
"For this, the underlying LLM has to support streaming as well."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "e4592215-6604-47e2-89ff-5db3af6d1e40",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.agents import load_tools\n",
"from langchain.agents import initialize_agent\n",
"from langchain.agents import AgentType\n",
"from langchain.callbacks.streaming_stdout_final_only import FinalStreamingStdOutCallbackHandler\n",
"from langchain.llms import OpenAI"
]
},
{
"cell_type": "markdown",
"id": "19a813f7",
"metadata": {},
"source": [
"Let's create the underlying LLM with ``streaming = True`` and pass a new instance of ``FinalStreamingStdOutCallbackHandler``."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "7fe81ef4",
"metadata": {},
"outputs": [],
"source": [
"llm = OpenAI(streaming=True, callbacks=[FinalStreamingStdOutCallbackHandler()], temperature=0)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "ff45b85d",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" Konrad Adenauer became Chancellor of Germany in 1949, 74 years ago in 2023."
]
},
{
"data": {
"text/plain": [
"'Konrad Adenauer became Chancellor of Germany in 1949, 74 years ago in 2023.'"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tools = load_tools([\"wikipedia\", \"llm-math\"], llm=llm)\n",
"agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=False)\n",
"agent.run(\"It's 2023 now. How many years ago did Konrad Adenauer become Chancellor of Germany.\")"
]
},
{
"cell_type": "markdown",
"id": "53a743b8",
"metadata": {},
"source": [
"### Handling custom answer prefixes"
]
},
{
"cell_type": "markdown",
"id": "23602c62",
"metadata": {},
"source": [
"By default, we assume that the token sequence ``\"\\nFinal\", \" Answer\", \":\"`` indicates that the agent has reached an answers. We can, however, also pass a custom sequence to use as answer prefix."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "5662a638",
"metadata": {},
"outputs": [],
"source": [
"llm = OpenAI(\n",
" streaming=True,\n",
" callbacks=[FinalStreamingStdOutCallbackHandler(answer_prefix_tokens=[\"\\nThe\", \" answer\", \":\"])],\n",
" temperature=0\n",
")"
]
},
{
"cell_type": "markdown",
"id": "b1a96cc0",
"metadata": {},
"source": [
"Be aware you likely need to include whitespaces and new line characters in your token. "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9278b522",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
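Since matching is done on the token sequence, the default prefix can also be spelled out explicitly. A minimal sketch, assuming the default `\nFinal Answer:` format:

```python
from langchain.callbacks.streaming_stdout_final_only import FinalStreamingStdOutCallbackHandler
from langchain.llms import OpenAI

# Equivalent to the default behaviour: stream to stdout only after the
# model has emitted "\nFinal", " Answer", ":" as consecutive tokens.
handler = FinalStreamingStdOutCallbackHandler(answer_prefix_tokens=["\nFinal", " Answer", ":"])
llm = OpenAI(streaming=True, callbacks=[handler], temperature=0)
```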

View File

@@ -0,0 +1,270 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Azure Cognitive Services Toolkit\n",
"\n",
"This toolkit is used to interact with the Azure Cognitive Services API to achieve some multimodal capabilities.\n",
"\n",
"Currently There are four tools bundled in this toolkit:\n",
"- AzureCogsImageAnalysisTool: used to extract caption, objects, tags, and text from images. (Note: this tool is not available on Mac OS yet, due to the dependency on `azure-ai-vision` package, which is only supported on Windows and Linux currently.)\n",
"- AzureCogsFormRecognizerTool: used to extract text, tables, and key-value pairs from documents.\n",
"- AzureCogsSpeech2TextTool: used to transcribe speech to text.\n",
"- AzureCogsText2SpeechTool: used to synthesize text to speech."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"First, you need to set up an Azure account and create a Cognitive Services resource. You can follow the instructions [here](https://docs.microsoft.com/en-us/azure/cognitive-services/cognitive-services-apis-create-account?tabs=multiservice%2Cwindows) to create a resource. \n",
"\n",
"Then, you need to get the endpoint, key and region of your resource, and set them as environment variables. You can find them in the \"Keys and Endpoint\" page of your resource."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# !pip install --upgrade azure-ai-formrecognizer > /dev/null\n",
"# !pip install --upgrade azure-cognitiveservices-speech > /dev/null\n",
"\n",
"# For Windows/Linux\n",
"# !pip install --upgrade azure-ai-vision > /dev/null"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = \"sk-\"\n",
"os.environ[\"AZURE_COGS_KEY\"] = \"\"\n",
"os.environ[\"AZURE_COGS_ENDPOINT\"] = \"\"\n",
"os.environ[\"AZURE_COGS_REGION\"] = \"\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create the Toolkit"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents.agent_toolkits import AzureCognitiveServicesToolkit\n",
"\n",
"toolkit = AzureCognitiveServicesToolkit()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['Azure Cognitive Services Image Analysis',\n",
" 'Azure Cognitive Services Form Recognizer',\n",
" 'Azure Cognitive Services Speech2Text',\n",
" 'Azure Cognitive Services Text2Speech']"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"[tool.name for tool in toolkit.get_tools()]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Use within an Agent"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain import OpenAI\n",
"from langchain.agents import initialize_agent, AgentType"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"llm = OpenAI(temperature=0)\n",
"agent = initialize_agent(\n",
" tools=toolkit.get_tools(),\n",
" llm=llm,\n",
" agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,\n",
" verbose=True,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\n",
"Action:\n",
"```\n",
"{\n",
" \"action\": \"Azure Cognitive Services Image Analysis\",\n",
" \"action_input\": \"https://images.openai.com/blob/9ad5a2ab-041f-475f-ad6a-b51899c50182/ingredients.png\"\n",
"}\n",
"```\n",
"\n",
"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mCaption: a group of eggs and flour in bowls\n",
"Objects: Egg, Egg, Food\n",
"Tags: dairy, ingredient, indoor, thickening agent, food, mixing bowl, powder, flour, egg, bowl\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I can use the objects and tags to suggest recipes\n",
"Action:\n",
"```\n",
"{\n",
" \"action\": \"Final Answer\",\n",
" \"action_input\": \"You can make pancakes, omelettes, or quiches with these ingredients!\"\n",
"}\n",
"```\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'You can make pancakes, omelettes, or quiches with these ingredients!'"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent.run(\"What can I make with these ingredients?\"\n",
" \"https://images.openai.com/blob/9ad5a2ab-041f-475f-ad6a-b51899c50182/ingredients.png\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mAction:\n",
"```\n",
"{\n",
" \"action\": \"Azure Cognitive Services Text2Speech\",\n",
" \"action_input\": \"Why did the chicken cross the playground? To get to the other slide!\"\n",
"}\n",
"```\n",
"\n",
"\u001b[0m\n",
"Observation: \u001b[31;1m\u001b[1;3m/tmp/tmpa3uu_j6b.wav\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I have the audio file of the joke\n",
"Action:\n",
"```\n",
"{\n",
" \"action\": \"Final Answer\",\n",
" \"action_input\": \"/tmp/tmpa3uu_j6b.wav\"\n",
"}\n",
"```\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'/tmp/tmpa3uu_j6b.wav'"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"audio_file = agent.run(\"Tell me a joke and read it out for me.\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from IPython import display\n",
"\n",
"audio = display.Audio(audio_file)\n",
"display.display(audio)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
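The tools in this toolkit are ordinary LangChain tools, so they can also be invoked directly rather than through an agent. A minimal sketch, assuming the `AZURE_COGS_*` environment variables above are set:

```python
from langchain.agents.agent_toolkits import AzureCognitiveServicesToolkit

toolkit = AzureCognitiveServicesToolkit()

# Pick the text-to-speech tool by the name shown in the listing above.
tts = next(
    tool for tool in toolkit.get_tools()
    if tool.name == "Azure Cognitive Services Text2Speech"
)
wav_path = tts.run("Why did the chicken cross the playground? To get to the other slide!")
print(wav_path)  # path to a temporary .wav file
```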

View File

@@ -1,10 +1,7 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "0e499e90-7a6d-4fab-8aab-31a4df417601",
"metadata": {},
"source": [
"# PowerBI Dataset Agent\n",
"\n",
@@ -17,46 +14,41 @@
"- You can also supply a username to impersonate for use with datasets that have RLS enabled. \n",
"- The toolkit uses a LLM to create the query from the question, the agent uses the LLM for the overall execution.\n",
"- Testing was done mostly with a `text-davinci-003` model, codex models did not seem to perform ver well."
]
],
"metadata": {},
"attachments": {}
},
{
"cell_type": "markdown",
"id": "ec927ac6-9b2a-4e8a-9a6e-3e429191875c",
"metadata": {
"tags": []
},
"source": [
"## Initialization"
]
],
"metadata": {
"tags": []
}
},
{
"cell_type": "code",
"execution_count": null,
"id": "53422913-967b-4f2a-8022-00269c1be1b1",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.agents.agent_toolkits import create_pbi_agent\n",
"from langchain.agents.agent_toolkits import PowerBIToolkit\n",
"from langchain.utilities.powerbi import PowerBIDataset\n",
"from langchain.llms.openai import AzureOpenAI\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.agents import AgentExecutor\n",
"from azure.identity import DefaultAzureCredential"
]
],
"outputs": [],
"metadata": {
"tags": []
}
},
{
"cell_type": "code",
"execution_count": null,
"id": "090f3699-79c6-4ce1-ab96-a94f0121fd64",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"fast_llm = AzureOpenAI(temperature=0.5, max_tokens=1000, deployment_name=\"gpt-35-turbo\", verbose=True)\n",
"smart_llm = AzureOpenAI(temperature=0, max_tokens=100, deployment_name=\"gpt-4\", verbose=True)\n",
"fast_llm = ChatOpenAI(temperature=0.5, max_tokens=1000, model_name=\"gpt-3.5-turbo\", verbose=True)\n",
"smart_llm = ChatOpenAI(temperature=0, max_tokens=100, model_name=\"gpt-4\", verbose=True)\n",
"\n",
"toolkit = PowerBIToolkit(\n",
" powerbi=PowerBIDataset(dataset_id=\"<dataset_id>\", table_names=['table1', 'table2'], credential=DefaultAzureCredential()), \n",
@@ -68,97 +60,90 @@
" toolkit=toolkit,\n",
" verbose=True,\n",
")"
]
],
"outputs": [],
"metadata": {
"tags": []
}
},
{
"cell_type": "markdown",
"id": "36ae48c7-cb08-4fef-977e-c7d4b96a464b",
"metadata": {},
"source": [
"## Example: describing a table"
]
],
"metadata": {}
},
{
"cell_type": "code",
"execution_count": null,
"id": "ff70e83d-5ad0-4fc7-bb96-27d82ac166d7",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"agent_executor.run(\"Describe table1\")"
]
],
"outputs": [],
"metadata": {
"tags": []
}
},
{
"attachments": {},
"cell_type": "markdown",
"id": "9abcfe8e-1868-42a4-8345-ad2d9b44c681",
"metadata": {},
"source": [
"## Example: simple query on a table\n",
"In this example, the agent actually figures out the correct query to get a row count of the table."
]
],
"metadata": {},
"attachments": {}
},
{
"cell_type": "code",
"execution_count": null,
"id": "bea76658-a65b-47e2-b294-6d52c5556246",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"agent_executor.run(\"How many records are in table1?\")"
]
],
"outputs": [],
"metadata": {
"tags": []
}
},
{
"cell_type": "markdown",
"id": "6fbc26af-97e4-4a21-82aa-48bdc992da26",
"metadata": {},
"source": [
"## Example: running queries"
]
],
"metadata": {}
},
{
"cell_type": "code",
"execution_count": null,
"id": "17bea710-4a23-4de0-b48e-21d57be48293",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"agent_executor.run(\"How many records are there by dimension1 in table2?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "474dddda-c067-4eeb-98b1-e763ee78b18c",
],
"outputs": [],
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"agent_executor.run(\"What unique values are there for dimensions2 in table2\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "6fd950e4",
"metadata": {},
"source": [
"## Example: add your own few-shot prompts"
]
}
},
{
"cell_type": "code",
"execution_count": null,
"id": "87d677f9",
"metadata": {},
"source": [
"agent_executor.run(\"What unique values are there for dimensions2 in table2\")"
],
"outputs": [],
"metadata": {
"tags": []
}
},
{
"cell_type": "markdown",
"source": [
"## Example: add your own few-shot prompts"
],
"metadata": {},
"attachments": {}
},
{
"cell_type": "code",
"execution_count": null,
"source": [
"#fictional example\n",
"few_shots = \"\"\"\n",
@@ -182,24 +167,24 @@
" toolkit=toolkit,\n",
" verbose=True,\n",
")"
]
],
"outputs": [],
"metadata": {}
},
{
"cell_type": "code",
"execution_count": null,
"id": "33f4bb43",
"metadata": {},
"outputs": [],
"source": [
"agent_executor.run(\"What was the maximum of value in revenue in dollars in 2022?\")"
]
],
"outputs": [],
"metadata": {}
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
"name": "python3",
"display_name": "Python 3.9.16 64-bit"
},
"language_info": {
"codemirror_mode": {
@@ -211,9 +196,12 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.5"
"version": "3.9.16"
},
"interpreter": {
"hash": "397704579725e15f5c7cb49fe5f0341eb7531c82d19f2c29d197e8b64ab5776b"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
}

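The note at the top of the PowerBI notebook above mentions supplying a username to impersonate for datasets with RLS enabled, but none of the visible cells show it. A minimal sketch of how that could look, assuming PowerBIDataset accepts an impersonated_user_name argument (verify against your langchain version before relying on it):

from azure.identity import DefaultAzureCredential
from langchain.utilities.powerbi import PowerBIDataset

# impersonated_user_name is an assumed parameter name for RLS
# impersonation; the UPN below is a placeholder.
powerbi = PowerBIDataset(
    dataset_id="<dataset_id>",
    table_names=["table1", "table2"],
    credential=DefaultAzureCredential(),
    impersonated_user_name="user@example.com",
)
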
View File

@@ -1,6 +1,7 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -17,7 +18,6 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import create_spark_dataframe_agent\n",
"import os\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = \"...input your openai api key here...\""
@@ -25,9 +25,20 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"23/05/15 20:33:10 WARN Utils: Your hostname, Mikes-Mac-mini.local resolves to a loopback address: 127.0.0.1; using 192.168.68.115 instead (on interface en1)\n",
"23/05/15 20:33:10 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address\n",
"Setting default log level to \"WARN\".\n",
"To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).\n",
"23/05/15 20:33:10 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable\n"
]
},
{
"name": "stdout",
"output_type": "stream",
@@ -64,6 +75,7 @@
"source": [
"from langchain.llms import OpenAI\n",
"from pyspark.sql import SparkSession\n",
"from langchain.agents import create_spark_dataframe_agent\n",
"\n",
"spark = SparkSession.builder.getOrCreate()\n",
"csv_file_path = \"titanic.csv\"\n",
@@ -92,7 +104,7 @@
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mThought: I need to find out the size of the dataframe\n",
"\u001b[32;1m\u001b[1;3mThought: I need to find out how many rows are in the dataframe\n",
"Action: python_repl_ast\n",
"Action Input: df.count()\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m891\u001b[0m\n",
@@ -205,7 +217,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
@@ -213,6 +225,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [

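The hunks above show only fragments of the Spark DataFrame agent setup. Pieced together from the visible imports and output, the initialization presumably looks like the sketch below (titanic.csv is assumed to be a local file, as in the notebook):

from pyspark.sql import SparkSession
from langchain.llms import OpenAI
from langchain.agents import create_spark_dataframe_agent

spark = SparkSession.builder.getOrCreate()
# titanic.csv is assumed to exist locally, matching the notebook.
df = spark.read.csv("titanic.csv", header=True, inferSchema=True)
agent = create_spark_dataframe_agent(llm=OpenAI(temperature=0), df=df, verbose=True)
agent.run("how many rows are there?")
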
View File

@@ -0,0 +1,348 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Spark SQL Agent\n",
"\n",
"This notebook shows how to use agents to interact with a Spark SQL. Similar to [SQL Database Agent](https://python.langchain.com/en/latest/modules/agents/toolkits/examples/sql_database.html), it is designed to address general inquiries about Spark SQL and facilitate error recovery.\n",
"\n",
"**NOTE: Note that, as this agent is in active development, all answers might not be correct. Additionally, it is not guaranteed that the agent won't perform DML statements on your Spark cluster given certain questions. Be careful running it on sensitive data!**"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialization"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import create_spark_sql_agent\n",
"from langchain.agents.agent_toolkits import SparkSQLToolkit\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.utilities.spark_sql import SparkSQL"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Setting default log level to \"WARN\".\n",
"To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).\n",
"23/05/18 16:03:10 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"+-----------+--------+------+--------------------+------+----+-----+-----+----------------+-------+-----+--------+\n",
"|PassengerId|Survived|Pclass| Name| Sex| Age|SibSp|Parch| Ticket| Fare|Cabin|Embarked|\n",
"+-----------+--------+------+--------------------+------+----+-----+-----+----------------+-------+-----+--------+\n",
"| 1| 0| 3|Braund, Mr. Owen ...| male|22.0| 1| 0| A/5 21171| 7.25| null| S|\n",
"| 2| 1| 1|Cumings, Mrs. Joh...|female|38.0| 1| 0| PC 17599|71.2833| C85| C|\n",
"| 3| 1| 3|Heikkinen, Miss. ...|female|26.0| 0| 0|STON/O2. 3101282| 7.925| null| S|\n",
"| 4| 1| 1|Futrelle, Mrs. Ja...|female|35.0| 1| 0| 113803| 53.1| C123| S|\n",
"| 5| 0| 3|Allen, Mr. Willia...| male|35.0| 0| 0| 373450| 8.05| null| S|\n",
"| 6| 0| 3| Moran, Mr. James| male|null| 0| 0| 330877| 8.4583| null| Q|\n",
"| 7| 0| 1|McCarthy, Mr. Tim...| male|54.0| 0| 0| 17463|51.8625| E46| S|\n",
"| 8| 0| 3|Palsson, Master. ...| male| 2.0| 3| 1| 349909| 21.075| null| S|\n",
"| 9| 1| 3|Johnson, Mrs. Osc...|female|27.0| 0| 2| 347742|11.1333| null| S|\n",
"| 10| 1| 2|Nasser, Mrs. Nich...|female|14.0| 1| 0| 237736|30.0708| null| C|\n",
"| 11| 1| 3|Sandstrom, Miss. ...|female| 4.0| 1| 1| PP 9549| 16.7| G6| S|\n",
"| 12| 1| 1|Bonnell, Miss. El...|female|58.0| 0| 0| 113783| 26.55| C103| S|\n",
"| 13| 0| 3|Saundercock, Mr. ...| male|20.0| 0| 0| A/5. 2151| 8.05| null| S|\n",
"| 14| 0| 3|Andersson, Mr. An...| male|39.0| 1| 5| 347082| 31.275| null| S|\n",
"| 15| 0| 3|Vestrom, Miss. Hu...|female|14.0| 0| 0| 350406| 7.8542| null| S|\n",
"| 16| 1| 2|Hewlett, Mrs. (Ma...|female|55.0| 0| 0| 248706| 16.0| null| S|\n",
"| 17| 0| 3|Rice, Master. Eugene| male| 2.0| 4| 1| 382652| 29.125| null| Q|\n",
"| 18| 1| 2|Williams, Mr. Cha...| male|null| 0| 0| 244373| 13.0| null| S|\n",
"| 19| 0| 3|Vander Planke, Mr...|female|31.0| 1| 0| 345763| 18.0| null| S|\n",
"| 20| 1| 3|Masselmani, Mrs. ...|female|null| 0| 0| 2649| 7.225| null| C|\n",
"+-----------+--------+------+--------------------+------+----+-----+-----+----------------+-------+-----+--------+\n",
"only showing top 20 rows\n",
"\n"
]
}
],
"source": [
"from pyspark.sql import SparkSession\n",
"\n",
"spark = SparkSession.builder.getOrCreate()\n",
"schema = \"langchain_example\"\n",
"spark.sql(f\"CREATE DATABASE IF NOT EXISTS {schema}\")\n",
"spark.sql(f\"USE {schema}\")\n",
"csv_file_path = \"titanic.csv\"\n",
"table = \"titanic\"\n",
"spark.read.csv(csv_file_path, header=True, inferSchema=True).write.saveAsTable(table)\n",
"spark.table(table).show()"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"# Note, you can also connect to Spark via Spark connect. For example:\n",
"# db = SparkSQL.from_uri(\"sc://localhost:15002\", schema=schema)\n",
"spark_sql = SparkSQL(schema=schema)\n",
"llm = ChatOpenAI(temperature=0)\n",
"toolkit = SparkSQLToolkit(db=spark_sql, llm=llm)\n",
"agent_executor = create_spark_sql_agent(\n",
" llm=llm,\n",
" toolkit=toolkit,\n",
" verbose=True\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Example: describing a table"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001B[1m> Entering new AgentExecutor chain...\u001B[0m\n",
"\u001B[32;1m\u001B[1;3mAction: list_tables_sql_db\n",
"Action Input: \u001B[0m\n",
"Observation: \u001B[38;5;200m\u001B[1;3mtitanic\u001B[0m\n",
"Thought:\u001B[32;1m\u001B[1;3mI found the titanic table. Now I need to get the schema and sample rows for the titanic table.\n",
"Action: schema_sql_db\n",
"Action Input: titanic\u001B[0m\n",
"Observation: \u001B[33;1m\u001B[1;3mCREATE TABLE langchain_example.titanic (\n",
" PassengerId INT,\n",
" Survived INT,\n",
" Pclass INT,\n",
" Name STRING,\n",
" Sex STRING,\n",
" Age DOUBLE,\n",
" SibSp INT,\n",
" Parch INT,\n",
" Ticket STRING,\n",
" Fare DOUBLE,\n",
" Cabin STRING,\n",
" Embarked STRING)\n",
";\n",
"\n",
"/*\n",
"3 rows from titanic table:\n",
"PassengerId\tSurvived\tPclass\tName\tSex\tAge\tSibSp\tParch\tTicket\tFare\tCabin\tEmbarked\n",
"1\t0\t3\tBraund, Mr. Owen Harris\tmale\t22.0\t1\t0\tA/5 21171\t7.25\tNone\tS\n",
"2\t1\t1\tCumings, Mrs. John Bradley (Florence Briggs Thayer)\tfemale\t38.0\t1\t0\tPC 17599\t71.2833\tC85\tC\n",
"3\t1\t3\tHeikkinen, Miss. Laina\tfemale\t26.0\t0\t0\tSTON/O2. 3101282\t7.925\tNone\tS\n",
"*/\u001B[0m\n",
"Thought:\u001B[32;1m\u001B[1;3mI now know the schema and sample rows for the titanic table.\n",
"Final Answer: The titanic table has the following columns: PassengerId (INT), Survived (INT), Pclass (INT), Name (STRING), Sex (STRING), Age (DOUBLE), SibSp (INT), Parch (INT), Ticket (STRING), Fare (DOUBLE), Cabin (STRING), and Embarked (STRING). Here are some sample rows from the table: \n",
"\n",
"1. PassengerId: 1, Survived: 0, Pclass: 3, Name: Braund, Mr. Owen Harris, Sex: male, Age: 22.0, SibSp: 1, Parch: 0, Ticket: A/5 21171, Fare: 7.25, Cabin: None, Embarked: S\n",
"2. PassengerId: 2, Survived: 1, Pclass: 1, Name: Cumings, Mrs. John Bradley (Florence Briggs Thayer), Sex: female, Age: 38.0, SibSp: 1, Parch: 0, Ticket: PC 17599, Fare: 71.2833, Cabin: C85, Embarked: C\n",
"3. PassengerId: 3, Survived: 1, Pclass: 3, Name: Heikkinen, Miss. Laina, Sex: female, Age: 26.0, SibSp: 0, Parch: 0, Ticket: STON/O2. 3101282, Fare: 7.925, Cabin: None, Embarked: S\u001B[0m\n",
"\n",
"\u001B[1m> Finished chain.\u001B[0m\n"
]
},
{
"data": {
"text/plain": "'The titanic table has the following columns: PassengerId (INT), Survived (INT), Pclass (INT), Name (STRING), Sex (STRING), Age (DOUBLE), SibSp (INT), Parch (INT), Ticket (STRING), Fare (DOUBLE), Cabin (STRING), and Embarked (STRING). Here are some sample rows from the table: \\n\\n1. PassengerId: 1, Survived: 0, Pclass: 3, Name: Braund, Mr. Owen Harris, Sex: male, Age: 22.0, SibSp: 1, Parch: 0, Ticket: A/5 21171, Fare: 7.25, Cabin: None, Embarked: S\\n2. PassengerId: 2, Survived: 1, Pclass: 1, Name: Cumings, Mrs. John Bradley (Florence Briggs Thayer), Sex: female, Age: 38.0, SibSp: 1, Parch: 0, Ticket: PC 17599, Fare: 71.2833, Cabin: C85, Embarked: C\\n3. PassengerId: 3, Survived: 1, Pclass: 3, Name: Heikkinen, Miss. Laina, Sex: female, Age: 26.0, SibSp: 0, Parch: 0, Ticket: STON/O2. 3101282, Fare: 7.925, Cabin: None, Embarked: S'"
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_executor.run(\"Describe the titanic table\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Example: running queries"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001B[1m> Entering new AgentExecutor chain...\u001B[0m\n",
"\u001B[32;1m\u001B[1;3mAction: list_tables_sql_db\n",
"Action Input: \u001B[0m\n",
"Observation: \u001B[38;5;200m\u001B[1;3mtitanic\u001B[0m\n",
"Thought:\u001B[32;1m\u001B[1;3mI should check the schema of the titanic table to see if there is an age column.\n",
"Action: schema_sql_db\n",
"Action Input: titanic\u001B[0m\n",
"Observation: \u001B[33;1m\u001B[1;3mCREATE TABLE langchain_example.titanic (\n",
" PassengerId INT,\n",
" Survived INT,\n",
" Pclass INT,\n",
" Name STRING,\n",
" Sex STRING,\n",
" Age DOUBLE,\n",
" SibSp INT,\n",
" Parch INT,\n",
" Ticket STRING,\n",
" Fare DOUBLE,\n",
" Cabin STRING,\n",
" Embarked STRING)\n",
";\n",
"\n",
"/*\n",
"3 rows from titanic table:\n",
"PassengerId\tSurvived\tPclass\tName\tSex\tAge\tSibSp\tParch\tTicket\tFare\tCabin\tEmbarked\n",
"1\t0\t3\tBraund, Mr. Owen Harris\tmale\t22.0\t1\t0\tA/5 21171\t7.25\tNone\tS\n",
"2\t1\t1\tCumings, Mrs. John Bradley (Florence Briggs Thayer)\tfemale\t38.0\t1\t0\tPC 17599\t71.2833\tC85\tC\n",
"3\t1\t3\tHeikkinen, Miss. Laina\tfemale\t26.0\t0\t0\tSTON/O2. 3101282\t7.925\tNone\tS\n",
"*/\u001B[0m\n",
"Thought:\u001B[32;1m\u001B[1;3mThere is an Age column in the titanic table. I should write a query to calculate the average age and then find the square root of the result.\n",
"Action: query_checker_sql_db\n",
"Action Input: SELECT SQRT(AVG(Age)) as square_root_of_avg_age FROM titanic\u001B[0m\n",
"Observation: \u001B[31;1m\u001B[1;3mThe original query seems to be correct. Here it is again:\n",
"\n",
"SELECT SQRT(AVG(Age)) as square_root_of_avg_age FROM titanic\u001B[0m\n",
"Thought:\u001B[32;1m\u001B[1;3mThe query is correct, so I can execute it to find the square root of the average age.\n",
"Action: query_sql_db\n",
"Action Input: SELECT SQRT(AVG(Age)) as square_root_of_avg_age FROM titanic\u001B[0m\n",
"Observation: \u001B[36;1m\u001B[1;3m[('5.449689683556195',)]\u001B[0m\n",
"Thought:\u001B[32;1m\u001B[1;3mI now know the final answer\n",
"Final Answer: The square root of the average age is approximately 5.45.\u001B[0m\n",
"\n",
"\u001B[1m> Finished chain.\u001B[0m\n"
]
},
{
"data": {
"text/plain": "'The square root of the average age is approximately 5.45.'"
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_executor.run(\"whats the square root of the average age?\")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001B[1m> Entering new AgentExecutor chain...\u001B[0m\n",
"\u001B[32;1m\u001B[1;3mAction: list_tables_sql_db\n",
"Action Input: \u001B[0m\n",
"Observation: \u001B[38;5;200m\u001B[1;3mtitanic\u001B[0m\n",
"Thought:\u001B[32;1m\u001B[1;3mI should check the schema of the titanic table to see what columns are available.\n",
"Action: schema_sql_db\n",
"Action Input: titanic\u001B[0m\n",
"Observation: \u001B[33;1m\u001B[1;3mCREATE TABLE langchain_example.titanic (\n",
" PassengerId INT,\n",
" Survived INT,\n",
" Pclass INT,\n",
" Name STRING,\n",
" Sex STRING,\n",
" Age DOUBLE,\n",
" SibSp INT,\n",
" Parch INT,\n",
" Ticket STRING,\n",
" Fare DOUBLE,\n",
" Cabin STRING,\n",
" Embarked STRING)\n",
";\n",
"\n",
"/*\n",
"3 rows from titanic table:\n",
"PassengerId\tSurvived\tPclass\tName\tSex\tAge\tSibSp\tParch\tTicket\tFare\tCabin\tEmbarked\n",
"1\t0\t3\tBraund, Mr. Owen Harris\tmale\t22.0\t1\t0\tA/5 21171\t7.25\tNone\tS\n",
"2\t1\t1\tCumings, Mrs. John Bradley (Florence Briggs Thayer)\tfemale\t38.0\t1\t0\tPC 17599\t71.2833\tC85\tC\n",
"3\t1\t3\tHeikkinen, Miss. Laina\tfemale\t26.0\t0\t0\tSTON/O2. 3101282\t7.925\tNone\tS\n",
"*/\u001B[0m\n",
"Thought:\u001B[32;1m\u001B[1;3mI can use the titanic table to find the oldest survived passenger. I will query the Name and Age columns, filtering by Survived and ordering by Age in descending order.\n",
"Action: query_checker_sql_db\n",
"Action Input: SELECT Name, Age FROM titanic WHERE Survived = 1 ORDER BY Age DESC LIMIT 1\u001B[0m\n",
"Observation: \u001B[31;1m\u001B[1;3mSELECT Name, Age FROM titanic WHERE Survived = 1 ORDER BY Age DESC LIMIT 1\u001B[0m\n",
"Thought:\u001B[32;1m\u001B[1;3mThe query is correct. Now I will execute it to find the oldest survived passenger.\n",
"Action: query_sql_db\n",
"Action Input: SELECT Name, Age FROM titanic WHERE Survived = 1 ORDER BY Age DESC LIMIT 1\u001B[0m\n",
"Observation: \u001B[36;1m\u001B[1;3m[('Barkworth, Mr. Algernon Henry Wilson', '80.0')]\u001B[0m\n",
"Thought:\u001B[32;1m\u001B[1;3mI now know the final answer.\n",
"Final Answer: The oldest survived passenger is Barkworth, Mr. Algernon Henry Wilson, who was 80 years old.\u001B[0m\n",
"\n",
"\u001B[1m> Finished chain.\u001B[0m\n"
]
},
{
"data": {
"text/plain": "'The oldest survived passenger is Barkworth, Mr. Algernon Henry Wilson, who was 80 years old.'"
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_executor.run(\"What's the name of the oldest survived passenger?\")"
],
"metadata": {
"collapsed": false
}
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.2"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

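The commented hint inside the Spark SQL notebook above notes that the agent can also target a remote cluster over Spark Connect instead of a local session. A minimal sketch, assuming a Spark Connect server is listening on the default port 15002 (the URI and schema name are illustrative):

from langchain.agents import create_spark_sql_agent
from langchain.agents.agent_toolkits import SparkSQLToolkit
from langchain.chat_models import ChatOpenAI
from langchain.utilities.spark_sql import SparkSQL

# The sc:// URI points at a hypothetical local Spark Connect server.
spark_sql = SparkSQL.from_uri("sc://localhost:15002", schema="langchain_example")
llm = ChatOpenAI(temperature=0)
toolkit = SparkSQLToolkit(db=spark_sql, llm=llm)
agent_executor = create_spark_sql_agent(llm=llm, toolkit=toolkit, verbose=True)
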
View File

@@ -0,0 +1,149 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"# GraphQL tool\n",
"This Jupyter Notebook demonstrates how to use the BaseGraphQLTool component with an Agent.\n",
"\n",
"GraphQL is a query language for APIs and a runtime for executing those queries against your data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools.\n",
"\n",
"By including a BaseGraphQLTool in the list of tools provided to an Agent, you can grant your Agent the ability to query data from GraphQL APIs for any purposes you need.\n",
"\n",
"In this example, we'll be using the public Star Wars GraphQL API available at the following endpoint: https://swapi-graphql.netlify.app/.netlify/functions/index.\n",
"\n",
"First, you need to install httpx and gql Python packages."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "shellscript"
}
},
"outputs": [],
"source": [
"pip install httpx gql > /dev/null"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, let's create a BaseGraphQLTool instance with the specified Star Wars API endpoint and initialize an Agent with the tool."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"from langchain import OpenAI\n",
"from langchain.agents import load_tools, initialize_agent, AgentType\n",
"from langchain.utilities import GraphQLAPIWrapper\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"\n",
"tools = load_tools([\"graphql\"], graphql_endpoint=\"https://swapi-graphql.netlify.app/.netlify/functions/index\", llm=llm)\n",
"\n",
"agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, we can use the Agent to run queries against the Star Wars GraphQL API. Let's ask the Agent to list all the Star Wars films and their release dates."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to query the graphql database to get the titles of all the star wars films\n",
"Action: query_graphql\n",
"Action Input: query { allFilms { films { title } } }\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m\"{\\n \\\"allFilms\\\": {\\n \\\"films\\\": [\\n {\\n \\\"title\\\": \\\"A New Hope\\\"\\n },\\n {\\n \\\"title\\\": \\\"The Empire Strikes Back\\\"\\n },\\n {\\n \\\"title\\\": \\\"Return of the Jedi\\\"\\n },\\n {\\n \\\"title\\\": \\\"The Phantom Menace\\\"\\n },\\n {\\n \\\"title\\\": \\\"Attack of the Clones\\\"\\n },\\n {\\n \\\"title\\\": \\\"Revenge of the Sith\\\"\\n }\\n ]\\n }\\n}\"\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the titles of all the star wars films\n",
"Final Answer: The titles of all the star wars films are: A New Hope, The Empire Strikes Back, Return of the Jedi, The Phantom Menace, Attack of the Clones, and Revenge of the Sith.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'The titles of all the star wars films are: A New Hope, The Empire Strikes Back, Return of the Jedi, The Phantom Menace, Attack of the Clones, and Revenge of the Sith.'"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"graphql_fields = \"\"\"allFilms {\n",
" films {\n",
" title\n",
" director\n",
" releaseDate\n",
" speciesConnection {\n",
" species {\n",
" name\n",
" classification\n",
" homeworld {\n",
" name\n",
" }\n",
" }\n",
" }\n",
" }\n",
" }\n",
"\n",
"\"\"\"\n",
"\n",
"suffix = \"Search for the titles of all the stawars films stored in the graphql database that has this schema \"\n",
"\n",
"\n",
"agent.run(suffix + graphql_fields)"
]
}
],
"metadata": {
"interpreter": {
"hash": "f85209c3c4c190dca7367d6a1e623da50a9a4392fd53313a7cf9d4bda9c4b85b"
},
"kernelspec": {
"display_name": "Python 3.9.16 ('langchain')",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}

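When the full agent loop is unnecessary, the wrapper behind the graphql tool can be queried directly. A minimal sketch, assuming GraphQLAPIWrapper exposes a run(query) method that returns the serialized JSON response (as the tool's observation above suggests):

from langchain.utilities import GraphQLAPIWrapper

wrapper = GraphQLAPIWrapper(
    graphql_endpoint="https://swapi-graphql.netlify.app/.netlify/functions/index"
)
# run() is assumed to take a raw GraphQL query string.
result = wrapper.run("query { allFilms { films { title } } }")
print(result)
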
View File

@@ -19,7 +19,7 @@
"outputs": [],
"source": [
"# Requires transformers>=4.29.0 and huggingface_hub>=0.14.1\n",
"!pip install --uprade transformers huggingface_hub > /dev/null"
"!pip install --upgrade transformers huggingface_hub > /dev/null"
]
},
{

Some files were not shown because too many files have changed in this diff.