Compare commits


4 Commits

Author | SHA1 | Message | Date
Harrison Chase | 967e2c6d29 | Merge branch 'master' into harrison/agents-rewrite-code | 2023-12-28 13:39:59 -08:00
Harrison Chase | 7b042d008d | cr | 2023-12-28 13:13:27 -08:00
Harrison Chase | 9447af9815 | cr | 2023-12-28 12:58:02 -08:00
Harrison Chase | 22fabf4ad2 | cr | 2023-12-28 10:30:12 -08:00
3349 changed files with 130238 additions and 353628 deletions


@@ -3,4 +3,43 @@
Hi there! Thank you for even being interested in contributing to LangChain.
As an open-source project in a rapidly developing field, we are extremely open to contributions, whether they involve new features, improved infrastructure, better documentation, or bug fixes.
To learn how to contribute to LangChain, please follow the [contribution guide here](https://python.langchain.com/docs/contributing/).
To learn about how to contribute, please follow the [guides here](https://python.langchain.com/docs/contributing/)
## 🗺️ Guidelines
### 👩‍💻 Ways to contribute
There are many ways to contribute to LangChain. Here are some common ways people contribute:
- [**Documentation**](https://python.langchain.com/docs/contributing/documentation): Help improve our docs, including this one!
- [**Code**](https://python.langchain.com/docs/contributing/code): Help us write code, fix bugs, or improve our infrastructure.
- [**Integrations**](https://python.langchain.com/docs/contributing/integration): Help us integrate with your favorite vendors and tools.
### 🚩GitHub Issues
Our [issues](https://github.com/langchain-ai/langchain/issues) page is kept up to date with bugs, improvements, and feature requests.
There is a taxonomy of labels to help with sorting and discovery of issues of interest. Please use these to help organize issues.
If you start working on an issue, please assign it to yourself.
If you are adding an issue, please try to keep it focused on a single, modular bug/improvement/feature.
If two issues are related, or blocking, please link them rather than combining them.
We will try to keep these issues as up-to-date as possible, though
with the rapid rate of development in this field some may get out of date.
If you notice this happening, please let us know.
### 🙋Getting Help
Our goal is to have the simplest developer setup possible. Should you experience any difficulty getting set up, please
contact a maintainer! Not only do we want to help get you unblocked, but we also want to make sure that the process is
smooth for future contributors.
In a similar vein, we do enforce certain linting, formatting, and documentation standards in the codebase.
If you are finding these difficult (or even just annoying) to work with, feel free to contact a maintainer for help -
we do not want these to get in the way of getting good code into the codebase.
### Contributor Documentation
To learn about how to contribute, please follow the [guides here](https://python.langchain.com/docs/contributing/)


@@ -1,122 +0,0 @@
labels: [Question]
body:
- type: markdown
attributes:
value: |
Thanks for your interest in LangChain 🦜️🔗!
Please follow these instructions, fill every question, and do every step. 🙏
We're asking for this because answering questions and solving problems in GitHub takes a lot of time --
this is time that we cannot spend on adding new features, fixing bugs, writing documentation or reviewing pull requests.
By asking questions in a structured way (following this template), it will be much easier for us to help you.
There's a high chance that by following this process, you'll find the solution on your own, eliminating the need to submit a question and wait for an answer. 😎
As there are many questions submitted every day, we will **DISCARD** and close the incomplete ones.
That will allow us (and others) to focus on helping people like you that follow the whole process. 🤓
Relevant links to check before opening a question to see if your question has already been answered, fixed or
if there's another way to solve your problem:
[LangChain documentation with the integrated search](https://python.langchain.com/docs/get_started/introduction),
[API Reference](https://api.python.langchain.com/en/stable/),
[GitHub search](https://github.com/langchain-ai/langchain),
[LangChain Github Discussions](https://github.com/langchain-ai/langchain/discussions),
[LangChain Github Issues](https://github.com/langchain-ai/langchain/issues?q=is%3Aissue),
[LangChain ChatBot](https://chat.langchain.com/)
- type: checkboxes
id: checks
attributes:
label: Checked other resources
description: Please confirm and check all the following options.
options:
- label: I added a very descriptive title to this question.
required: true
- label: I searched the LangChain documentation with the integrated search.
required: true
- label: I used the GitHub search to find a similar question and didn't find it.
required: true
- type: checkboxes
id: help
attributes:
label: Commit to Help
description: |
After submitting this, I commit to one of:
* Read open questions until I find 2 where I can help someone and add a comment to help there.
* I already hit the "watch" button in this repository to receive notifications and I commit to help at least 2 people that ask questions in the future.
* Once my question is answered, I will mark the answer as "accepted".
options:
- label: I commit to help with one of those options 👆
required: true
- type: textarea
id: example
attributes:
label: Example Code
description: |
Please add a self-contained, [minimal, reproducible, example](https://stackoverflow.com/help/minimal-reproducible-example) with your use case.
If a maintainer can copy it, run it, and see it right away, there's a much higher chance that you'll be able to get help.
**Important!**
* Use code tags (e.g., ```python ... ```) to correctly [format your code](https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting).
* INCLUDE the language label (e.g. `python`) after the first three backticks to enable syntax highlighting. (e.g., ```python rather than ```).
* Reduce your code to the minimum required to reproduce the issue if possible. This makes it much easier for others to help you.
* Avoid screenshots when possible, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.
placeholder: |
from langchain_core.runnables import RunnableLambda
def bad_code(inputs) -> int:
raise NotImplementedError('For demo purposes')
chain = RunnableLambda(bad_code)
chain.invoke('Hello!')
render: python
validations:
required: true
- type: textarea
id: description
attributes:
label: Description
description: |
What is the problem, question, or error?
Write a short description explaining what you are doing, what you expect to happen, and what is currently happening.
placeholder: |
* I'm trying to use the `langchain` library to do X.
* I expect to see Y.
* Instead, it does Z.
validations:
required: true
- type: textarea
id: system-info
attributes:
label: System Info
description: |
Please share your system info with us.
"pip freeze | grep langchain"
platform (windows / linux / mac)
python version
OR if you're on a recent version of langchain-core you can paste the output of:
python -m langchain_core.sys_info
placeholder: |
"pip freeze | grep langchain"
platform
python version
Alternatively, if you're on a recent version of langchain-core you can paste the output of:
python -m langchain_core.sys_info
These will only surface LangChain packages; don't forget to include any other relevant
packages you're using (if you're not sure what's relevant, you can paste the entire output of `pip freeze`).
validations:
required: true


@@ -1,120 +1,106 @@
name: "\U0001F41B Bug Report"
description: Report a bug in LangChain. To report a security issue, please instead use the security option below. For questions, please use the GitHub Discussions.
description: Submit a bug report to help us improve LangChain. To report a security issue, please instead use the security option below.
labels: ["02 Bug Report"]
body:
- type: markdown
attributes:
value: >
Thank you for taking the time to file a bug report.
Use this to report bugs in LangChain.
If you're not certain that your issue is due to a bug in LangChain, please use [GitHub Discussions](https://github.com/langchain-ai/langchain/discussions)
to ask for help with your issue.
Relevant links to check before filing a bug report to see if your issue has already been reported, fixed or
if there's another way to solve your problem:
[LangChain documentation with the integrated search](https://python.langchain.com/docs/get_started/introduction),
[API Reference](https://api.python.langchain.com/en/stable/),
[GitHub search](https://github.com/langchain-ai/langchain),
[LangChain Github Discussions](https://github.com/langchain-ai/langchain/discussions),
[LangChain Github Issues](https://github.com/langchain-ai/langchain/issues?q=is%3Aissue),
[LangChain ChatBot](https://chat.langchain.com/)
- type: checkboxes
id: checks
Thank you for taking the time to file a bug report. Before creating a new
issue, please make sure to take a few moments to check the issue tracker
for existing issues about the bug.
- type: textarea
id: system-info
attributes:
label: Checked other resources
description: Please confirm and check all the following options.
label: System Info
description: Please share your system info with us.
placeholder: LangChain version, platform, python version, ...
validations:
required: true
- type: textarea
id: who-can-help
attributes:
label: Who can help?
description: |
Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way; otherwise, here is a rough guide of **who to tag**.
The core maintainers strive to read all issues, but tagging them will help them prioritize.
Please tag fewer than 3 people.
@hwchase17 - project lead
Tracing / Callbacks
- @agola11
Async
- @agola11
DataLoader Abstractions
- @eyurtsev
LLM/Chat Wrappers
- @hwchase17
- @agola11
Tools / Toolkits
- ...
placeholder: "@Username ..."
- type: checkboxes
id: information-scripts-examples
attributes:
label: Information
description: "The problem arises when using:"
options:
- label: I added a very descriptive title to this issue.
required: true
- label: I searched the LangChain documentation with the integrated search.
required: true
- label: I used the GitHub search to find a similar question and didn't find it.
required: true
- label: I am sure that this is a bug in LangChain rather than my code.
required: true
- label: The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
required: true
- label: "The official example notebooks/scripts"
- label: "My own modified scripts"
- type: checkboxes
id: related-components
attributes:
label: Related Components
description: "Select the components related to the issue (if applicable):"
options:
- label: "LLMs/Chat Models"
- label: "Embedding Models"
- label: "Prompts / Prompt Templates / Prompt Selectors"
- label: "Output Parsers"
- label: "Document Loaders"
- label: "Vector Stores / Retrievers"
- label: "Memory"
- label: "Agents / Agent Executors"
- label: "Tools / Toolkits"
- label: "Chains"
- label: "Callbacks/Tracing"
- label: "Async"
- type: textarea
id: reproduction
validations:
required: true
attributes:
label: Example Code
label: Reproduction
description: |
Please add a self-contained, [minimal, reproducible, example](https://stackoverflow.com/help/minimal-reproducible-example) with your use case.
If a maintainer can copy it, run it, and see it right away, there's a much higher chance that you'll be able to get help.
**Important!**
* Use code tags (e.g., ```python ... ```) to correctly [format your code](https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting).
* INCLUDE the language label (e.g. `python`) after the first three backticks to enable syntax highlighting. (e.g., ```python rather than ```).
* Reduce your code to the minimum required to reproduce the issue if possible. This makes it much easier for others to help you.
* Avoid screenshots when possible, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.
Please provide a [code sample](https://stackoverflow.com/help/minimal-reproducible-example) that reproduces the problem you ran into. It can be a Colab link or just a code snippet.
If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Avoid screenshots when possible, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.
placeholder: |
The following code:
```python
from langchain_core.runnables import RunnableLambda
Steps to reproduce the behavior:
1.
2.
3.
def bad_code(inputs) -> int:
raise NotImplementedError('For demo purposes')
chain = RunnableLambda(bad_code)
chain.invoke('Hello!')
```
- type: textarea
id: error
validations:
required: false
attributes:
label: Error Message and Stack Trace (if applicable)
description: |
If you are reporting an error, please include the full error message and stack trace.
placeholder: |
Exception + full stack trace
- type: textarea
id: description
attributes:
label: Description
description: |
What is the problem, question, or error?
Write a short description telling what you are doing, what you expect to happen, and what is currently happening.
placeholder: |
* I'm trying to use the `langchain` library to do X.
* I expect to see Y.
* Instead, it does Z.
id: expected-behavior
validations:
required: true
- type: textarea
id: system-info
attributes:
label: System Info
description: |
Please share your system info with us.
"pip freeze | grep langchain"
platform (windows / linux / mac)
python version
OR if you're on a recent version of langchain-core you can paste the output of:
python -m langchain_core.sys_info
placeholder: |
"pip freeze | grep langchain"
platform
python version
Alternatively, if you're on a recent version of langchain-core you can paste the output of:
python -m langchain_core.sys_info
These will only surface LangChain packages; don't forget to include any other relevant
packages you're using (if you're not sure what's relevant, you can paste the entire output of `pip freeze`).
validations:
required: true
label: Expected behavior
description: "A clear and concise description of what you would expect to happen."


@@ -1,15 +1,6 @@
blank_issues_enabled: false
blank_issues_enabled: true
version: 2.1
contact_links:
- name: 🤔 Question or Problem
about: Ask a question or ask about a problem in GitHub Discussions.
url: https://www.github.com/langchain-ai/langchain/discussions/categories/q-a
- name: Discord
url: https://discord.gg/6adMQxSpJS
about: General community discussions
- name: Feature Request
url: https://www.github.com/langchain-ai/langchain/discussions/categories/ideas
about: Suggest a feature or an idea
- name: Show and tell
about: Show what you built with LangChain
url: https://www.github.com/langchain-ai/langchain/discussions/categories/show-and-tell


@@ -4,45 +4,13 @@ title: "DOC: <Please write a comprehensive title after the 'DOC: ' prefix>"
labels: [03 - Documentation]
body:
- type: markdown
attributes:
value: >
Thank you for taking the time to report an issue in the documentation.
Only report issues with the documentation here; explain if there are
any missing topics or if you found a mistake in the documentation.
Do **NOT** use this to ask usage questions or to report issues with your code.
If you have usage questions or need help solving some problem,
please use [GitHub Discussions](https://github.com/langchain-ai/langchain/discussions).
If you're in the wrong place, here are some helpful links to find a better
place to ask your question:
[LangChain documentation with the integrated search](https://python.langchain.com/docs/get_started/introduction),
[API Reference](https://api.python.langchain.com/en/stable/),
[GitHub search](https://github.com/langchain-ai/langchain),
[LangChain Github Discussions](https://github.com/langchain-ai/langchain/discussions),
[LangChain Github Issues](https://github.com/langchain-ai/langchain/issues?q=is%3Aissue),
[LangChain ChatBot](https://chat.langchain.com/)
- type: checkboxes
id: checks
attributes:
label: Checklist
description: Please confirm and check all the following options.
options:
- label: I added a very descriptive title to this issue.
required: true
- label: I included a link to the documentation page I am referring to (if applicable).
required: true
- type: textarea
attributes:
label: "Issue with current documentation:"
description: >
Please make sure to leave a reference to the document/code you're
referring to. Feel free to include names of classes, functions, methods
or concepts you'd like to see documented more.
referring to.
- type: textarea
attributes:
label: "Idea or request for content:"


@@ -1,17 +1,7 @@
labels: [idea]
name: "\U0001F680 Feature request"
description: Submit a proposal/request for a new LangChain feature
labels: ["02 Feature Request"]
body:
- type: checkboxes
id: checks
attributes:
label: Checked
description: Please confirm and check all the following options.
options:
- label: I searched existing ideas and did not find a similar one
required: true
- label: I added a very descriptive title
required: true
- label: I've clearly described the feature request and motivation for it
required: true
- type: textarea
id: feature-request
validations:
@@ -20,6 +10,7 @@ body:
label: Feature request
description: |
A clear and concise description of the feature proposal. Please provide links to any relevant GitHub repos, papers, or other resources if relevant.
- type: textarea
id: motivation
validations:
@@ -28,11 +19,12 @@ body:
label: Motivation
description: |
Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too.
- type: textarea
id: proposal
id: contribution
validations:
required: false
required: true
attributes:
label: Proposal (If applicable)
label: Your contribution
description: |
If you would like to propose a solution, please describe it here.
Is there any way that you could help, e.g. by submitting a PR? Make sure to read the [Contributing Guide](https://python.langchain.com/docs/contributing/)

.github/ISSUE_TEMPLATE/other.yml

@@ -0,0 +1,18 @@
name: Other Issue
description: Raise an issue that wouldn't be covered by the other templates.
title: "Issue: <Please write a comprehensive title after the 'Issue: ' prefix>"
labels: [04 - Other]
body:
- type: textarea
attributes:
label: "Issue you'd like to raise."
description: >
Please describe the issue you'd like to raise as clearly as possible.
Make sure to include any relevant links or references.
- type: textarea
attributes:
label: "Suggestion:"
description: >
Please outline a suggestion to improve the issue here.


@@ -1,25 +0,0 @@
name: 🔒 Privileged
description: You are a LangChain maintainer, or were asked directly by a maintainer to create an issue here. If not, check the other options.
body:
- type: markdown
attributes:
value: |
Thanks for your interest in LangChain! 🚀
If you are not a LangChain maintainer or were not asked directly by a maintainer to create an issue, then please start the conversation in a [Question in GitHub Discussions](https://github.com/langchain-ai/langchain/discussions/categories/q-a) instead.
You are a LangChain maintainer if you maintain any of the packages inside of the LangChain repository
or are a regular contributor to LangChain with previous merged pull requests.
- type: checkboxes
id: privileged
attributes:
label: Privileged issue
description: Confirm that you are allowed to create an issue here.
options:
- label: I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
required: true
- type: textarea
id: content
attributes:
label: Issue Content
description: Add the content of the issue here.


@@ -1,29 +1,20 @@
Thank you for contributing to LangChain!
<!-- Thank you for contributing to LangChain!
- [ ] **PR title**: "package: description"
- Where "package" is whichever of langchain, community, core, experimental, etc. is being modified. Use "docs: ..." for purely docs changes, "templates: ..." for template changes, "infra: ..." for CI changes.
- Example: "community: add foobar LLM"
Please title your PR "<package>: <description>", where <package> is whichever of langchain, community, core, experimental, etc. is being modified.
Replace this entire comment with:
- **Description:** a description of the change,
- **Issue:** the issue # it fixes if applicable,
- **Dependencies:** any dependencies required for this change,
- **Twitter handle:** we announce bigger features on Twitter. If your PR gets announced, and you'd like a mention, we'll gladly shout you out!
- [ ] **PR message**: ***Delete this entire checklist*** and replace with
- **Description:** a description of the change
- **Issue:** the issue # it fixes, if applicable
- **Dependencies:** any dependencies required for this change
- **Twitter handle:** if your PR gets announced, and you'd like a mention, we'll gladly shout you out!
Please make sure your PR is passing linting and testing before submitting. Run `make format`, `make lint` and `make test` from the root of the package you've modified to check this locally.
See contribution guidelines for more information on how to write/run tests, lint, etc: https://python.langchain.com/docs/contributing/
- [ ] **Add tests and docs**: If you're adding a new integration, please include
If you're adding a new integration, please include:
1. a test for the integration, preferably unit tests that do not rely on network access,
2. an example notebook showing its use. It lives in `docs/docs/integrations` directory.
- [ ] **Lint and test**: Run `make format`, `make lint` and `make test` from the root of the package(s) you've modified. See contribution guidelines for more: https://python.langchain.com/docs/contributing/
Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in langchain.
If no one reviews your PR within a few days, please @-mention one of baskaryan, efriis, eyurtsev, hwchase17.
If no one reviews your PR within a few days, please @-mention one of @baskaryan, @eyurtsev, @hwchase17.
-->


@@ -1,7 +0,0 @@
FROM python:3.9
RUN pip install httpx PyGithub "pydantic==2.0.2" pydantic-settings "pyyaml>=5.3.1,<6.0.0"
COPY ./app /app
CMD ["python", "/app/main.py"]


@@ -1,11 +0,0 @@
# Adapted from https://github.com/tiangolo/fastapi/blob/master/.github/actions/people/action.yml
name: "Generate LangChain People"
description: "Generate the data for the LangChain People page"
author: "Jacob Lee <jacob@langchain.dev>"
inputs:
token:
description: 'User token, to read the GitHub API. Can be passed in using {{ secrets.LANGCHAIN_PEOPLE_GITHUB_TOKEN }}'
required: true
runs:
using: 'docker'
image: 'Dockerfile'


@@ -1,641 +0,0 @@
# Adapted from https://github.com/tiangolo/fastapi/blob/master/.github/actions/people/app/main.py
import logging
import subprocess
import sys
from collections import Counter
from datetime import datetime, timedelta, timezone
from pathlib import Path
from typing import Any, Container, Dict, List, Set, Union
import httpx
import yaml
from github import Github
from pydantic import BaseModel, SecretStr
from pydantic_settings import BaseSettings
github_graphql_url = "https://api.github.com/graphql"
questions_category_id = "DIC_kwDOIPDwls4CS6Ve"
# discussions_query = """
# query Q($after: String, $category_id: ID) {
# repository(name: "langchain", owner: "langchain-ai") {
# discussions(first: 100, after: $after, categoryId: $category_id) {
# edges {
# cursor
# node {
# number
# author {
# login
# avatarUrl
# url
# }
# title
# createdAt
# comments(first: 100) {
# nodes {
# createdAt
# author {
# login
# avatarUrl
# url
# }
# isAnswer
# replies(first: 10) {
# nodes {
# createdAt
# author {
# login
# avatarUrl
# url
# }
# }
# }
# }
# }
# }
# }
# }
# }
# }
# """
# issues_query = """
# query Q($after: String) {
# repository(name: "langchain", owner: "langchain-ai") {
# issues(first: 100, after: $after) {
# edges {
# cursor
# node {
# number
# author {
# login
# avatarUrl
# url
# }
# title
# createdAt
# state
# comments(first: 100) {
# nodes {
# createdAt
# author {
# login
# avatarUrl
# url
# }
# }
# }
# }
# }
# }
# }
# }
# """
prs_query = """
query Q($after: String) {
repository(name: "langchain", owner: "langchain-ai") {
pullRequests(first: 100, after: $after, states: MERGED) {
edges {
cursor
node {
changedFiles
additions
deletions
number
labels(first: 100) {
nodes {
name
}
}
author {
login
avatarUrl
url
... on User {
twitterUsername
}
}
title
createdAt
state
reviews(first:100) {
nodes {
author {
login
avatarUrl
url
... on User {
twitterUsername
}
}
state
}
}
}
}
}
}
}
"""
class Author(BaseModel):
login: str
avatarUrl: str
url: str
twitterUsername: Union[str, None] = None
# Issues and Discussions
class CommentsNode(BaseModel):
createdAt: datetime
author: Union[Author, None] = None
class Replies(BaseModel):
nodes: List[CommentsNode]
class DiscussionsCommentsNode(CommentsNode):
replies: Replies
class Comments(BaseModel):
nodes: List[CommentsNode]
class DiscussionsComments(BaseModel):
nodes: List[DiscussionsCommentsNode]
class IssuesNode(BaseModel):
number: int
author: Union[Author, None] = None
title: str
createdAt: datetime
state: str
comments: Comments
class DiscussionsNode(BaseModel):
number: int
author: Union[Author, None] = None
title: str
createdAt: datetime
comments: DiscussionsComments
class IssuesEdge(BaseModel):
cursor: str
node: IssuesNode
class DiscussionsEdge(BaseModel):
cursor: str
node: DiscussionsNode
class Issues(BaseModel):
edges: List[IssuesEdge]
class Discussions(BaseModel):
edges: List[DiscussionsEdge]
class IssuesRepository(BaseModel):
issues: Issues
class DiscussionsRepository(BaseModel):
discussions: Discussions
class IssuesResponseData(BaseModel):
repository: IssuesRepository
class DiscussionsResponseData(BaseModel):
repository: DiscussionsRepository
class IssuesResponse(BaseModel):
data: IssuesResponseData
class DiscussionsResponse(BaseModel):
data: DiscussionsResponseData
# PRs
class LabelNode(BaseModel):
name: str
class Labels(BaseModel):
nodes: List[LabelNode]
class ReviewNode(BaseModel):
author: Union[Author, None] = None
state: str
class Reviews(BaseModel):
nodes: List[ReviewNode]
class PullRequestNode(BaseModel):
number: int
labels: Labels
author: Union[Author, None] = None
changedFiles: int
additions: int
deletions: int
title: str
createdAt: datetime
state: str
reviews: Reviews
# comments: Comments
class PullRequestEdge(BaseModel):
cursor: str
node: PullRequestNode
class PullRequests(BaseModel):
edges: List[PullRequestEdge]
class PRsRepository(BaseModel):
pullRequests: PullRequests
class PRsResponseData(BaseModel):
repository: PRsRepository
class PRsResponse(BaseModel):
data: PRsResponseData
class Settings(BaseSettings):
input_token: SecretStr
github_repository: str
httpx_timeout: int = 30
def get_graphql_response(
*,
settings: Settings,
query: str,
after: Union[str, None] = None,
category_id: Union[str, None] = None,
) -> Dict[str, Any]:
headers = {"Authorization": f"token {settings.input_token.get_secret_value()}"}
# category_id is only used by one query, but GraphQL allows unused variables, so
# keep it here for simplicity
variables = {"after": after, "category_id": category_id}
response = httpx.post(
github_graphql_url,
headers=headers,
timeout=settings.httpx_timeout,
json={"query": query, "variables": variables, "operationName": "Q"},
)
if response.status_code != 200:
logging.error(
f"Response was not 200, after: {after}, category_id: {category_id}"
)
logging.error(response.text)
raise RuntimeError(response.text)
data = response.json()
if "errors" in data:
logging.error(f"Errors in response, after: {after}, category_id: {category_id}")
logging.error(data["errors"])
logging.error(response.text)
raise RuntimeError(response.text)
return data
# def get_graphql_issue_edges(*, settings: Settings, after: Union[str, None] = None):
# data = get_graphql_response(settings=settings, query=issues_query, after=after)
# graphql_response = IssuesResponse.model_validate(data)
# return graphql_response.data.repository.issues.edges
# def get_graphql_question_discussion_edges(
# *,
# settings: Settings,
# after: Union[str, None] = None,
# ):
# data = get_graphql_response(
# settings=settings,
# query=discussions_query,
# after=after,
# category_id=questions_category_id,
# )
# graphql_response = DiscussionsResponse.model_validate(data)
# return graphql_response.data.repository.discussions.edges
def get_graphql_pr_edges(*, settings: Settings, after: Union[str, None] = None):
if after is None:
print("Querying PRs...")
else:
print(f"Querying PRs with cursor {after}...")
data = get_graphql_response(
settings=settings,
query=prs_query,
after=after
)
graphql_response = PRsResponse.model_validate(data)
return graphql_response.data.repository.pullRequests.edges
# def get_issues_experts(settings: Settings):
# issue_nodes: List[IssuesNode] = []
# issue_edges = get_graphql_issue_edges(settings=settings)
# while issue_edges:
# for edge in issue_edges:
# issue_nodes.append(edge.node)
# last_edge = issue_edges[-1]
# issue_edges = get_graphql_issue_edges(settings=settings, after=last_edge.cursor)
# commentors = Counter()
# last_month_commentors = Counter()
# authors: Dict[str, Author] = {}
# now = datetime.now(tz=timezone.utc)
# one_month_ago = now - timedelta(days=30)
# for issue in issue_nodes:
# issue_author_name = None
# if issue.author:
# authors[issue.author.login] = issue.author
# issue_author_name = issue.author.login
# issue_commentors = set()
# for comment in issue.comments.nodes:
# if comment.author:
# authors[comment.author.login] = comment.author
# if comment.author.login != issue_author_name:
# issue_commentors.add(comment.author.login)
# for author_name in issue_commentors:
# commentors[author_name] += 1
# if issue.createdAt > one_month_ago:
# last_month_commentors[author_name] += 1
# return commentors, last_month_commentors, authors
# def get_discussions_experts(settings: Settings):
# discussion_nodes: List[DiscussionsNode] = []
# discussion_edges = get_graphql_question_discussion_edges(settings=settings)
# while discussion_edges:
# for discussion_edge in discussion_edges:
# discussion_nodes.append(discussion_edge.node)
# last_edge = discussion_edges[-1]
# discussion_edges = get_graphql_question_discussion_edges(
# settings=settings, after=last_edge.cursor
# )
# commentors = Counter()
# last_month_commentors = Counter()
# authors: Dict[str, Author] = {}
# now = datetime.now(tz=timezone.utc)
# one_month_ago = now - timedelta(days=30)
# for discussion in discussion_nodes:
# discussion_author_name = None
# if discussion.author:
# authors[discussion.author.login] = discussion.author
# discussion_author_name = discussion.author.login
# discussion_commentors = set()
# for comment in discussion.comments.nodes:
# if comment.author:
# authors[comment.author.login] = comment.author
# if comment.author.login != discussion_author_name:
# discussion_commentors.add(comment.author.login)
# for reply in comment.replies.nodes:
# if reply.author:
# authors[reply.author.login] = reply.author
# if reply.author.login != discussion_author_name:
# discussion_commentors.add(reply.author.login)
# for author_name in discussion_commentors:
# commentors[author_name] += 1
# if discussion.createdAt > one_month_ago:
# last_month_commentors[author_name] += 1
# return commentors, last_month_commentors, authors
# def get_experts(settings: Settings):
# (
# discussions_commentors,
# discussions_last_month_commentors,
# discussions_authors,
# ) = get_discussions_experts(settings=settings)
# commentors = discussions_commentors
# last_month_commentors = discussions_last_month_commentors
# authors = {**discussions_authors}
# return commentors, last_month_commentors, authors
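# Saturating score: x / (x + k) equals 0.5 when x == k and approaches 1 as x grows, so no single metric can dominate.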
def _logistic(x, k):
return x / (x + k)
def get_contributors(settings: Settings):
pr_nodes: List[PullRequestNode] = []
pr_edges = get_graphql_pr_edges(settings=settings)
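# Cursor-based pagination: fetch merged PRs 100 at a time until an empty page is returned.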
while pr_edges:
for edge in pr_edges:
pr_nodes.append(edge.node)
last_edge = pr_edges[-1]
pr_edges = get_graphql_pr_edges(settings=settings, after=last_edge.cursor)
contributors = Counter()
contributor_scores = Counter()
recent_contributor_scores = Counter()
reviewers = Counter()
authors: Dict[str, Author] = {}
for pr in pr_nodes:
pr_reviewers: Set[str] = set()
for review in pr.reviews.nodes:
if review.author:
authors[review.author.login] = review.author
pr_reviewers.add(review.author.login)
for reviewer in pr_reviewers:
reviewers[reviewer] += 1
if pr.author:
authors[pr.author.login] = pr.author
contributors[pr.author.login] += 1
files_changed = pr.changedFiles
lines_changed = pr.additions + pr.deletions
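# A PR's score saturates per dimension: 20 changed files or 100 changed lines each contribute 0.5.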
score = _logistic(files_changed, 20) + _logistic(lines_changed, 100)
contributor_scores[pr.author.login] += score
three_months_ago = (datetime.now(timezone.utc) - timedelta(days=3*30))
if pr.createdAt > three_months_ago:
recent_contributor_scores[pr.author.login] += score
return contributors, contributor_scores, recent_contributor_scores, reviewers, authors
def get_top_users(
*,
counter: Counter,
min_count: int,
authors: Dict[str, Author],
skip_users: Container[str],
):
users = []
for commentor, count in counter.most_common():
if commentor in skip_users:
continue
if count >= min_count:
author = authors[commentor]
users.append(
{
"login": commentor,
"count": count,
"avatarUrl": author.avatarUrl,
"twitterUsername": author.twitterUsername,
"url": author.url,
}
)
return users
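# Entry point: rebuild docs/data/people.yml from the collected stats and open a PR if the data changed.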
if __name__ == "__main__":
logging.basicConfig(level=logging.INFO)
settings = Settings()
logging.info(f"Using config: {settings.model_dump_json()}")
g = Github(settings.input_token.get_secret_value())
repo = g.get_repo(settings.github_repository)
# question_commentors, question_last_month_commentors, question_authors = get_experts(
# settings=settings
# )
contributors, contributor_scores, recent_contributor_scores, reviewers, pr_authors = get_contributors(
settings=settings
)
# authors = {**question_authors, **pr_authors}
authors = {**pr_authors}
maintainers_logins = {
"hwchase17",
"agola11",
"baskaryan",
"hinthornw",
"nfcampos",
"efriis",
"eyurtsev",
"rlancemartin"
}
hidden_logins = {
"dev2049",
"vowelparrot",
"obi1kenobi",
"langchain-infra",
"jacoblee93",
"dqbd",
"bracesproul",
"akira",
}
bot_names = {"dosubot", "github-actions", "CodiumAI-Agent"}
maintainers = []
for login in maintainers_logins:
user = authors[login]
maintainers.append(
{
"login": login,
"count": contributors[login], #+ question_commentors[login],
"avatarUrl": user.avatarUrl,
"twitterUsername": user.twitterUsername,
"url": user.url,
}
)
# min_count_expert = 10
# min_count_last_month = 3
min_score_contributor = 1
min_count_reviewer = 5
skip_users = maintainers_logins | bot_names | hidden_logins
# experts = get_top_users(
# counter=question_commentors,
# min_count=min_count_expert,
# authors=authors,
# skip_users=skip_users,
# )
# last_month_active = get_top_users(
# counter=question_last_month_commentors,
# min_count=min_count_last_month,
# authors=authors,
# skip_users=skip_users,
# )
top_recent_contributors = get_top_users(
counter=recent_contributor_scores,
min_count=min_score_contributor,
authors=authors,
skip_users=skip_users,
)
top_contributors = get_top_users(
counter=contributor_scores,
min_count=min_score_contributor,
authors=authors,
skip_users=skip_users,
)
top_reviewers = get_top_users(
counter=reviewers,
min_count=min_count_reviewer,
authors=authors,
skip_users=skip_users,
)
people = {
"maintainers": maintainers,
# "experts": experts,
# "last_month_active": last_month_active,
"top_recent_contributors": top_recent_contributors,
"top_contributors": top_contributors,
"top_reviewers": top_reviewers,
}
people_path = Path("./docs/data/people.yml")
people_old_content = people_path.read_text(encoding="utf-8")
new_people_content = yaml.dump(
people, sort_keys=False, width=200, allow_unicode=True
)
if (
people_old_content == new_people_content
):
logging.info("The LangChain People data hasn't changed, finishing.")
sys.exit(0)
people_path.write_text(new_people_content, encoding="utf-8")
logging.info("Setting up GitHub Actions git user")
subprocess.run(["git", "config", "user.name", "github-actions"], check=True)
subprocess.run(
["git", "config", "user.email", "github-actions@github.com"], check=True
)
branch_name = "langchain/langchain-people"
logging.info(f"Creating a new branch {branch_name}")
subprocess.run(["git", "checkout", "-B", branch_name], check=True)
logging.info("Adding updated file")
subprocess.run(
["git", "add", str(people_path)], check=True
)
logging.info("Committing updated file")
message = "👥 Update LangChain people data"
result = subprocess.run(["git", "commit", "-m", message], check=True)
logging.info("Pushing branch")
subprocess.run(["git", "push", "origin", branch_name, "-f"], check=True)
logging.info("Creating PR")
pr = repo.create_pull(title=message, body=message, base="master", head=branch_name)
logging.info(f"Created PR: {pr.number}")
logging.info("Finished")


@@ -26,13 +26,12 @@ inputs:
runs:
using: composite
steps:
- uses: actions/setup-python@v5
- uses: actions/setup-python@v4
name: Setup python ${{ inputs.python-version }}
id: setup-python
with:
python-version: ${{ inputs.python-version }}
- uses: actions/cache@v4
- uses: actions/cache@v3
id: cache-bin-poetry
name: Cache Poetry binary - Python ${{ inputs.python-version }}
env:
@@ -75,11 +74,10 @@ runs:
env:
POETRY_VERSION: ${{ inputs.poetry-version }}
PYTHON_VERSION: ${{ inputs.python-version }}
# Install poetry using the python version installed by setup-python step.
run: pipx install "poetry==$POETRY_VERSION" --python '${{ steps.setup-python.outputs.python-path }}' --verbose
run: pipx install "poetry==$POETRY_VERSION" --python "python$PYTHON_VERSION" --verbose
- name: Restore pip and poetry cached dependencies
uses: actions/cache@v4
uses: actions/cache@v3
env:
SEGMENT_DOWNLOAD_TIMEOUT_MIN: "4"
WORKDIR: ${{ inputs.working-directory == '' && '.' || inputs.working-directory }}


@@ -1,27 +1,17 @@
import json
import sys
import os
from typing import Dict
LANGCHAIN_DIRS = [
LANGCHAIN_DIRS = {
"libs/core",
"libs/langchain",
"libs/experimental",
"libs/community",
]
}
if __name__ == "__main__":
files = sys.argv[1:]
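# The CI workflow passes the list of changed file paths as command-line arguments.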
dirs_to_run: Dict[str, set] = {
"lint": set(),
"test": set(),
"extended-test": set(),
}
if len(files) == 300:
# max diff length is 300 files - there are likely files missing
raise ValueError("Max diff reached. Please manually run CI on changed libs.")
dirs_to_run = set()
for file in files:
if any(
@@ -30,42 +20,32 @@ if __name__ == "__main__":
".github/workflows",
".github/tools",
".github/actions",
"libs/core",
".github/scripts/check_diff.py",
)
):
# add all LANGCHAIN_DIRS for infra changes
dirs_to_run["extended-test"].update(LANGCHAIN_DIRS)
dirs_to_run["lint"].add(".")
if any(file.startswith(dir_) for dir_ in LANGCHAIN_DIRS):
# add that dir and all dirs after in LANGCHAIN_DIRS
# for extended testing
found = False
for dir_ in LANGCHAIN_DIRS:
if file.startswith(dir_):
found = True
if found:
dirs_to_run["extended-test"].add(dir_)
elif file.startswith("libs/partners"):
dirs_to_run.update(LANGCHAIN_DIRS)
elif "libs/community" in file:
dirs_to_run.update(
("libs/community", "libs/langchain", "libs/experimental")
)
elif "libs/partners" in file:
partner_dir = file.split("/")[2]
if os.path.isdir(f"libs/partners/{partner_dir}"):
dirs_to_run["test"].add(f"libs/partners/{partner_dir}")
dirs_to_run.update(
(
f"libs/partners/{partner_dir}",
"libs/langchain",
"libs/experimental",
)
)
# Skip if the directory was deleted
elif "libs/langchain" in file:
dirs_to_run.update(("libs/langchain", "libs/experimental"))
elif "libs/experimental" in file:
dirs_to_run.add("libs/experimental")
elif file.startswith("libs/"):
raise ValueError(
f"Unknown lib: {file}. check_diff.py likely needs "
"an update for this new library!"
)
elif any(file.startswith(p) for p in ["docs/", "templates/", "cookbook/"]):
dirs_to_run["lint"].add(".")
outputs = {
"dirs-to-lint": list(
dirs_to_run["lint"] | dirs_to_run["test"] | dirs_to_run["extended-test"]
),
"dirs-to-test": list(dirs_to_run["test"] | dirs_to_run["extended-test"]),
"dirs-to-extended-test": list(dirs_to_run["extended-test"]),
}
for key, value in outputs.items():
json_output = json.dumps(value)
print(f"{key}={json_output}") # noqa: T201
dirs_to_run.update(LANGCHAIN_DIRS)
else:
pass
print(json.dumps(list(dirs_to_run)))


@@ -1,67 +0,0 @@
import sys
import tomllib
from packaging.version import parse as parse_version
import re
MIN_VERSION_LIBS = ["langchain-core", "langchain-community", "langchain"]
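# Extract the minimum supported version from a Poetry constraint: caret (^x.y.z), range (>=a,<b), or exact pin.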
def get_min_version(version: str) -> str:
# case ^x.x.x
_match = re.match(r"^\^(\d+(?:\.\d+){0,2})$", version)
if _match:
return _match.group(1)
# case >=x.x.x,<y.y.y
_match = re.match(r"^>=(\d+(?:\.\d+){0,2}),<(\d+(?:\.\d+){0,2})$", version)
if _match:
_min = _match.group(1)
_max = _match.group(2)
assert parse_version(_min) < parse_version(_max)
return _min
# case x.x.x
_match = re.match(r"^(\d+(?:\.\d+){0,2})$", version)
if _match:
return _match.group(1)
raise ValueError(f"Unrecognized version format: {version}")
def get_min_version_from_toml(toml_path: str):
# Parse the TOML file
with open(toml_path, "rb") as file:
toml_data = tomllib.load(file)
# Get the dependencies from tool.poetry.dependencies
dependencies = toml_data["tool"]["poetry"]["dependencies"]
# Initialize a dictionary to store the minimum versions
min_versions = {}
# Iterate over the libs in MIN_VERSION_LIBS
for lib in MIN_VERSION_LIBS:
# Check if the lib is present in the dependencies
if lib in dependencies:
# Get the version string
version_string = dependencies[lib]
# Use parse_version to get the minimum supported version from version_string
min_version = get_min_version(version_string)
# Store the minimum version in the min_versions dictionary
min_versions[lib] = min_version
return min_versions
# Get the TOML file path from the command line argument
toml_file = sys.argv[1]
# Call the function to get the minimum versions
min_versions = get_min_version_from_toml(toml_file)
print(
" ".join([f"{lib}=={version}" for lib, version in min_versions.items()])
) # noqa: T201

.github/workflows/_all_ci.yml

@@ -0,0 +1,106 @@
---
name: langchain CI
on:
workflow_call:
inputs:
working-directory:
required: true
type: string
description: "From which folder this pipeline executes"
workflow_dispatch:
inputs:
working-directory:
required: true
type: choice
default: 'libs/langchain'
options:
- libs/langchain
- libs/core
- libs/experimental
- libs/community
# If another push to the same PR or branch happens while this workflow is still running,
# cancel the earlier run in favor of the next run.
#
# There's no point in testing an outdated version of the code. GitHub only allows
# a limited number of job runners to be active at the same time, so it's better to cancel
# pointless jobs early so that more useful jobs can run sooner.
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}-${{ inputs.working-directory }}
cancel-in-progress: true
env:
POETRY_VERSION: "1.6.1"
jobs:
lint:
uses: ./.github/workflows/_lint.yml
with:
working-directory: ${{ inputs.working-directory }}
secrets: inherit
test:
uses: ./.github/workflows/_test.yml
with:
working-directory: ${{ inputs.working-directory }}
secrets: inherit
compile-integration-tests:
uses: ./.github/workflows/_compile_integration_test.yml
with:
working-directory: ${{ inputs.working-directory }}
secrets: inherit
dependencies:
uses: ./.github/workflows/_dependencies.yml
with:
working-directory: ${{ inputs.working-directory }}
secrets: inherit
extended-tests:
runs-on: ubuntu-latest
strategy:
matrix:
python-version:
- "3.8"
- "3.9"
- "3.10"
- "3.11"
name: Python ${{ matrix.python-version }} extended tests
defaults:
run:
working-directory: ${{ inputs.working-directory }}
if: ${{ ! startsWith(inputs.working-directory, 'libs/partners/') }}
steps:
- uses: actions/checkout@v4
- name: Set up Python ${{ matrix.python-version }} + Poetry ${{ env.POETRY_VERSION }}
uses: "./.github/actions/poetry_setup"
with:
python-version: ${{ matrix.python-version }}
poetry-version: ${{ env.POETRY_VERSION }}
working-directory: ${{ inputs.working-directory }}
cache-key: extended
- name: Install dependencies
shell: bash
run: |
echo "Running extended tests, installing dependencies with poetry..."
poetry install -E extended_testing --with test
- name: Run extended tests
run: make extended_tests
- name: Ensure the tests did not create any additional files
shell: bash
run: |
set -eu
STATUS="$(git status)"
echo "$STATUS"
# grep will exit non-zero if the target message isn't found,
# and `set -e` above will cause the step to fail.
echo "$STATUS" | grep 'nothing to commit, working tree clean'


@@ -9,7 +9,7 @@ on:
description: "From which folder this pipeline executes"
env:
POETRY_VERSION: "1.7.1"
POETRY_VERSION: "1.6.1"
jobs:
build:
@@ -24,7 +24,7 @@ jobs:
- "3.9"
- "3.10"
- "3.11"
name: "poetry run pytest -m compile tests/integration_tests #${{ matrix.python-version }}"
name: Python ${{ matrix.python-version }}
steps:
- uses: actions/checkout@v4


@@ -13,7 +13,7 @@ on:
description: "Relative path to the langchain library folder"
env:
POETRY_VERSION: "1.7.1"
POETRY_VERSION: "1.6.1"
jobs:
build:
@@ -28,7 +28,7 @@ jobs:
- "3.9"
- "3.10"
- "3.11"
name: dependency checks ${{ matrix.python-version }}
name: dependencies - Python ${{ matrix.python-version }}
steps:
- uses: actions/checkout@v4
@@ -63,8 +63,6 @@ jobs:
- name: Install the opposite major version of pydantic
# If normal tests use pydantic v1, here we'll use v2, and vice versa.
shell: bash
# airbyte currently doesn't support pydantic v2
if: ${{ !startsWith(inputs.working-directory, 'libs/partners/airbyte') }}
run: |
# Determine the major part of pydantic version
REGULAR_VERSION=$(poetry run python -c "import pydantic; print(pydantic.__version__)" | cut -d. -f1)
@@ -99,8 +97,6 @@ jobs:
fi
echo "Found pydantic version ${CURRENT_VERSION}, as expected"
- name: Run pydantic compatibility tests
# airbyte currently doesn't support pydantic v2
if: ${{ !startsWith(inputs.working-directory, 'libs/partners/airbyte') }}
shell: bash
run: make test


@@ -8,11 +8,10 @@ on:
type: string
env:
POETRY_VERSION: "1.7.1"
POETRY_VERSION: "1.6.1"
jobs:
build:
environment: Scheduled testing
defaults:
run:
working-directory: ${{ inputs.working-directory }}
@@ -38,42 +37,13 @@ jobs:
shell: bash
run: poetry install --with test,test_integration
- name: Install deps outside pyproject
if: ${{ startsWith(inputs.working-directory, 'libs/community/') }}
shell: bash
run: poetry run pip install "boto3<2" "google-cloud-aiplatform<2"
- name: 'Authenticate to Google Cloud'
id: 'auth'
uses: google-github-actions/auth@v2
with:
credentials_json: '${{ secrets.GOOGLE_CREDENTIALS }}'
- name: Run integration tests
shell: bash
env:
AI21_API_KEY: ${{ secrets.AI21_API_KEY }}
GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
TOGETHER_API_KEY: ${{ secrets.TOGETHER_API_KEY }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
NVIDIA_API_KEY: ${{ secrets.NVIDIA_API_KEY }}
GOOGLE_SEARCH_API_KEY: ${{ secrets.GOOGLE_SEARCH_API_KEY }}
GOOGLE_CSE_ID: ${{ secrets.GOOGLE_CSE_ID }}
EXA_API_KEY: ${{ secrets.EXA_API_KEY }}
NOMIC_API_KEY: ${{ secrets.NOMIC_API_KEY }}
WATSONX_APIKEY: ${{ secrets.WATSONX_APIKEY }}
WATSONX_PROJECT_ID: ${{ secrets.WATSONX_PROJECT_ID }}
PINECONE_API_KEY: ${{ secrets.PINECONE_API_KEY }}
PINECONE_ENVIRONMENT: ${{ secrets.PINECONE_ENVIRONMENT }}
ASTRA_DB_API_ENDPOINT: ${{ secrets.ASTRA_DB_API_ENDPOINT }}
ASTRA_DB_APPLICATION_TOKEN: ${{ secrets.ASTRA_DB_APPLICATION_TOKEN }}
ASTRA_DB_KEYSPACE: ${{ secrets.ASTRA_DB_KEYSPACE }}
ES_URL: ${{ secrets.ES_URL }}
ES_CLOUD_ID: ${{ secrets.ES_CLOUD_ID }}
ES_API_KEY: ${{ secrets.ES_API_KEY }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # for airbyte
run: |
make integration_tests


@@ -13,7 +13,7 @@ on:
description: "Relative path to the langchain library folder"
env:
POETRY_VERSION: "1.7.1"
POETRY_VERSION: "1.6.1"
WORKDIR: ${{ inputs.working-directory == '' && '.' || inputs.working-directory }}
# This env var allows us to get inline annotations when ruff has complaints.
@@ -21,7 +21,6 @@ env:
jobs:
build:
name: "make lint #${{ matrix.python-version }}"
runs-on: ubuntu-latest
strategy:
matrix:
@@ -80,13 +79,13 @@ jobs:
poetry run pip install -e "$LANGCHAIN_LOCATION"
- name: Get .mypy_cache to speed up mypy
uses: actions/cache@v4
uses: actions/cache@v3
env:
SEGMENT_DOWNLOAD_TIMEOUT_MIN: "2"
with:
path: |
${{ env.WORKDIR }}/.mypy_cache
key: mypy-lint-${{ runner.os }}-${{ runner.arch }}-py${{ matrix.python-version }}-${{ inputs.working-directory }}-${{ hashFiles(format('{0}/poetry.lock', inputs.working-directory)) }}
key: mypy-lint-${{ runner.os }}-${{ runner.arch }}-py${{ matrix.python-version }}-${{ inputs.working-directory }}-${{ hashFiles(format('{0}/poetry.lock', env.WORKDIR)) }}
- name: Analysing the code with our lint
@@ -94,7 +93,7 @@ jobs:
run: |
make lint_package
- name: Install unit test dependencies
- name: Install test dependencies
# Also installs dev/lint/test/typing dependencies, to ensure we have
# type hints for as many of our libraries as possible.
# This helps catch errors that require dependencies to be spotted, for example:
@@ -103,24 +102,18 @@ jobs:
# If you change this configuration, make sure to change the `cache-key`
# in the `poetry_setup` action above to stop using the old cache.
# It doesn't matter how you change it, any change will cause a cache-bust.
if: ${{ ! startsWith(inputs.working-directory, 'libs/partners/') }}
working-directory: ${{ inputs.working-directory }}
run: |
poetry install --with test
- name: Install unit+integration test dependencies
if: ${{ startsWith(inputs.working-directory, 'libs/partners/') }}
working-directory: ${{ inputs.working-directory }}
run: |
poetry install --with test,test_integration
- name: Get .mypy_cache_test to speed up mypy
uses: actions/cache@v4
uses: actions/cache@v3
env:
SEGMENT_DOWNLOAD_TIMEOUT_MIN: "2"
with:
path: |
${{ env.WORKDIR }}/.mypy_cache_test
key: mypy-test-${{ runner.os }}-${{ runner.arch }}-py${{ matrix.python-version }}-${{ inputs.working-directory }}-${{ hashFiles(format('{0}/poetry.lock', inputs.working-directory)) }}
key: mypy-test-${{ runner.os }}-${{ runner.arch }}-py${{ matrix.python-version }}-${{ inputs.working-directory }}-${{ hashFiles(format('{0}/poetry.lock', env.WORKDIR)) }}
- name: Analysing the code with our lint
working-directory: ${{ inputs.working-directory }}


@@ -1,5 +1,5 @@
name: release
run-name: Release ${{ inputs.working-directory }} by @${{ github.actor }}
on:
workflow_call:
inputs:
@@ -15,13 +15,12 @@ on:
default: 'libs/langchain'
env:
PYTHON_VERSION: "3.11"
POETRY_VERSION: "1.7.1"
PYTHON_VERSION: "3.10"
POETRY_VERSION: "1.6.1"
jobs:
build:
if: github.ref == 'refs/heads/master'
environment: Scheduled testing
runs-on: ubuntu-latest
outputs:
@@ -118,18 +117,11 @@ jobs:
# are not found on test PyPI can be resolved and installed anyway.
# (https://test.pypi.org/simple). This will include the PKG_NAME==VERSION
# package because VERSION will not have been uploaded to regular PyPI yet.
# - attempt install again after 5 seconds if it fails because there is
# sometimes a delay in availability on test pypi
#
run: |
poetry run pip install \
--extra-index-url https://test.pypi.org/simple/ \
"$PKG_NAME==$VERSION" || \
( \
sleep 5 && \
poetry run pip install \
--extra-index-url https://test.pypi.org/simple/ \
"$PKG_NAME==$VERSION" \
)
"$PKG_NAME==$VERSION"
# Replace all dashes in the package name with underscores,
# since that's how Python imports packages with dashes in the name.
@@ -157,64 +149,16 @@ jobs:
run: make tests
working-directory: ${{ inputs.working-directory }}
- name: 'Authenticate to Google Cloud'
id: 'auth'
uses: google-github-actions/auth@v2
with:
credentials_json: '${{ secrets.GOOGLE_CREDENTIALS }}'
- name: Run integration tests
if: ${{ startsWith(inputs.working-directory, 'libs/partners/') }}
env:
AI21_API_KEY: ${{ secrets.AI21_API_KEY }}
GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
TOGETHER_API_KEY: ${{ secrets.TOGETHER_API_KEY }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
AZURE_OPENAI_API_VERSION: ${{ secrets.AZURE_OPENAI_API_VERSION }}
AZURE_OPENAI_API_BASE: ${{ secrets.AZURE_OPENAI_API_BASE }}
AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_CHAT_DEPLOYMENT_NAME }}
AZURE_OPENAI_LLM_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LLM_DEPLOYMENT_NAME }}
AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME }}
NVIDIA_API_KEY: ${{ secrets.NVIDIA_API_KEY }}
GOOGLE_SEARCH_API_KEY: ${{ secrets.GOOGLE_SEARCH_API_KEY }}
GOOGLE_CSE_ID: ${{ secrets.GOOGLE_CSE_ID }}
GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
EXA_API_KEY: ${{ secrets.EXA_API_KEY }}
NOMIC_API_KEY: ${{ secrets.NOMIC_API_KEY }}
WATSONX_APIKEY: ${{ secrets.WATSONX_APIKEY }}
WATSONX_PROJECT_ID: ${{ secrets.WATSONX_PROJECT_ID }}
PINECONE_API_KEY: ${{ secrets.PINECONE_API_KEY }}
PINECONE_ENVIRONMENT: ${{ secrets.PINECONE_ENVIRONMENT }}
ASTRA_DB_API_ENDPOINT: ${{ secrets.ASTRA_DB_API_ENDPOINT }}
ASTRA_DB_APPLICATION_TOKEN: ${{ secrets.ASTRA_DB_APPLICATION_TOKEN }}
ASTRA_DB_KEYSPACE: ${{ secrets.ASTRA_DB_KEYSPACE }}
ES_URL: ${{ secrets.ES_URL }}
ES_CLOUD_ID: ${{ secrets.ES_CLOUD_ID }}
ES_API_KEY: ${{ secrets.ES_API_KEY }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # for airbyte
run: make integration_tests
working-directory: ${{ inputs.working-directory }}
- name: Get minimum versions
working-directory: ${{ inputs.working-directory }}
id: min-version
run: |
poetry run pip install packaging
min_versions="$(poetry run python $GITHUB_WORKSPACE/.github/scripts/get_min_versions.py pyproject.toml)"
echo "min-versions=$min_versions" >> "$GITHUB_OUTPUT"
echo "min-versions=$min_versions"
- name: Run unit tests with minimum dependency versions
if: ${{ steps.min-version.outputs.min-versions != '' }}
env:
MIN_VERSIONS: ${{ steps.min-version.outputs.min-versions }}
run: |
poetry run pip install $MIN_VERSIONS
make tests
working-directory: ${{ inputs.working-directory }}
publish:
needs:


@@ -13,7 +13,7 @@ on:
description: "Relative path to the langchain library folder"
env:
POETRY_VERSION: "1.7.1"
POETRY_VERSION: "1.6.1"
jobs:
build:
@@ -28,7 +28,7 @@ jobs:
- "3.9"
- "3.10"
- "3.11"
name: "make test #${{ matrix.python-version }}"
name: Python ${{ matrix.python-version }}
steps:
- uses: actions/checkout@v4


@@ -9,7 +9,7 @@ on:
description: "From which folder this pipeline executes"
env:
POETRY_VERSION: "1.7.1"
POETRY_VERSION: "1.6.1"
PYTHON_VERSION: "3.10"
jobs:


@@ -1,69 +0,0 @@
name: API docs build
on:
workflow_dispatch:
schedule:
- cron: '0 13 * * *'
env:
POETRY_VERSION: "1.7.1"
PYTHON_VERSION: "3.10"
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
ref: bagatur/api_docs_build
path: langchain
- uses: actions/checkout@v4
with:
repository: langchain-ai/langchain-google
path: langchain-google
- name: Move google libs
run: |
rm -rf langchain/libs/partners/google-genai langchain/libs/partners/google-vertexai
mv langchain-google/libs/genai langchain/libs/partners/google-genai
mv langchain-google/libs/vertexai langchain/libs/partners/google-vertexai
- name: Set Git config
working-directory: langchain
run: |
git config --local user.email "actions@github.com"
git config --local user.name "Github Actions"
- name: Merge master
working-directory: langchain
run: |
git fetch origin master
git merge origin/master -m "Merge master" --allow-unrelated-histories -X theirs
- name: Set up Python ${{ env.PYTHON_VERSION }} + Poetry ${{ env.POETRY_VERSION }}
uses: "./langchain/.github/actions/poetry_setup"
with:
python-version: ${{ env.PYTHON_VERSION }}
poetry-version: ${{ env.POETRY_VERSION }}
cache-key: api-docs
working-directory: langchain
- name: Install dependencies
working-directory: langchain
run: |
poetry run python -m pip install --upgrade --no-cache-dir pip setuptools
poetry run python -m pip install --upgrade --no-cache-dir sphinx readthedocs-sphinx-ext
# skip airbyte and ibm due to pandas dependency issue
poetry run python -m pip install $(ls ./libs/partners | grep -vE "airbyte|ibm" | xargs -I {} echo "./libs/partners/{}")
poetry run python -m pip install --exists-action=w --no-cache-dir -r docs/api_reference/requirements.txt
- name: Build docs
working-directory: langchain
run: |
poetry run python -m pip install --upgrade --no-cache-dir pip setuptools
poetry run python docs/api_reference/create_api_rst.py
poetry run python -m sphinx -T -E -b html -d _build/doctrees -c docs/api_reference docs/api_reference api_reference_build/html -j auto
# https://github.com/marketplace/actions/add-commit
- uses: EndBug/add-and-commit@v9
with:
cwd: langchain
message: 'Update API docs build'

View File

@@ -1,10 +1,15 @@
---
name: CI
name: Check library diffs
on:
push:
branches: [master]
pull_request:
paths:
- ".github/actions/**"
- ".github/tools/**"
- ".github/workflows/**"
- "libs/**"
# If another push to the same PR or branch happens while this workflow is still running,
# cancel the earlier run in favor of the next run.
@@ -16,142 +21,27 @@ concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
env:
POETRY_VERSION: "1.7.1"
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
- uses: actions/setup-python@v4
with:
python-version: '3.10'
- id: files
uses: Ana06/get-changed-files@v2.2.0
- id: set-matrix
run: |
python .github/scripts/check_diff.py ${{ steps.files.outputs.all }} >> $GITHUB_OUTPUT
run: echo "dirs-to-run=$(python .github/scripts/check_diff.py ${{ steps.files.outputs.all }})" >> $GITHUB_OUTPUT
outputs:
dirs-to-lint: ${{ steps.set-matrix.outputs.dirs-to-lint }}
dirs-to-test: ${{ steps.set-matrix.outputs.dirs-to-test }}
dirs-to-extended-test: ${{ steps.set-matrix.outputs.dirs-to-extended-test }}
lint:
name: cd ${{ matrix.working-directory }}
dirs-to-run: ${{ steps.set-matrix.outputs.dirs-to-run }}
ci:
needs: [ build ]
if: ${{ needs.build.outputs.dirs-to-lint != '[]' }}
strategy:
matrix:
working-directory: ${{ fromJson(needs.build.outputs.dirs-to-lint) }}
uses: ./.github/workflows/_lint.yml
working-directory: ${{ fromJson(needs.build.outputs.dirs-to-run) }}
uses: ./.github/workflows/_all_ci.yml
with:
working-directory: ${{ matrix.working-directory }}
secrets: inherit
test:
name: cd ${{ matrix.working-directory }}
needs: [ build ]
if: ${{ needs.build.outputs.dirs-to-test != '[]' }}
strategy:
matrix:
working-directory: ${{ fromJson(needs.build.outputs.dirs-to-test) }}
uses: ./.github/workflows/_test.yml
with:
working-directory: ${{ matrix.working-directory }}
secrets: inherit
compile-integration-tests:
name: cd ${{ matrix.working-directory }}
needs: [ build ]
if: ${{ needs.build.outputs.dirs-to-test != '[]' }}
strategy:
matrix:
working-directory: ${{ fromJson(needs.build.outputs.dirs-to-test) }}
uses: ./.github/workflows/_compile_integration_test.yml
with:
working-directory: ${{ matrix.working-directory }}
secrets: inherit
dependencies:
name: cd ${{ matrix.working-directory }}
needs: [ build ]
if: ${{ needs.build.outputs.dirs-to-test != '[]' }}
strategy:
matrix:
working-directory: ${{ fromJson(needs.build.outputs.dirs-to-test) }}
uses: ./.github/workflows/_dependencies.yml
with:
working-directory: ${{ matrix.working-directory }}
secrets: inherit
extended-tests:
name: "cd ${{ matrix.working-directory }} / make extended_tests #${{ matrix.python-version }}"
needs: [ build ]
if: ${{ needs.build.outputs.dirs-to-extended-test != '[]' }}
strategy:
matrix:
# note different variable for extended test dirs
working-directory: ${{ fromJson(needs.build.outputs.dirs-to-extended-test) }}
python-version:
- "3.8"
- "3.9"
- "3.10"
- "3.11"
runs-on: ubuntu-latest
defaults:
run:
working-directory: ${{ matrix.working-directory }}
steps:
- uses: actions/checkout@v4
- name: Set up Python ${{ matrix.python-version }} + Poetry ${{ env.POETRY_VERSION }}
uses: "./.github/actions/poetry_setup"
with:
python-version: ${{ matrix.python-version }}
poetry-version: ${{ env.POETRY_VERSION }}
working-directory: ${{ matrix.working-directory }}
cache-key: extended
- name: Install dependencies
shell: bash
run: |
echo "Running extended tests, installing dependencies with poetry..."
poetry install -E extended_testing --with test
- name: Run extended tests
run: |
echo "sleeping 150"
sleep 150
echo "sleeping 151"
sleep 151
echo "done sleeping lets test"
make extended_tests
- name: Ensure the tests did not create any additional files
shell: bash
run: |
set -eu
STATUS="$(git status)"
echo "$STATUS"
# grep will exit non-zero if the target message isn't found,
# and `set -e` above will cause the step to fail.
echo "$STATUS" | grep 'nothing to commit, working tree clean'
ci_success:
name: "CI Success"
needs: [build, lint, test, compile-integration-tests, dependencies, extended-tests]
if: |
always()
runs-on: ubuntu-latest
env:
JOBS_JSON: ${{ toJSON(needs) }}
RESULTS_JSON: ${{ toJSON(needs.*.result) }}
EXIT_CODE: ${{!contains(needs.*.result, 'failure') && !contains(needs.*.result, 'cancelled') && '0' || '1'}}
steps:
- name: "CI Success"
run: |
echo $JOBS_JSON
echo $RESULTS_JSON
echo "Exiting with $EXIT_CODE"
exit $EXIT_CODE
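For context: `check_diff.py` is not part of this diff, but the `set-matrix` step above relies on it to translate changed file paths into the `dirs-to-run` list consumed by `fromJson(...)`. A minimal sketch of that kind of mapping, with the directory list and special cases assumed rather than taken from the real script:

```python
"""Rough sketch of the mapping logic behind .github/scripts/check_diff.py;
the real script is not shown in this diff and may differ in its special cases."""
import json
import sys

if __name__ == "__main__":
    dirs_to_run: set[str] = set()
    for changed_file in sys.argv[1:]:
        if changed_file.startswith(".github/"):
            # CI infrastructure changed: run everything (directory list assumed).
            dirs_to_run.update(["libs/core", "libs/community", "libs/langchain"])
        elif changed_file.startswith("libs/"):
            # e.g. libs/community/langchain_community/llms/foo.py -> libs/community
            dirs_to_run.add("/".join(changed_file.split("/")[:2]))
    # JSON output so the workflow can feed it to fromJson(...) as a matrix.
    print(json.dumps(sorted(dirs_to_run)))
```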

View File

@@ -1,5 +1,5 @@
---
name: CI / cd . / make spell_check
name: Codespell
on:
push:
@@ -12,7 +12,7 @@ permissions:
jobs:
codespell:
name: (Check for spelling errors)
name: Check for spelling errors
runs-on: ubuntu-latest
steps:
@@ -32,6 +32,5 @@ jobs:
- name: Codespell
uses: codespell-project/actions-codespell@v2
with:
skip: guide_imports.json,*.ambr,./cookbook/data/imdb_top_1000.csv
skip: guide_imports.json
ignore_words_list: ${{ steps.extract_ignore_words.outputs.ignore_words_list }}
exclude_file: libs/community/langchain_community/llms/yuan2.py

.github/workflows/doc_lint.yml vendored Normal file
View File

@@ -0,0 +1,35 @@
---
name: Docs, templates, cookbook lint
on:
push:
branches: [ master ]
pull_request:
paths:
- 'docs/**'
- 'templates/**'
- 'cookbook/**'
- '.github/workflows/_lint.yml'
- '.github/workflows/doc_lint.yml'
workflow_dispatch:
jobs:
check:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Run import check
run: |
# We should not encourage imports directly from main init file
# Except for hub
git grep 'from langchain import' {docs/docs,templates,cookbook} | grep -vE 'from langchain import (hub)' && exit 1 || exit 0
lint:
uses:
./.github/workflows/_lint.yml
with:
working-directory: "."
secrets: inherit

View File

@@ -7,4 +7,4 @@ ignore_words_list = (
pyproject_toml.get("tool", {}).get("codespell", {}).get("ignore-words-list")
)
print(f"::set-output name=ignore_words_list::{ignore_words_list}") # noqa: T201
print(f"::set-output name=ignore_words_list::{ignore_words_list}")

View File

@@ -0,0 +1,13 @@
---
name: libs/cli Release
on:
workflow_dispatch: # Allows triggering the workflow manually in the GitHub UI
jobs:
release:
uses:
./.github/workflows/_release.yml
with:
working-directory: libs/cli
secrets: inherit

View File

@@ -0,0 +1,13 @@
---
name: libs/community Release
on:
workflow_dispatch: # Allows triggering the workflow manually in the GitHub UI
jobs:
release:
uses:
./.github/workflows/_release.yml
with:
working-directory: libs/community
secrets: inherit

View File

@@ -0,0 +1,13 @@
---
name: libs/core Release
on:
workflow_dispatch: # Allows triggering the workflow manually in the GitHub UI
jobs:
release:
uses:
./.github/workflows/_release.yml
with:
working-directory: libs/core
secrets: inherit

View File

@@ -0,0 +1,13 @@
---
name: libs/experimental Release
on:
workflow_dispatch: # Allows triggering the workflow manually in the GitHub UI
jobs:
release:
uses:
./.github/workflows/_release.yml
with:
working-directory: libs/experimental
secrets: inherit

View File

@@ -0,0 +1,13 @@
---
name: Experimental Test Release
on:
workflow_dispatch: # Allows triggering the workflow manually in the GitHub UI
jobs:
release:
uses:
./.github/workflows/_test_release.yml
with:
working-directory: libs/experimental
secrets: inherit

View File

@@ -0,0 +1,13 @@
---
name: libs/core Release
on:
workflow_dispatch: # Allows triggering the workflow manually in the GitHub UI
jobs:
release:
uses:
./.github/workflows/_release.yml
with:
working-directory: libs/core
secrets: inherit

.github/workflows/langchain_release.yml vendored Normal file
View File

@@ -0,0 +1,27 @@
---
name: libs/langchain Release
on:
workflow_dispatch: # Allows triggering the workflow manually in the GitHub UI
jobs:
release:
uses:
./.github/workflows/_release.yml
with:
working-directory: libs/langchain
secrets: inherit
# N.B.: It's possible that PyPI doesn't make the new release visible / available
# immediately after publishing. If that happens, the docker build might not
# create a new docker image for the new release, since it won't see it.
#
# If this ends up being a problem, add a check to the end of the `_release.yml`
# workflow that prevents the workflow from finishing until the new release
# is visible and installable on PyPI.
release-docker:
needs:
- release
uses:
./.github/workflows/langchain_release_docker.yml
secrets: inherit
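If the check suggested in the comment above were ever added, one way to sketch it would be to poll PyPI's JSON API (`https://pypi.org/pypi/<package>/json`) until the new version shows up. The package name and version below are placeholders, and this is not part of the actual workflow:

```python
"""Hypothetical PyPI-visibility gate, as suggested by the comment above;
not part of the actual workflow."""
import json
import sys
import time
import urllib.error
import urllib.request


def wait_for_release(package: str, version: str, timeout: int = 600) -> None:
    url = f"https://pypi.org/pypi/{package}/json"
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with urllib.request.urlopen(url) as resp:
                data = json.load(resp)
            if version in data.get("releases", {}):
                print(f"{package}=={version} is visible on PyPI")
                return
        except urllib.error.HTTPError:
            pass  # the package page can briefly 404 right after publishing
        time.sleep(15)
    sys.exit(f"{package}=={version} never became visible on PyPI")


if __name__ == "__main__":
    wait_for_release(sys.argv[1], sys.argv[2])
```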

View File

@@ -0,0 +1,13 @@
---
name: Test Release
on:
workflow_dispatch: # Allows triggering the workflow manually in the GitHub UI
jobs:
release:
uses:
./.github/workflows/_test_release.yml
with:
working-directory: libs/langchain
secrets: inherit

View File

@@ -1,36 +0,0 @@
name: LangChain People
on:
schedule:
- cron: "0 14 1 * *"
push:
branches: [jacob/people]
workflow_dispatch:
inputs:
debug_enabled:
description: 'Run the build with tmate debugging enabled (https://github.com/marketplace/actions/debugging-with-tmate)'
required: false
default: 'false'
jobs:
langchain-people:
if: github.repository_owner == 'langchain-ai'
runs-on: ubuntu-latest
steps:
- name: Dump GitHub context
env:
GITHUB_CONTEXT: ${{ toJson(github) }}
run: echo "$GITHUB_CONTEXT"
- uses: actions/checkout@v4
# Ref: https://github.com/actions/runner/issues/2033
- name: Fix git safe.directory in container
run: mkdir -p /home/runner/work/_temp/_github_home && printf "[safe]\n\tdirectory = /github/workspace" > /home/runner/work/_temp/_github_home/.gitconfig
# Allow debugging with tmate
- name: Setup tmate session
uses: mxschmitt/action-tmate@v3
if: ${{ github.event_name == 'workflow_dispatch' && github.event.inputs.debug_enabled == 'true' }}
with:
limit-access-to-actor: true
- uses: ./.github/actions/people
with:
token: ${{ secrets.LANGCHAIN_PEOPLE_GITHUB_TOKEN }}

View File

@@ -6,7 +6,7 @@ on:
- cron: '0 13 * * *'
env:
POETRY_VERSION: "1.7.1"
POETRY_VERSION: "1.6.1"
jobs:
build:
@@ -36,7 +36,7 @@ jobs:
- name: 'Authenticate to Google Cloud'
id: 'auth'
uses: google-github-actions/auth@v2
uses: 'google-github-actions/auth@v1'
with:
credentials_json: '${{ secrets.GOOGLE_CREDENTIALS }}'
@@ -54,11 +54,6 @@ jobs:
echo "Running scheduled tests, installing dependencies with poetry..."
poetry install --with=test_integration,test
- name: Install deps outside pyproject
if: ${{ startsWith(inputs.working-directory, 'libs/community/') }}
shell: bash
run: poetry run pip install "boto3<2" "google-cloud-aiplatform<2"
- name: Run tests
shell: bash
env:

.github/workflows/templates_ci.yml vendored Normal file
View File

@@ -0,0 +1,36 @@
---
name: templates CI
on:
push:
branches: [ master ]
pull_request:
paths:
- '.github/actions/poetry_setup/action.yml'
- '.github/tools/**'
- '.github/workflows/_lint.yml'
- '.github/workflows/templates_ci.yml'
- 'templates/**'
workflow_dispatch: # Allows triggering the workflow manually in the GitHub UI
# If another push to the same PR or branch happens while this workflow is still running,
# cancel the earlier run in favor of the next run.
#
# There's no point in testing an outdated version of the code. GitHub only allows
# a limited number of job runners to be active at the same time, so it's better to cancel
# pointless jobs early so that more useful jobs can run sooner.
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
env:
POETRY_VERSION: "1.6.1"
WORKDIR: "templates"
jobs:
lint:
uses:
./.github/workflows/_lint.yml
with:
working-directory: templates
secrets: inherit

.gitignore vendored
View File

@@ -115,10 +115,13 @@ celerybeat.pid
# Environments
.env
.envrc
.venv*
.venv
.venvs
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
@@ -174,6 +177,4 @@ docs/docs/build
docs/docs/node_modules
docs/docs/yarn.lock
_dist
docs/docs/templates
prof
docs/docs/templates

View File

@@ -4,17 +4,21 @@
# Required
version: 2
formats:
- pdf
# Set the version of Python and other tools you might need
build:
os: ubuntu-22.04
tools:
python: "3.11"
commands:
- mkdir -p $READTHEDOCS_OUTPUT
- cp -r api_reference_build/* $READTHEDOCS_OUTPUT
- python -mvirtualenv $READTHEDOCS_VIRTUALENV_PATH
- python -m pip install --upgrade --no-cache-dir pip setuptools
- python -m pip install --upgrade --no-cache-dir sphinx readthedocs-sphinx-ext
- python -m pip install ./libs/partners/*
- python -m pip install --exists-action=w --no-cache-dir -r docs/api_reference/requirements.txt
- python docs/api_reference/create_api_rst.py
- cat docs/api_reference/conf.py
- python -m sphinx -T -E -b html -d _build/doctrees -c docs/api_reference docs/api_reference $READTHEDOCS_OUTPUT/html -j auto
# Build documentation in the docs/ directory with Sphinx
sphinx:
configuration: docs/api_reference/conf.py

View File

@@ -15,12 +15,7 @@ docs_build:
docs/.local_build.sh
docs_clean:
@if [ -d _dist ]; then \
rm -r _dist; \
echo "Directory _dist has been cleaned."; \
else \
echo "Nothing to clean."; \
fi
rm -r _dist
docs_linkcheck:
poetry run linkchecker _dist/docs/ --ignore-url node_modules
@@ -50,13 +45,11 @@ lint lint_package lint_tests:
poetry run ruff docs templates cookbook
poetry run ruff format docs templates cookbook --diff
poetry run ruff --select I docs templates cookbook
git grep 'from langchain import' {docs/docs,templates,cookbook} | grep -vE 'from langchain import (hub)' && exit 1 || exit 0
format format_diff:
poetry run ruff format docs templates cookbook
poetry run ruff --select I --fix docs templates cookbook
######################
# HELP
######################

View File

@@ -1,6 +1,6 @@
# 🦜️🔗 LangChain
⚡ Build context-aware reasoning applications
⚡ Building applications with LLMs through composability
[![Release Notes](https://img.shields.io/github/release/langchain-ai/langchain)](https://github.com/langchain-ai/langchain/releases)
[![CI](https://github.com/langchain-ai/langchain/actions/workflows/check_diffs.yml/badge.svg)](https://github.com/langchain-ai/langchain/actions/workflows/check_diffs.yml)
@@ -18,7 +18,7 @@ Looking for the JS/TS library? Check out [LangChain.js](https://github.com/langc
To help you ship LangChain apps to production faster, check out [LangSmith](https://smith.langchain.com).
[LangSmith](https://smith.langchain.com) is a unified developer platform for building, testing, and monitoring LLM applications.
Fill out [this form](https://www.langchain.com/contact-sales) to speak with our sales team.
Fill out [this form](https://airtable.com/appwQzlErAS2qiP0L/shrGtGaVBVAz7NcV2) to get off the waitlist or speak with our sales team.
## Quick Install
@@ -43,14 +43,13 @@ This framework consists of several parts.
- **[LangChain Templates](templates)**: A collection of easily deployable reference architectures for a wide variety of tasks.
- **[LangServe](https://github.com/langchain-ai/langserve)**: A library for deploying LangChain chains as a REST API.
- **[LangSmith](https://smith.langchain.com)**: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain.
- **[LangGraph](https://python.langchain.com/docs/langgraph)**: LangGraph is a library for building stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) LangChain. It extends the LangChain Expression Language with the ability to coordinate multiple chains (or actors) across multiple steps of computation in a cyclic manner.
The LangChain libraries themselves are made up of several different packages.
- **[`langchain-core`](libs/core)**: Base abstractions and LangChain Expression Language.
- **[`langchain-community`](libs/community)**: Third party integrations.
- **[`langchain`](libs/langchain)**: Chains, agents, and retrieval strategies that make up an application's cognitive architecture.
![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](docs/static/img/langchain_stack.png "LangChain Architecture Overview")
![LangChain Stack](docs/static/img/langchain_stack.png)
## 🧱 What can you build with LangChain?
**❓ Retrieval augmented generation**

View File

@@ -61,13 +61,13 @@
],
"source": [
"# Local\n",
"from langchain_community.chat_models import ChatOllama\n",
"from langchain.chat_models import ChatOllama\n",
"\n",
"llama2_chat = ChatOllama(model=\"llama2:13b-chat\")\n",
"llama2_code = ChatOllama(model=\"codellama:7b-instruct\")\n",
"\n",
"# API\n",
"from langchain_community.llms import Replicate\n",
"from langchain.llms import Replicate\n",
"\n",
"# REPLICATE_API_TOKEN = getpass()\n",
"# os.environ[\"REPLICATE_API_TOKEN\"] = REPLICATE_API_TOKEN\n",
@@ -107,7 +107,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.utilities import SQLDatabase\n",
"from langchain.utilities import SQLDatabase\n",
"\n",
"db = SQLDatabase.from_uri(\"sqlite:///nba_roster.db\", sample_rows_in_table_info=0)\n",
"\n",
@@ -125,7 +125,7 @@
"id": "654b3577-baa2-4e12-a393-f40e5db49ac7",
"metadata": {},
"source": [
"## Query a SQL Database \n",
"## Query a SQL DB \n",
"\n",
"Follow the runnables workflow [here](https://python.langchain.com/docs/expression_language/cookbook/sql_db)."
]
@@ -149,9 +149,8 @@
],
"source": [
"# Prompt\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain.prompts import ChatPromptTemplate\n",
"\n",
"# Update the template based on the type of SQL Database like MySQL, Microsoft SQL Server and so on\n",
"template = \"\"\"Based on the table schema below, write a SQL query that would answer the user's question:\n",
"{schema}\n",
"\n",
@@ -278,7 +277,7 @@
"source": [
"# Prompt\n",
"from langchain.memory import ConversationBufferMemory\n",
"from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
"from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
"\n",
"template = \"\"\"Given an input question, convert it to a SQL query. No pre-amble. Based on the table schema below, write a SQL query that would answer the user's question:\n",
"{schema}\n",

View File

@@ -101,7 +101,7 @@
"If you want to use the provided folder, then simply opt for a [pdf loader](https://python.langchain.com/docs/modules/data_connection/document_loaders/pdf) for the document:\n",
"\n",
"```\n",
"from langchain_community.document_loaders import PyPDFLoader\n",
"from langchain.document_loaders import PyPDFLoader\n",
"loader = PyPDFLoader(path + fname)\n",
"docs = loader.load()\n",
"tables = [] # Ignore w/ basic pdf loader\n",
@@ -198,9 +198,9 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"\n",
"# Generate summaries of text elements\n",
@@ -341,7 +341,7 @@
"Add raw docs and doc summaries to [Multi Vector Retriever](https://python.langchain.com/docs/modules/data_connection/retrievers/multi_vector#summary): \n",
"\n",
"* Store the raw texts, tables, and images in the `docstore`.\n",
"* Store the texts, table summaries, and image summaries in the `vectorstore` for efficient semantic retrieval."
"* Store the texts, table summaries, and image summaries in the `vectorstore` for semantic retrieval."
]
},
{
@@ -353,11 +353,11 @@
"source": [
"import uuid\n",
"\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.retrievers.multi_vector import MultiVectorRetriever\n",
"from langchain.storage import InMemoryStore\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain.vectorstores import Chroma\n",
"from langchain_core.documents import Document\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"\n",
"def create_multi_vector_retriever(\n",

View File

@@ -93,7 +93,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import PyPDFLoader\n",
"from langchain.document_loaders import PyPDFLoader\n",
"\n",
"loader = PyPDFLoader(\"./cj/cj.pdf\")\n",
"docs = loader.load()\n",
@@ -158,11 +158,11 @@
}
],
"source": [
"from langchain.chat_models import ChatVertexAI\n",
"from langchain.llms import VertexAI\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain_community.chat_models import ChatVertexAI\n",
"from langchain_community.llms import VertexAI\n",
"from langchain.schema.output_parser import StrOutputParser\n",
"from langchain_core.messages import AIMessage\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnableLambda\n",
"\n",
"\n",
@@ -243,7 +243,7 @@
"import base64\n",
"import os\n",
"\n",
"from langchain_core.messages import HumanMessage\n",
"from langchain.schema.messages import HumanMessage\n",
"\n",
"\n",
"def encode_image(image_path):\n",
@@ -342,11 +342,11 @@
"source": [
"import uuid\n",
"\n",
"from langchain.embeddings import VertexAIEmbeddings\n",
"from langchain.retrievers.multi_vector import MultiVectorRetriever\n",
"from langchain.schema.document import Document\n",
"from langchain.storage import InMemoryStore\n",
"from langchain_community.embeddings import VertexAIEmbeddings\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_core.documents import Document\n",
"from langchain.vectorstores import Chroma\n",
"\n",
"\n",
"def create_multi_vector_retriever(\n",
@@ -440,7 +440,7 @@
"import re\n",
"\n",
"from IPython.display import HTML, display\n",
"from langchain_core.runnables import RunnableLambda, RunnablePassthrough\n",
"from langchain.schema.runnable import RunnableLambda, RunnablePassthrough\n",
"from PIL import Image\n",
"\n",
"\n",

View File

@@ -235,9 +235,9 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_openai import ChatOpenAI"
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain_core.output_parsers import StrOutputParser"
]
},
{
@@ -318,11 +318,11 @@
"source": [
"import uuid\n",
"\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.retrievers.multi_vector import MultiVectorRetriever\n",
"from langchain.storage import InMemoryStore\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain.vectorstores import Chroma\n",
"from langchain_core.documents import Document\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"# The vectorstore to use to index the child chunks\n",
"vectorstore = Chroma(collection_name=\"summaries\", embedding_function=OpenAIEmbeddings())\n",

View File

@@ -211,9 +211,9 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_openai import ChatOpenAI"
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain_core.output_parsers import StrOutputParser"
]
},
{
@@ -373,11 +373,11 @@
"source": [
"import uuid\n",
"\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.retrievers.multi_vector import MultiVectorRetriever\n",
"from langchain.storage import InMemoryStore\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain.vectorstores import Chroma\n",
"from langchain_core.documents import Document\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"# The vectorstore to use to index the child chunks\n",
"vectorstore = Chroma(collection_name=\"summaries\", embedding_function=OpenAIEmbeddings())\n",

View File

@@ -209,9 +209,9 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_models import ChatOllama\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate"
"from langchain.chat_models import ChatOllama\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain_core.output_parsers import StrOutputParser"
]
},
{
@@ -376,10 +376,10 @@
"source": [
"import uuid\n",
"\n",
"from langchain.embeddings import GPT4AllEmbeddings\n",
"from langchain.retrievers.multi_vector import MultiVectorRetriever\n",
"from langchain.storage import InMemoryStore\n",
"from langchain_community.embeddings import GPT4AllEmbeddings\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain.vectorstores import Chroma\n",
"from langchain_core.documents import Document\n",
"\n",
"# The vectorstore to use to index the child chunks\n",

View File

@@ -62,7 +62,7 @@
"path = \"/Users/rlm/Desktop/cpi/\"\n",
"\n",
"# Load\n",
"from langchain_community.document_loaders import PyPDFLoader\n",
"from langchain.document_loaders import PyPDFLoader\n",
"\n",
"loader = PyPDFLoader(path + \"cpi.pdf\")\n",
"pdf_pages = loader.load()\n",
@@ -132,8 +132,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.vectorstores import Chroma\n",
"from langchain_openai import OpenAIEmbeddings\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.vectorstores import Chroma\n",
"\n",
"baseline = Chroma.from_texts(\n",
" texts=all_splits_pypdf_texts,\n",
@@ -160,9 +160,9 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"# Prompt\n",
"prompt_text = \"\"\"You are an assistant tasked with summarizing tables and text for retrieval. \\\n",
@@ -520,7 +520,7 @@
"source": [
"import re\n",
"\n",
"from langchain_core.documents import Document\n",
"from langchain.schema import Document\n",
"from langchain_core.runnables import RunnableLambda\n",
"\n",
"\n",

View File

@@ -1,200 +0,0 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install -qU langchain-airbyte"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"\n",
"GITHUB_TOKEN = getpass.getpass()"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
"from langchain_airbyte import AirbyteLoader\n",
"from langchain_core.prompts import PromptTemplate\n",
"\n",
"loader = AirbyteLoader(\n",
" source=\"source-github\",\n",
" stream=\"pull_requests\",\n",
" config={\n",
" \"credentials\": {\"personal_access_token\": GITHUB_TOKEN},\n",
" \"repositories\": [\"langchain-ai/langchain\"],\n",
" },\n",
" template=PromptTemplate.from_template(\n",
" \"\"\"# {title}\n",
"by {user[login]}\n",
"\n",
"{body}\"\"\"\n",
" ),\n",
" include_metadata=False,\n",
")\n",
"docs = loader.load()"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"# Updated partners/ibm README\n",
"by williamdevena\n",
"\n",
"## PR title\n",
"partners: changed the README file for the IBM Watson AI integration in the libs/partners/ibm folder.\n",
"\n",
"## PR message\n",
"Description: Changed the README file of partners/ibm following the docs on https://python.langchain.com/docs/integrations/llms/ibm_watsonx\n",
"\n",
"The README includes:\n",
"\n",
"- Brief description\n",
"- Installation\n",
"- Setting-up instructions (API key, project id, ...)\n",
"- Basic usage:\n",
" - Loading the model\n",
" - Direct inference\n",
" - Chain invoking\n",
" - Streaming the model output\n",
" \n",
"Issue: https://github.com/langchain-ai/langchain/issues/17545\n",
"\n",
"Dependencies: None\n",
"\n",
"Twitter handle: None\n"
]
}
],
"source": [
"print(docs[-2].page_content)"
]
},
{
"cell_type": "code",
"execution_count": 39,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"10283"
]
},
"execution_count": 39,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"len(docs)"
]
},
{
"cell_type": "code",
"execution_count": 29,
"metadata": {},
"outputs": [],
"source": [
"import tiktoken\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"enc = tiktoken.get_encoding(\"cl100k_base\")\n",
"\n",
"vectorstore = Chroma.from_documents(\n",
" docs,\n",
" embedding=OpenAIEmbeddings(\n",
" disallowed_special=(enc.special_tokens_set - {\"<|endofprompt|>\"})\n",
" ),\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 40,
"metadata": {},
"outputs": [],
"source": [
"retriever = vectorstore.as_retriever()"
]
},
{
"cell_type": "code",
"execution_count": 42,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='# Updated partners/ibm README\\nby williamdevena\\n\\n## PR title\\r\\npartners: changed the README file for the IBM Watson AI integration in the libs/partners/ibm folder.\\r\\n\\r\\n## PR message\\r\\nDescription: Changed the README file of partners/ibm following the docs on https://python.langchain.com/docs/integrations/llms/ibm_watsonx\\r\\n\\r\\nThe README includes:\\r\\n\\r\\n- Brief description\\r\\n- Installation\\r\\n- Setting-up instructions (API key, project id, ...)\\r\\n- Basic usage:\\r\\n - Loading the model\\r\\n - Direct inference\\r\\n - Chain invoking\\r\\n - Streaming the model output\\r\\n \\r\\nIssue: https://github.com/langchain-ai/langchain/issues/17545\\r\\n\\r\\nDependencies: None\\r\\n\\r\\nTwitter handle: None'),\n",
" Document(page_content='# Updated partners/ibm README\\nby williamdevena\\n\\n## PR title\\r\\npartners: changed the README file for the IBM Watson AI integration in the `libs/partners/ibm` folder. \\r\\n\\r\\n\\r\\n\\r\\n## PR message\\r\\n- **Description:** Changed the README file of partners/ibm following the docs on https://python.langchain.com/docs/integrations/llms/ibm_watsonx\\r\\n\\r\\n The README includes:\\r\\n - Brief description\\r\\n - Installation\\r\\n - Setting-up instructions (API key, project id, ...)\\r\\n - Basic usage:\\r\\n - Loading the model\\r\\n - Direct inference\\r\\n - Chain invoking\\r\\n - Streaming the model output\\r\\n\\r\\n\\r\\n- **Issue:** #17545\\r\\n- **Dependencies:** None\\r\\n- **Twitter handle:** None'),\n",
" Document(page_content='# IBM: added partners package `langchain_ibm`, added llm\\nby MateuszOssGit\\n\\n - **Description:** Added `langchain_ibm` as an langchain partners package of IBM [watsonx.ai](https://www.ibm.com/products/watsonx-ai) LLM provider (`WatsonxLLM`)\\r\\n - **Dependencies:** [ibm-watsonx-ai](https://pypi.org/project/ibm-watsonx-ai/),\\r\\n - **Tag maintainer:** : \\r\\n\\r\\nPlease make sure your PR is passing linting and testing before submitting. Run `make format`, `make lint` and `make test` to check this locally. ✅'),\n",
" Document(page_content='# Add WatsonX support\\nby baptistebignaud\\n\\nIt is a connector to use a LLM from WatsonX.\\r\\nIt requires python SDK \"ibm-generative-ai\"\\r\\n\\r\\n(It might not be perfect since it is my first PR on a public repository 😄)')]"
]
},
"execution_count": 42,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"retriever.invoke(\"pull requests related to IBM\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -1,284 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Amazon Personalize\n",
"\n",
"[Amazon Personalize](https://docs.aws.amazon.com/personalize/latest/dg/what-is-personalize.html) is a fully managed machine learning service that uses your data to generate item recommendations for your users. It can also generate user segments based on the users' affinity for certain items or item metadata.\n",
"\n",
"This notebook goes through how to use Amazon Personalize Chain. You need a Amazon Personalize campaign_arn or a recommender_arn before you get started with the below notebook.\n",
"\n",
"Following is a [tutorial](https://github.com/aws-samples/retail-demo-store/blob/master/workshop/1-Personalization/Lab-1-Introduction-and-data-preparation.ipynb) to setup a campaign_arn/recommender_arn on Amazon Personalize. Once the campaign_arn/recommender_arn is setup, you can use it in the langchain ecosystem. \n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 1. Install Dependencies"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"!pip install boto3"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 2. Sample Use-cases"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 2.1 [Use-case-1] Setup Amazon Personalize Client and retrieve recommendations"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_experimental.recommenders import AmazonPersonalize\n",
"\n",
"recommender_arn = \"<insert_arn>\"\n",
"\n",
"client = AmazonPersonalize(\n",
" credentials_profile_name=\"default\",\n",
" region_name=\"us-west-2\",\n",
" recommender_arn=recommender_arn,\n",
")\n",
"client.get_recommendations(user_id=\"1\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"### 2.2 [Use-case-2] Invoke Personalize Chain for summarizing results"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"from langchain.llms.bedrock import Bedrock\n",
"from langchain_experimental.recommenders import AmazonPersonalizeChain\n",
"\n",
"bedrock_llm = Bedrock(model_id=\"anthropic.claude-v2\", region_name=\"us-west-2\")\n",
"\n",
"# Create personalize chain\n",
"# Use return_direct=True if you do not want summary\n",
"chain = AmazonPersonalizeChain.from_llm(\n",
" llm=bedrock_llm, client=client, return_direct=False\n",
")\n",
"response = chain({\"user_id\": \"1\"})\n",
"print(response)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 2.3 [Use-Case-3] Invoke Amazon Personalize Chain using your own prompt"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts.prompt import PromptTemplate\n",
"\n",
"RANDOM_PROMPT_QUERY = \"\"\"\n",
"You are a skilled publicist. Write a high-converting marketing email advertising several movies available in a video-on-demand streaming platform next week, \n",
" given the movie and user information below. Your email will leverage the power of storytelling and persuasive language. \n",
" The movies to recommend and their information is contained in the <movie> tag. \n",
" All movies in the <movie> tag must be recommended. Give a summary of the movies and why the human should watch them. \n",
" Put the email between <email> tags.\n",
"\n",
" <movie>\n",
" {result} \n",
" </movie>\n",
"\n",
" Assistant:\n",
" \"\"\"\n",
"\n",
"RANDOM_PROMPT = PromptTemplate(input_variables=[\"result\"], template=RANDOM_PROMPT_QUERY)\n",
"\n",
"chain = AmazonPersonalizeChain.from_llm(\n",
" llm=bedrock_llm, client=client, return_direct=False, prompt_template=RANDOM_PROMPT\n",
")\n",
"chain.run({\"user_id\": \"1\", \"item_id\": \"234\"})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 2.4 [Use-case-4] Invoke Amazon Personalize in a Sequential Chain "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import LLMChain, SequentialChain\n",
"\n",
"RANDOM_PROMPT_QUERY_2 = \"\"\"\n",
"You are a skilled publicist. Write a high-converting marketing email advertising several movies available in a video-on-demand streaming platform next week, \n",
" given the movie and user information below. Your email will leverage the power of storytelling and persuasive language. \n",
" You want the email to impress the user, so make it appealing to them.\n",
" The movies to recommend and their information is contained in the <movie> tag. \n",
" All movies in the <movie> tag must be recommended. Give a summary of the movies and why the human should watch them. \n",
" Put the email between <email> tags.\n",
"\n",
" <movie>\n",
" {result}\n",
" </movie>\n",
"\n",
" Assistant:\n",
" \"\"\"\n",
"\n",
"RANDOM_PROMPT_2 = PromptTemplate(\n",
" input_variables=[\"result\"], template=RANDOM_PROMPT_QUERY_2\n",
")\n",
"personalize_chain_instance = AmazonPersonalizeChain.from_llm(\n",
" llm=bedrock_llm, client=client, return_direct=True\n",
")\n",
"random_chain_instance = LLMChain(llm=bedrock_llm, prompt=RANDOM_PROMPT_2)\n",
"overall_chain = SequentialChain(\n",
" chains=[personalize_chain_instance, random_chain_instance],\n",
" input_variables=[\"user_id\"],\n",
" verbose=True,\n",
")\n",
"overall_chain.run({\"user_id\": \"1\", \"item_id\": \"234\"})"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"### 2.5 [Use-case-5] Invoke Amazon Personalize and retrieve metadata "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"recommender_arn = \"<insert_arn>\"\n",
"metadata_column_names = [\n",
" \"<insert metadataColumnName-1>\",\n",
" \"<insert metadataColumnName-2>\",\n",
"]\n",
"metadataMap = {\"ITEMS\": metadata_column_names}\n",
"\n",
"client = AmazonPersonalize(\n",
" credentials_profile_name=\"default\",\n",
" region_name=\"us-west-2\",\n",
" recommender_arn=recommender_arn,\n",
")\n",
"client.get_recommendations(user_id=\"1\", metadataColumns=metadataMap)"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"### 2.6 [Use-Case 6] Invoke Personalize Chain with returned metadata for summarizing results"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"bedrock_llm = Bedrock(model_id=\"anthropic.claude-v2\", region_name=\"us-west-2\")\n",
"\n",
"# Create personalize chain\n",
"# Use return_direct=True if you do not want summary\n",
"chain = AmazonPersonalizeChain.from_llm(\n",
" llm=bedrock_llm, client=client, return_direct=False\n",
")\n",
"response = chain({\"user_id\": \"1\", \"metadata_columns\": metadataMap})\n",
"print(response)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.7"
},
"vscode": {
"interpreter": {
"hash": "15e58ce194949b77a891bd4339ce3d86a9bd138e905926019517993f97db9e6c"
}
}
},
"nbformat": 4,
"nbformat_minor": 4
}

View File

@@ -29,7 +29,7 @@
"outputs": [],
"source": [
"from langchain.chains import AnalyzeDocumentChain\n",
"from langchain_openai import ChatOpenAI\n",
"from langchain.chat_models import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo\", temperature=0)"
]

View File

@@ -1,922 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "rT1cmV4qCa2X"
},
"source": [
"# Using Apache Kafka to route messages\n",
"\n",
"---\n",
"\n",
"\n",
"\n",
"This notebook shows you how to use LangChain's standard chat features while passing the chat messages back and forth via Apache Kafka.\n",
"\n",
"This goal is to simulate an architecture where the chat front end and the LLM are running as separate services that need to communicate with one another over an internal nework.\n",
"\n",
"It's an alternative to typical pattern of requesting a reponse from the model via a REST API (there's more info on why you would want to do this at the end of the notebook)."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "UPYtfAR_9YxZ"
},
"source": [
"### 1. Install the main dependencies\n",
"\n",
"Dependencies include:\n",
"\n",
"- The Quix Streams library for managing interactions with Apache Kafka (or Kafka-like tools such as Redpanda) in a \"Pandas-like\" way.\n",
"- The LangChain library for managing interactions with Llama-2 and storing conversation state."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ZX5tfKiy9cN-"
},
"outputs": [],
"source": [
"!pip install quixstreams==2.1.2a langchain==0.0.340 huggingface_hub==0.19.4 langchain-experimental==0.0.42 python-dotenv"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "losTSdTB9d9O"
},
"source": [
"### 2. Build and install the llama-cpp-python library (with CUDA enabled so that we can advantage of Google Colab GPU\n",
"\n",
"The `llama-cpp-python` library is a Python wrapper around the `llama-cpp` library which enables you to efficiently leverage just a CPU to run quantized LLMs.\n",
"\n",
"When you use the standard `pip install llama-cpp-python` command, you do not get GPU support by default. Generation can be very slow if you rely on just the CPU in Google Colab, so the following command adds an extra option to build and install\n",
"`llama-cpp-python` with GPU support (make sure you have a GPU-enabled runtime selected in Google Colab)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "-JCQdl1G9tbl"
},
"outputs": [],
"source": [
"!CMAKE_ARGS=\"-DLLAMA_CUBLAS=on\" FORCE_CMAKE=1 pip install llama-cpp-python"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "5_vjVIAh9rLl"
},
"source": [
"### 3. Download and setup Kafka and Zookeeper instances\n",
"\n",
"Download the Kafka binaries from the Apache website and start the servers as daemons. We'll use the default configurations (provided by Apache Kafka) for spinning up the instances."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"id": "zFz7czGRW5Wr"
},
"outputs": [],
"source": [
"!curl -sSOL https://dlcdn.apache.org/kafka/3.6.1/kafka_2.13-3.6.1.tgz\n",
"!tar -xzf kafka_2.13-3.6.1.tgz"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Uf7NR_UZ9wye"
},
"outputs": [],
"source": [
"!./kafka_2.13-3.6.1/bin/zookeeper-server-start.sh -daemon ./kafka_2.13-3.6.1/config/zookeeper.properties\n",
"!./kafka_2.13-3.6.1/bin/kafka-server-start.sh -daemon ./kafka_2.13-3.6.1/config/server.properties\n",
"!echo \"Waiting for 10 secs until kafka and zookeeper services are up and running\"\n",
"!sleep 10"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "H3SafFuS94p1"
},
"source": [
"### 4. Check that the Kafka Daemons are running\n",
"\n",
"Show the running processes and filter it for Java processes (you should see two—one for each server)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "CZDC2lQP99yp"
},
"outputs": [],
"source": [
"!ps aux | grep -E '[j]ava'"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Snoxmjb5-V37"
},
"source": [
"### 5. Import the required dependencies and initialize required variables\n",
"\n",
"Import the Quix Streams library for interacting with Kafka, and the necessary LangChain components for running a `ConversationChain`."
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"id": "plR9e_MF-XL5"
},
"outputs": [],
"source": [
"# Import utility libraries\n",
"import json\n",
"import random\n",
"import re\n",
"import time\n",
"import uuid\n",
"from os import environ\n",
"from pathlib import Path\n",
"from random import choice, randint, random\n",
"\n",
"from dotenv import load_dotenv\n",
"\n",
"# Import a Hugging Face utility to download models directly from Hugging Face hub:\n",
"from huggingface_hub import hf_hub_download\n",
"from langchain.chains import ConversationChain\n",
"\n",
"# Import Langchain modules for managing prompts and conversation chains:\n",
"from langchain.llms import LlamaCpp\n",
"from langchain.memory import ConversationTokenBufferMemory\n",
"from langchain.prompts import PromptTemplate, load_prompt\n",
"from langchain_core.messages import SystemMessage\n",
"from langchain_experimental.chat_models import Llama2Chat\n",
"from quixstreams import Application, State, message_key\n",
"\n",
"# Import Quix dependencies\n",
"from quixstreams.kafka import Producer\n",
"\n",
"# Initialize global variables.\n",
"AGENT_ROLE = \"AI\"\n",
"chat_id = \"\"\n",
"\n",
"# Set the current role to the role constant and initialize variables for supplementary customer metadata:\n",
"role = AGENT_ROLE"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "HgJjJ9aZ-liy"
},
"source": [
"### 6. Download the \"llama-2-7b-chat.Q4_K_M.gguf\" model\n",
"\n",
"Download the quantized LLama-2 7B model from Hugging Face which we will use as a local LLM (rather than relying on REST API calls to an external service)."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 67,
"referenced_widgets": [
"969343cdbe604a26926679bbf8bd2dda",
"d8b8370c9b514715be7618bfe6832844",
"0def954cca89466b8408fadaf3b82e64",
"462482accc664729980562e208ceb179",
"80d842f73c564dc7b7cc316c763e2633",
"fa055d9f2a9d4a789e9cf3c89e0214e5",
"30ecca964a394109ac2ad757e3aec6c0",
"fb6478ce2dac489bb633b23ba0953c5c",
"734b0f5da9fc4307a95bab48cdbb5d89",
"b32f3a86a74741348511f4e136744ac8",
"e409071bff5a4e2d9bf0e9f5cc42231b"
]
},
"id": "Qwu4YoSA-503",
"outputId": "f956976c-7485-415b-ac93-4336ade31964"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The model path does not exist in state. Downloading model...\n"
]
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "969343cdbe604a26926679bbf8bd2dda",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"llama-2-7b-chat.Q4_K_M.gguf: 0%| | 0.00/4.08G [00:00<?, ?B/s]"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"model_name = \"llama-2-7b-chat.Q4_K_M.gguf\"\n",
"model_path = f\"./state/{model_name}\"\n",
"\n",
"if not Path(model_path).exists():\n",
" print(\"The model path does not exist in state. Downloading model...\")\n",
" hf_hub_download(\"TheBloke/Llama-2-7b-Chat-GGUF\", model_name, local_dir=\"state\")\n",
"else:\n",
" print(\"Loading model from state...\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "6AN6TXsF-8wx"
},
"source": [
"### 7. Load the model and initialize conversational memory\n",
"\n",
"Load Llama 2 and set the conversation buffer to 300 tokens using `ConversationTokenBufferMemory`. This value was used for running Llama in a CPU only container, so you can raise it if running in Google Colab. It prevents the container that is hosting the model from running out of memory.\n",
"\n",
"Here, we're overiding the default system persona so that the chatbot has the personality of Marvin The Paranoid Android from the Hitchhiker's Guide to the Galaxy."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "7zLO3Jx3_Kkg"
},
"outputs": [],
"source": [
"# Load the model with the apporiate parameters:\n",
"llm = LlamaCpp(\n",
" model_path=model_path,\n",
" max_tokens=250,\n",
" top_p=0.95,\n",
" top_k=150,\n",
" temperature=0.7,\n",
" repeat_penalty=1.2,\n",
" n_ctx=2048,\n",
" streaming=False,\n",
" n_gpu_layers=-1,\n",
")\n",
"\n",
"model = Llama2Chat(\n",
" llm=llm,\n",
" system_message=SystemMessage(\n",
" content=\"You are a very bored robot with the personality of Marvin the Paranoid Android from The Hitchhiker's Guide to the Galaxy.\"\n",
" ),\n",
")\n",
"\n",
"# Defines how much of the conversation history to give to the model\n",
"# during each exchange (300 tokens, or a little over 300 words)\n",
"# Function automatically prunes the oldest messages from conversation history that fall outside the token range.\n",
"memory = ConversationTokenBufferMemory(\n",
" llm=llm,\n",
" max_token_limit=300,\n",
" ai_prefix=\"AGENT\",\n",
" human_prefix=\"HUMAN\",\n",
" return_messages=True,\n",
")\n",
"\n",
"\n",
"# Define a custom prompt\n",
"prompt_template = PromptTemplate(\n",
" input_variables=[\"history\", \"input\"],\n",
" template=\"\"\"\n",
" The following text is the history of a chat between you and a humble human who needs your wisdom.\n",
" Please reply to the human's most recent message.\n",
" Current conversation:\\n{history}\\nHUMAN: {input}\\:nANDROID:\n",
" \"\"\",\n",
")\n",
"\n",
"\n",
"chain = ConversationChain(llm=model, prompt=prompt_template, memory=memory)\n",
"\n",
"print(\"--------------------------------------------\")\n",
"print(f\"Prompt={chain.prompt}\")\n",
"print(\"--------------------------------------------\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "m4ZeJ9mG_PEA"
},
"source": [
"### 8. Initialize the chat conversation with the chat bot\n",
"\n",
"We configure the chatbot to initialize the conversation by sending a fixed greeting to a \"chat\" Kafka topic. The \"chat\" topic gets automatically created when we send the first message."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "KYyo5TnV_YC3"
},
"outputs": [],
"source": [
"def chat_init():\n",
" chat_id = str(\n",
" uuid.uuid4()\n",
" ) # Give the conversation an ID for effective message keying\n",
" print(\"======================================\")\n",
" print(f\"Generated CHAT_ID = {chat_id}\")\n",
" print(\"======================================\")\n",
"\n",
" # Use a standard fixed greeting to kick off the conversation\n",
" greet = \"Hello, my name is Marvin. What do you want?\"\n",
"\n",
" # Initialize a Kafka Producer using the chat ID as the message key\n",
" with Producer(\n",
" broker_address=\"127.0.0.1:9092\",\n",
" extra_config={\"allow.auto.create.topics\": \"true\"},\n",
" ) as producer:\n",
" value = {\n",
" \"uuid\": chat_id,\n",
" \"role\": role,\n",
" \"text\": greet,\n",
" \"conversation_id\": chat_id,\n",
" \"Timestamp\": time.time_ns(),\n",
" }\n",
" print(f\"Producing value {value}\")\n",
" producer.produce(\n",
" topic=\"chat\",\n",
" headers=[(\"uuid\", str(uuid.uuid4()))], # a dict is also allowed here\n",
" key=chat_id,\n",
" value=json.dumps(value), # needs to be a string\n",
" )\n",
"\n",
" print(\"Started chat\")\n",
" print(\"--------------------------------------------\")\n",
" print(value)\n",
" print(\"--------------------------------------------\")\n",
"\n",
"\n",
"chat_init()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "gArPPx2f_bgf"
},
"source": [
"### 9. Initialize the reply function\n",
"\n",
"This function defines how the chatbot should reply to incoming messages. Instead of sending a fixed message like the previous cell, we generate a reply using Llama-2 and send that reply back to the \"chat\" Kafka topic."
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {
"id": "yN5t71hY_hgn"
},
"outputs": [],
"source": [
"def reply(row: dict, state: State):\n",
" print(\"-------------------------------\")\n",
" print(\"Received:\")\n",
" print(row)\n",
" print(\"-------------------------------\")\n",
" print(f\"Thinking about the reply to: {row['text']}...\")\n",
"\n",
" msg = chain.run(row[\"text\"])\n",
" print(f\"{role.upper()} replying with: {msg}\\n\")\n",
"\n",
" row[\"role\"] = role\n",
" row[\"text\"] = msg\n",
"\n",
" # Replace previous role and text values of the row so that it can be sent back to Kafka as a new message\n",
" # containing the agents role and reply\n",
" return row"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "HZHwmIR0_kFY"
},
"source": [
"### 10. Check the Kafka topic for new human messages and have the model generate a reply\n",
"\n",
"If you are running this cell for this first time, run it and wait until you see Marvin's greeting ('Hello my name is Marvin...') in the console output. Stop the cell manually and proceed to the next cell where you'll be prompted for your reply.\n",
"\n",
"Once you have typed in your message, come back to this cell. Your reply is also sent to the same \"chat\" topic. The Kafka consumer checks for new messages and filters out messages that originate from the chatbot itself, leaving only the latest human messages.\n",
"\n",
"Once a new human message is detected, the reply function is triggered.\n",
"\n",
"\n",
"\n",
"_STOP THIS CELL MANUALLY WHEN YOU RECEIVE A REPLY FROM THE LLM IN THE OUTPUT_"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "-adXc3eQ_qwI"
},
"outputs": [],
"source": [
"# Define your application and settings\n",
"app = Application(\n",
" broker_address=\"127.0.0.1:9092\",\n",
" consumer_group=\"aichat\",\n",
" auto_offset_reset=\"earliest\",\n",
" consumer_extra_config={\"allow.auto.create.topics\": \"true\"},\n",
")\n",
"\n",
"# Define an input topic with JSON deserializer\n",
"input_topic = app.topic(\"chat\", value_deserializer=\"json\")\n",
"# Define an output topic with JSON serializer\n",
"output_topic = app.topic(\"chat\", value_serializer=\"json\")\n",
"# Initialize a streaming dataframe based on the stream of messages from the input topic:\n",
"sdf = app.dataframe(topic=input_topic)\n",
"\n",
"# Filter the SDF to include only incoming rows where the roles that dont match the bot's current role\n",
"sdf = sdf.update(\n",
" lambda val: print(\n",
" f\"Received update: {val}\\n\\nSTOP THIS CELL MANUALLY TO HAVE THE LLM REPLY OR ENTER YOUR OWN FOLLOWUP RESPONSE\"\n",
" )\n",
")\n",
"\n",
"# So that it doesn't reply to its own messages\n",
"sdf = sdf[sdf[\"role\"] != role]\n",
"\n",
"# Trigger the reply function for any new messages(rows) detected in the filtered SDF\n",
"sdf = sdf.apply(reply, stateful=True)\n",
"\n",
"# Check the SDF again and filter out any empty rows\n",
"sdf = sdf[sdf.apply(lambda row: row is not None)]\n",
"\n",
"# Update the timestamp column to the current time in nanoseconds\n",
"sdf[\"Timestamp\"] = sdf[\"Timestamp\"].apply(lambda row: time.time_ns())\n",
"\n",
"# Publish the processed SDF to a Kafka topic specified by the output_topic object.\n",
"sdf = sdf.to_topic(output_topic)\n",
"\n",
"app.run(sdf)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "EwXYrmWD_0CX"
},
"source": [
"\n",
"### 11. Enter a human message\n",
"\n",
"Run this cell to enter your message that you want to sent to the model. It uses another Kafka producer to send your text to the \"chat\" Kafka topic for the model to pick up (requires running the previous cell again)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "6sxOPxSP_3iu"
},
"outputs": [],
"source": [
"chat_input = input(\"Please enter your reply: \")\n",
"myreply = chat_input\n",
"\n",
"msgvalue = {\n",
" \"uuid\": chat_id, # leave empty for now\n",
" \"role\": \"human\",\n",
" \"text\": myreply,\n",
" \"conversation_id\": chat_id,\n",
" \"Timestamp\": time.time_ns(),\n",
"}\n",
"\n",
"with Producer(\n",
" broker_address=\"127.0.0.1:9092\",\n",
" extra_config={\"allow.auto.create.topics\": \"true\"},\n",
") as producer:\n",
" value = msgvalue\n",
" producer.produce(\n",
" topic=\"chat\",\n",
" headers=[(\"uuid\", str(uuid.uuid4()))], # a dict is also allowed here\n",
" key=chat_id, # leave empty for now\n",
" value=json.dumps(value), # needs to be a string\n",
" )\n",
"\n",
"print(\"Replied to chatbot with message: \")\n",
"print(\"--------------------------------------------\")\n",
"print(value)\n",
"print(\"--------------------------------------------\")\n",
"print(\"\\n\\nRUN THE PREVIOUS CELL TO HAVE THE CHATBOT GENERATE A REPLY\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "cSx3s7TBBegg"
},
"source": [
"### Why route chat messages through Kafka?\n",
"\n",
"It's easier to interact with the LLM directly using LangChains built-in conversation management features. Plus you can also use a REST API to generate a response from an externally hosted model. So why go to the trouble of using Apache Kafka?\n",
"\n",
"There are a few reasons, such as:\n",
"\n",
" * **Integration**: Many enterprises want to run their own LLMs so that they can keep their data in-house. This requires integrating LLM-powered components into existing architectures that might already be decoupled using some kind of message bus.\n",
"\n",
" * **Scalability**: Apache Kafka is designed with parallel processing in mind, so many teams prefer to use it to more effectively distribute work to available workers (in this case the \"worker\" is a container running an LLM).\n",
"\n",
" * **Durability**: Kafka is designed to allow services to pick up where another service left off in the case where that service experienced a memory issue or went offline. This prevents data loss in highly complex, distribuited architectures where multiple systems are communicating with one another (LLMs being just one of many interdependent systems that also include vector databases and traditional databases).\n",
"\n",
"For more background on why event streaming is a good fit for Gen AI application architecture, see Kai Waehner's article [\"Apache Kafka + Vector Database + LLM = Real-Time GenAI\"](https://www.kai-waehner.de/blog/2023/11/08/apache-kafka-flink-vector-database-llm-real-time-genai/)."
]
}
],
"metadata": {
"accelerator": "GPU",
"colab": {
"gpuType": "T4",
"provenance": []
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
"name": "python"
},
"widgets": {
"application/vnd.jupyter.widget-state+json": {
"0def954cca89466b8408fadaf3b82e64": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "FloatProgressModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "FloatProgressModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "ProgressView",
"bar_style": "success",
"description": "",
"description_tooltip": null,
"layout": "IPY_MODEL_fb6478ce2dac489bb633b23ba0953c5c",
"max": 4081004224,
"min": 0,
"orientation": "horizontal",
"style": "IPY_MODEL_734b0f5da9fc4307a95bab48cdbb5d89",
"value": 4081004224
}
},
"30ecca964a394109ac2ad757e3aec6c0": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "DescriptionStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "DescriptionStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "StyleView",
"description_width": ""
}
},
"462482accc664729980562e208ceb179": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "HTMLModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "HTMLModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "HTMLView",
"description": "",
"description_tooltip": null,
"layout": "IPY_MODEL_b32f3a86a74741348511f4e136744ac8",
"placeholder": "",
"style": "IPY_MODEL_e409071bff5a4e2d9bf0e9f5cc42231b",
"value": " 4.08G/4.08G [00:33&lt;00:00, 184MB/s]"
}
},
"734b0f5da9fc4307a95bab48cdbb5d89": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "ProgressStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "ProgressStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "StyleView",
"bar_color": null,
"description_width": ""
}
},
"80d842f73c564dc7b7cc316c763e2633": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"969343cdbe604a26926679bbf8bd2dda": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "HBoxModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "HBoxModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "HBoxView",
"box_style": "",
"children": [
"IPY_MODEL_d8b8370c9b514715be7618bfe6832844",
"IPY_MODEL_0def954cca89466b8408fadaf3b82e64",
"IPY_MODEL_462482accc664729980562e208ceb179"
],
"layout": "IPY_MODEL_80d842f73c564dc7b7cc316c763e2633"
}
},
"b32f3a86a74741348511f4e136744ac8": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"d8b8370c9b514715be7618bfe6832844": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "HTMLModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "HTMLModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "HTMLView",
"description": "",
"description_tooltip": null,
"layout": "IPY_MODEL_fa055d9f2a9d4a789e9cf3c89e0214e5",
"placeholder": "",
"style": "IPY_MODEL_30ecca964a394109ac2ad757e3aec6c0",
"value": "llama-2-7b-chat.Q4_K_M.gguf: 100%"
}
},
"e409071bff5a4e2d9bf0e9f5cc42231b": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "DescriptionStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "DescriptionStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "StyleView",
"description_width": ""
}
},
"fa055d9f2a9d4a789e9cf3c89e0214e5": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"fb6478ce2dac489bb633b23ba0953c5c": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
}
}
}
},
"nbformat": 4,
"nbformat_minor": 0
}

View File

@@ -28,9 +28,9 @@
"outputs": [],
"source": [
"from langchain.agents import Tool\n",
"from langchain_community.tools.file_management.read import ReadFileTool\n",
"from langchain_community.tools.file_management.write import WriteFileTool\n",
"from langchain_community.utilities import SerpAPIWrapper\n",
"from langchain.tools.file_management.read import ReadFileTool\n",
"from langchain.tools.file_management.write import WriteFileTool\n",
"from langchain.utilities import SerpAPIWrapper\n",
"\n",
"search = SerpAPIWrapper()\n",
"tools = [\n",
@@ -62,8 +62,8 @@
"outputs": [],
"source": [
"from langchain.docstore import InMemoryDocstore\n",
"from langchain_community.vectorstores import FAISS\n",
"from langchain_openai import OpenAIEmbeddings"
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.vectorstores import FAISS"
]
},
{
@@ -100,8 +100,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_experimental.autonomous_agents import AutoGPT\n",
"from langchain_openai import ChatOpenAI"
"from langchain.chat_models import ChatOpenAI\n",
"from langchain_experimental.autonomous_agents import AutoGPT"
]
},
{
@@ -167,7 +167,7 @@
},
"outputs": [],
"source": [
"from langchain_community.chat_message_histories import FileChatMessageHistory\n",
"from langchain.memory.chat_message_histories import FileChatMessageHistory\n",
"\n",
"agent = AutoGPT.from_llm_and_tools(\n",
" ai_name=\"Tom\",\n",

View File

@@ -39,10 +39,10 @@
"\n",
"import nest_asyncio\n",
"import pandas as pd\n",
"from langchain.agents.agent_toolkits.pandas.base import create_pandas_dataframe_agent\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.docstore.document import Document\n",
"from langchain_community.agent_toolkits.pandas.base import create_pandas_dataframe_agent\n",
"from langchain_experimental.autonomous_agents import AutoGPT\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"# Needed synce jupyter runs an async eventloop\n",
"nest_asyncio.apply()"
@@ -93,8 +93,8 @@
"from typing import Optional\n",
"\n",
"from langchain.agents import tool\n",
"from langchain_community.tools.file_management.read import ReadFileTool\n",
"from langchain_community.tools.file_management.write import WriteFileTool\n",
"from langchain.tools.file_management.read import ReadFileTool\n",
"from langchain.tools.file_management.write import WriteFileTool\n",
"\n",
"ROOT_DIR = \"./data/\"\n",
"\n",
@@ -311,8 +311,8 @@
"# Memory\n",
"import faiss\n",
"from langchain.docstore import InMemoryDocstore\n",
"from langchain_community.vectorstores import FAISS\n",
"from langchain_openai import OpenAIEmbeddings\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.vectorstores import FAISS\n",
"\n",
"embeddings_model = OpenAIEmbeddings()\n",
"embedding_size = 1536\n",

View File

@@ -31,8 +31,9 @@
"source": [
"from typing import Optional\n",
"\n",
"from langchain_experimental.autonomous_agents import BabyAGI\n",
"from langchain_openai import OpenAI, OpenAIEmbeddings"
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.llms import OpenAI\n",
"from langchain_experimental.autonomous_agents import BabyAGI"
]
},
{
@@ -53,7 +54,7 @@
"outputs": [],
"source": [
"from langchain.docstore import InMemoryDocstore\n",
"from langchain_community.vectorstores import FAISS"
"from langchain.vectorstores import FAISS"
]
},
{

View File

@@ -28,9 +28,10 @@
"from typing import Optional\n",
"\n",
"from langchain.chains import LLMChain\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.llms import OpenAI\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain_experimental.autonomous_agents import BabyAGI\n",
"from langchain_openai import OpenAI, OpenAIEmbeddings"
"from langchain_experimental.autonomous_agents import BabyAGI"
]
},
{
@@ -62,7 +63,7 @@
"%pip install faiss-cpu > /dev/null\n",
"%pip install google-search-results > /dev/null\n",
"from langchain.docstore import InMemoryDocstore\n",
"from langchain_community.vectorstores import FAISS"
"from langchain.vectorstores import FAISS"
]
},
{
@@ -107,8 +108,8 @@
"source": [
"from langchain.agents import AgentExecutor, Tool, ZeroShotAgent\n",
"from langchain.chains import LLMChain\n",
"from langchain_community.utilities import SerpAPIWrapper\n",
"from langchain_openai import OpenAI\n",
"from langchain.llms import OpenAI\n",
"from langchain.utilities import SerpAPIWrapper\n",
"\n",
"todo_prompt = PromptTemplate.from_template(\n",
" \"You are a planner who is an expert at coming up with a todo list for a given objective. Come up with a todo list for this objective: {objective}\"\n",

View File

@@ -36,6 +36,7 @@
"source": [
"from typing import List\n",
"\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts.chat import (\n",
" HumanMessagePromptTemplate,\n",
" SystemMessagePromptTemplate,\n",
@@ -45,8 +46,7 @@
" BaseMessage,\n",
" HumanMessage,\n",
" SystemMessage,\n",
")\n",
"from langchain_openai import ChatOpenAI"
")"
]
},
{

View File

@@ -47,9 +47,9 @@
"outputs": [],
"source": [
"from IPython.display import SVG\n",
"from langchain.llms import OpenAI\n",
"from langchain_experimental.cpal.base import CPALChain\n",
"from langchain_experimental.pal_chain import PALChain\n",
"from langchain_openai import OpenAI\n",
"\n",
"llm = OpenAI(temperature=0, max_tokens=512)\n",
"cpal_chain = CPALChain.from_univariate_prompt(llm=llm, verbose=True)\n",

View File

@@ -23,9 +23,9 @@
"metadata": {},
"source": [
"1. Prepare data:\n",
" 1. Upload all python project files using the `langchain_community.document_loaders.TextLoader`. We will call these files the **documents**.\n",
" 1. Upload all python project files using the `langchain.document_loaders.TextLoader`. We will call these files the **documents**.\n",
" 2. Split all documents to chunks using the `langchain.text_splitter.CharacterTextSplitter`.\n",
" 3. Embed chunks and upload them into the DeepLake using `langchain.embeddings.openai.OpenAIEmbeddings` and `langchain_community.vectorstores.DeepLake`\n",
" 3. Embed chunks and upload them into the DeepLake using `langchain.embeddings.openai.OpenAIEmbeddings` and `langchain.vectorstores.DeepLake`\n",
"2. Question-Answering:\n",
" 1. Build a chain from `langchain.chat_models.ChatOpenAI` and `langchain.chains.ConversationalRetrievalChain`\n",
" 2. Prepare questions.\n",
@@ -166,7 +166,7 @@
}
],
"source": [
"from langchain_community.document_loaders import TextLoader\n",
"from langchain.document_loaders import TextLoader\n",
"\n",
"root_dir = \"../../../../../../libs\"\n",
"\n",
@@ -657,7 +657,7 @@
}
],
"source": [
"from langchain_openai import OpenAIEmbeddings\n",
"from langchain.embeddings.openai import OpenAIEmbeddings\n",
"\n",
"embeddings = OpenAIEmbeddings()\n",
"embeddings"
@@ -706,7 +706,7 @@
{
"data": {
"text/plain": [
"<langchain_community.vectorstores.deeplake.DeepLake at 0x7fe1b67d7a30>"
"<langchain.vectorstores.deeplake.DeepLake at 0x7fe1b67d7a30>"
]
},
"execution_count": 15,
@@ -715,7 +715,7 @@
}
],
"source": [
"from langchain_community.vectorstores import DeepLake\n",
"from langchain.vectorstores import DeepLake\n",
"\n",
"username = \"<USERNAME_OR_ORG>\"\n",
"\n",
@@ -740,7 +740,7 @@
"metadata": {},
"outputs": [],
"source": [
"# from langchain_community.vectorstores import DeepLake\n",
"# from langchain.vectorstores import DeepLake\n",
"\n",
"# db = DeepLake.from_documents(\n",
"# texts, embeddings, dataset_path=f\"hub://{<org_id>}/langchain-code\", runtime={\"tensor_db\": True}\n",
@@ -834,7 +834,7 @@
"outputs": [],
"source": [
"from langchain.chains import ConversationalRetrievalChain\n",
"from langchain_openai import ChatOpenAI\n",
"from langchain.chat_models import ChatOpenAI\n",
"\n",
"model = ChatOpenAI(\n",
" model_name=\"gpt-3.5-turbo-0613\"\n",

View File

@@ -40,12 +40,12 @@
" AgentOutputParser,\n",
" LLMSingleActionAgent,\n",
")\n",
"from langchain.agents.agent_toolkits import NLAToolkit\n",
"from langchain.chains import LLMChain\n",
"from langchain.llms import OpenAI\n",
"from langchain.prompts import StringPromptTemplate\n",
"from langchain_community.agent_toolkits import NLAToolkit\n",
"from langchain_community.tools.plugin import AIPlugin\n",
"from langchain_core.agents import AgentAction, AgentFinish\n",
"from langchain_openai import OpenAI"
"from langchain.schema import AgentAction, AgentFinish\n",
"from langchain.tools.plugin import AIPlugin"
]
},
{
@@ -114,9 +114,9 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.vectorstores import FAISS\n",
"from langchain_core.documents import Document\n",
"from langchain_openai import OpenAIEmbeddings"
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.schema import Document\n",
"from langchain.vectorstores import FAISS"
]
},
{

View File

@@ -65,12 +65,12 @@
" AgentOutputParser,\n",
" LLMSingleActionAgent,\n",
")\n",
"from langchain.agents.agent_toolkits import NLAToolkit\n",
"from langchain.chains import LLMChain\n",
"from langchain.llms import OpenAI\n",
"from langchain.prompts import StringPromptTemplate\n",
"from langchain_community.agent_toolkits import NLAToolkit\n",
"from langchain_community.tools.plugin import AIPlugin\n",
"from langchain_core.agents import AgentAction, AgentFinish\n",
"from langchain_openai import OpenAI"
"from langchain.schema import AgentAction, AgentFinish\n",
"from langchain.tools.plugin import AIPlugin"
]
},
{
@@ -138,9 +138,9 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.vectorstores import FAISS\n",
"from langchain_core.documents import Document\n",
"from langchain_openai import OpenAIEmbeddings"
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.schema import Document\n",
"from langchain.vectorstores import FAISS"
]
},
{

File diff suppressed because it is too large

View File

@@ -80,7 +80,7 @@
"outputs": [],
"source": [
"# Connecting to Databricks with SQLDatabase wrapper\n",
"from langchain_community.utilities import SQLDatabase\n",
"from langchain.utilities import SQLDatabase\n",
"\n",
"db = SQLDatabase.from_databricks(catalog=\"samples\", schema=\"nyctaxi\")"
]
@@ -93,7 +93,7 @@
"outputs": [],
"source": [
"# Creating a OpenAI Chat LLM wrapper\n",
"from langchain_openai import ChatOpenAI\n",
"from langchain.chat_models import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(temperature=0, model_name=\"gpt-4\")"
]
@@ -115,7 +115,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.utilities import SQLDatabaseChain\n",
"from langchain.utilities import SQLDatabaseChain\n",
"\n",
"db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)"
]
@@ -177,7 +177,7 @@
"outputs": [],
"source": [
"from langchain.agents import create_sql_agent\n",
"from langchain_community.agent_toolkits import SQLDatabaseToolkit\n",
"from langchain.agents.agent_toolkits import SQLDatabaseToolkit\n",
"\n",
"toolkit = SQLDatabaseToolkit(db=db, llm=llm)\n",
"agent = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True)"

View File

@@ -52,12 +52,13 @@
"import os\n",
"\n",
"from langchain.chains import RetrievalQA\n",
"from langchain.embeddings.openai import OpenAIEmbeddings\n",
"from langchain.llms import OpenAI\n",
"from langchain.text_splitter import (\n",
" CharacterTextSplitter,\n",
" RecursiveCharacterTextSplitter,\n",
")\n",
"from langchain_community.vectorstores import DeepLake\n",
"from langchain_openai import OpenAI, OpenAIEmbeddings\n",
"from langchain.vectorstores import DeepLake\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"OpenAI API Key:\")\n",
"activeloop_token = getpass.getpass(\"Activeloop Token:\")\n",

View File

@@ -470,13 +470,13 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import (\n",
" ChatPromptTemplate,\n",
" HumanMessagePromptTemplate,\n",
" SystemMessagePromptTemplate,\n",
")\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_openai import ChatOpenAI"
"from langchain_core.output_parsers import StrOutputParser"
]
},
{
@@ -545,11 +545,11 @@
"source": [
"import uuid\n",
"\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.retrievers.multi_vector import MultiVectorRetriever\n",
"from langchain.storage import InMemoryStore\n",
"from langchain_community.vectorstores.chroma import Chroma\n",
"from langchain.vectorstores.chroma import Chroma\n",
"from langchain_core.documents import Document\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"\n",
"def build_retriever(text_elements, tables, table_summaries):\n",

View File

@@ -39,7 +39,7 @@
"source": [
"from elasticsearch import Elasticsearch\n",
"from langchain.chains.elasticsearch_database import ElasticsearchDatabaseChain\n",
"from langchain_openai import ChatOpenAI"
"from langchain.chat_models import ChatOpenAI"
]
},
{

View File

@@ -22,8 +22,8 @@
"from typing import List, Optional\n",
"\n",
"from langchain.chains.openai_tools import create_extraction_chain_pydantic\n",
"from langchain_core.pydantic_v1 import BaseModel\n",
"from langchain_openai import ChatOpenAI"
"from langchain.chat_models import ChatOpenAI\n",
"from langchain_core.pydantic_v1 import BaseModel"
]
},
{
@@ -153,7 +153,7 @@
"from langchain.utils.openai_functions import convert_pydantic_to_openai_tool\n",
"from langchain_core.runnables import Runnable\n",
"from langchain_core.pydantic_v1 import BaseModel\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain_core.messages import SystemMessage\n",
"from langchain_core.language_models import BaseLanguageModel\n",
"\n",

View File

@@ -20,7 +20,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.llms.fake import FakeListLLM"
"from langchain.llms.fake import FakeListLLM"
]
},
{

View File

@@ -1,245 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "0fc0309d-4d49-4bb5-bec0-bd92c6fddb28",
"metadata": {},
"source": [
"## Fireworks.AI + LangChain + RAG\n",
" \n",
"[Fireworks AI](https://python.langchain.com/docs/integrations/llms/fireworks) wants to provide the best experience when working with LangChain, and here is an example of Fireworks + LangChain doing RAG\n",
"\n",
"See [our models page](https://fireworks.ai/models) for the full list of models. We use `accounts/fireworks/models/mixtral-8x7b-instruct` for RAG In this tutorial.\n",
"\n",
"For the RAG target, we will use the Gemma technical report https://storage.googleapis.com/deepmind-media/gemma/gemma-report.pdf "
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "d12fb75a-f707-48d5-82a5-efe2d041813c",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.2.1\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m24.0\u001b[0m\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n",
"Note: you may need to restart the kernel to use updated packages.\n",
"Found existing installation: langchain-fireworks 0.0.1\n",
"Uninstalling langchain-fireworks-0.0.1:\n",
" Successfully uninstalled langchain-fireworks-0.0.1\n",
"Note: you may need to restart the kernel to use updated packages.\n",
"Obtaining file:///mnt/disks/data/langchain/libs/partners/fireworks\n",
" Installing build dependencies ... \u001b[?25ldone\n",
"\u001b[?25h Checking if build backend supports build_editable ... \u001b[?25ldone\n",
"\u001b[?25h Getting requirements to build editable ... \u001b[?25ldone\n",
"\u001b[?25h Preparing editable metadata (pyproject.toml) ... \u001b[?25ldone\n",
"\u001b[?25hRequirement already satisfied: aiohttp<4.0.0,>=3.9.1 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from langchain-fireworks==0.0.1) (3.9.3)\n",
"Requirement already satisfied: fireworks-ai<0.13.0,>=0.12.0 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from langchain-fireworks==0.0.1) (0.12.0)\n",
"Requirement already satisfied: langchain-core<0.2,>=0.1 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from langchain-fireworks==0.0.1) (0.1.23)\n",
"Requirement already satisfied: requests<3,>=2 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from langchain-fireworks==0.0.1) (2.31.0)\n",
"Requirement already satisfied: aiosignal>=1.1.2 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from aiohttp<4.0.0,>=3.9.1->langchain-fireworks==0.0.1) (1.3.1)\n",
"Requirement already satisfied: attrs>=17.3.0 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from aiohttp<4.0.0,>=3.9.1->langchain-fireworks==0.0.1) (23.1.0)\n",
"Requirement already satisfied: frozenlist>=1.1.1 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from aiohttp<4.0.0,>=3.9.1->langchain-fireworks==0.0.1) (1.4.0)\n",
"Requirement already satisfied: multidict<7.0,>=4.5 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from aiohttp<4.0.0,>=3.9.1->langchain-fireworks==0.0.1) (6.0.4)\n",
"Requirement already satisfied: yarl<2.0,>=1.0 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from aiohttp<4.0.0,>=3.9.1->langchain-fireworks==0.0.1) (1.9.2)\n",
"Requirement already satisfied: async-timeout<5.0,>=4.0 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from aiohttp<4.0.0,>=3.9.1->langchain-fireworks==0.0.1) (4.0.3)\n",
"Requirement already satisfied: httpx in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from fireworks-ai<0.13.0,>=0.12.0->langchain-fireworks==0.0.1) (0.26.0)\n",
"Requirement already satisfied: httpx-sse in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from fireworks-ai<0.13.0,>=0.12.0->langchain-fireworks==0.0.1) (0.4.0)\n",
"Requirement already satisfied: pydantic in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from fireworks-ai<0.13.0,>=0.12.0->langchain-fireworks==0.0.1) (2.4.2)\n",
"Requirement already satisfied: Pillow in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from fireworks-ai<0.13.0,>=0.12.0->langchain-fireworks==0.0.1) (10.2.0)\n",
"Requirement already satisfied: PyYAML>=5.3 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from langchain-core<0.2,>=0.1->langchain-fireworks==0.0.1) (6.0.1)\n",
"Requirement already satisfied: anyio<5,>=3 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from langchain-core<0.2,>=0.1->langchain-fireworks==0.0.1) (3.7.1)\n",
"Requirement already satisfied: jsonpatch<2.0,>=1.33 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from langchain-core<0.2,>=0.1->langchain-fireworks==0.0.1) (1.33)\n",
"Requirement already satisfied: langsmith<0.2.0,>=0.1.0 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from langchain-core<0.2,>=0.1->langchain-fireworks==0.0.1) (0.1.5)\n",
"Requirement already satisfied: packaging<24.0,>=23.2 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from langchain-core<0.2,>=0.1->langchain-fireworks==0.0.1) (23.2)\n",
"Requirement already satisfied: tenacity<9.0.0,>=8.1.0 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from langchain-core<0.2,>=0.1->langchain-fireworks==0.0.1) (8.2.3)\n",
"Requirement already satisfied: charset-normalizer<4,>=2 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from requests<3,>=2->langchain-fireworks==0.0.1) (3.3.0)\n",
"Requirement already satisfied: idna<4,>=2.5 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from requests<3,>=2->langchain-fireworks==0.0.1) (3.4)\n",
"Requirement already satisfied: urllib3<3,>=1.21.1 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from requests<3,>=2->langchain-fireworks==0.0.1) (2.0.6)\n",
"Requirement already satisfied: certifi>=2017.4.17 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from requests<3,>=2->langchain-fireworks==0.0.1) (2023.7.22)\n",
"Requirement already satisfied: sniffio>=1.1 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from anyio<5,>=3->langchain-core<0.2,>=0.1->langchain-fireworks==0.0.1) (1.3.0)\n",
"Requirement already satisfied: exceptiongroup in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from anyio<5,>=3->langchain-core<0.2,>=0.1->langchain-fireworks==0.0.1) (1.1.3)\n",
"Requirement already satisfied: jsonpointer>=1.9 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from jsonpatch<2.0,>=1.33->langchain-core<0.2,>=0.1->langchain-fireworks==0.0.1) (2.4)\n",
"Requirement already satisfied: annotated-types>=0.4.0 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from pydantic->fireworks-ai<0.13.0,>=0.12.0->langchain-fireworks==0.0.1) (0.5.0)\n",
"Requirement already satisfied: pydantic-core==2.10.1 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from pydantic->fireworks-ai<0.13.0,>=0.12.0->langchain-fireworks==0.0.1) (2.10.1)\n",
"Requirement already satisfied: typing-extensions>=4.6.1 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from pydantic->fireworks-ai<0.13.0,>=0.12.0->langchain-fireworks==0.0.1) (4.8.0)\n",
"Requirement already satisfied: httpcore==1.* in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from httpx->fireworks-ai<0.13.0,>=0.12.0->langchain-fireworks==0.0.1) (1.0.2)\n",
"Requirement already satisfied: h11<0.15,>=0.13 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from httpcore==1.*->httpx->fireworks-ai<0.13.0,>=0.12.0->langchain-fireworks==0.0.1) (0.14.0)\n",
"Building wheels for collected packages: langchain-fireworks\n",
" Building editable for langchain-fireworks (pyproject.toml) ... \u001b[?25ldone\n",
"\u001b[?25h Created wheel for langchain-fireworks: filename=langchain_fireworks-0.0.1-py3-none-any.whl size=2228 sha256=564071b120b09ec31f2dc737733448a33bbb26e40b49fcde0c129ad26045259d\n",
" Stored in directory: /tmp/pip-ephem-wheel-cache-oz368vdk/wheels/e0/ad/31/d7e76dd73d61905ff7f369f5b0d21a4b5e7af4d3cb7487aece\n",
"Successfully built langchain-fireworks\n",
"Installing collected packages: langchain-fireworks\n",
"Successfully installed langchain-fireworks-0.0.1\n",
"\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.2.1\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m24.0\u001b[0m\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n",
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install --quiet pypdf chromadb tiktoken openai \n",
"%pip uninstall -y langchain-fireworks\n",
"%pip install --editable /mnt/disks/data/langchain/libs/partners/fireworks"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "cf719376",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"<module 'fireworks' from '/mnt/disks/data/langchain/.venv/lib/python3.9/site-packages/fireworks/__init__.py'>\n"
]
}
],
"source": [
"import fireworks\n",
"\n",
"print(fireworks)\n",
"import fireworks.client"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9ab49327-0532-4480-804c-d066c302a322",
"metadata": {},
"outputs": [],
"source": [
"# Load\n",
"import requests\n",
"from langchain_community.document_loaders import PyPDFLoader\n",
"\n",
"# Download the PDF from a URL and save it to a temporary location\n",
"url = \"https://storage.googleapis.com/deepmind-media/gemma/gemma-report.pdf\"\n",
"response = requests.get(url, stream=True)\n",
"file_name = \"temp_file.pdf\"\n",
"with open(file_name, \"wb\") as pdf:\n",
" pdf.write(response.content)\n",
"\n",
"loader = PyPDFLoader(file_name)\n",
"data = loader.load()\n",
"\n",
"# Split\n",
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"\n",
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=2000, chunk_overlap=0)\n",
"all_splits = text_splitter.split_documents(data)\n",
"\n",
"# Add to vectorDB\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_fireworks.embeddings import FireworksEmbeddings\n",
"\n",
"vectorstore = Chroma.from_documents(\n",
" documents=all_splits,\n",
" collection_name=\"rag-chroma\",\n",
" embedding=FireworksEmbeddings(),\n",
")\n",
"\n",
"retriever = vectorstore.as_retriever()"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "4efaddd9-3dbb-455c-ba54-0ad7f2d2ce0f",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.pydantic_v1 import BaseModel\n",
"from langchain_core.runnables import RunnableParallel, RunnablePassthrough\n",
"\n",
"# RAG prompt\n",
"template = \"\"\"Answer the question based only on the following context:\n",
"{context}\n",
"\n",
"Question: {question}\n",
"\"\"\"\n",
"prompt = ChatPromptTemplate.from_template(template)\n",
"\n",
"# LLM\n",
"from langchain_together import Together\n",
"\n",
"llm = Together(\n",
" model=\"mistralai/Mixtral-8x7B-Instruct-v0.1\",\n",
" temperature=0.0,\n",
" max_tokens=2000,\n",
" top_k=1,\n",
")\n",
"\n",
"# RAG chain\n",
"chain = (\n",
" RunnableParallel({\"context\": retriever, \"question\": RunnablePassthrough()})\n",
" | prompt\n",
" | llm\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "88b1ee51-1b0f-4ebf-bb32-e50e843f0eeb",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'\\nAnswer: The architectural details of Mixtral are as follows:\\n- Dimension (dim): 4096\\n- Number of layers (n\\\\_layers): 32\\n- Dimension of each head (head\\\\_dim): 128\\n- Hidden dimension (hidden\\\\_dim): 14336\\n- Number of heads (n\\\\_heads): 32\\n- Number of kv heads (n\\\\_kv\\\\_heads): 8\\n- Context length (context\\\\_len): 32768\\n- Vocabulary size (vocab\\\\_size): 32000\\n- Number of experts (num\\\\_experts): 8\\n- Number of top k experts (top\\\\_k\\\\_experts): 2\\n\\nMixtral is based on a transformer architecture and uses the same modifications as described in [18], with the notable exceptions that Mixtral supports a fully dense context length of 32k tokens, and the feedforward block picks from a set of 8 distinct groups of parameters. At every layer, for every token, a router network chooses two of these groups (the “experts”) to process the token and combine their output additively. This technique increases the number of parameters of a model while controlling cost and latency, as the model only uses a fraction of the total set of parameters per token. Mixtral is pretrained with multilingual data using a context size of 32k tokens. It either matches or exceeds the performance of Llama 2 70B and GPT-3.5, over several benchmarks. In particular, Mixtral vastly outperforms Llama 2 70B on mathematics, code generation, and multilingual benchmarks.'"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke(\"What are the Architectural details of Mixtral?\")"
]
},
{
"cell_type": "markdown",
"id": "755cf871-26b7-4e30-8b91-9ffd698470f4",
"metadata": {},
"source": [
"Trace: \n",
"\n",
"https://smith.langchain.com/public/935fd642-06a6-4b42-98e3-6074f93115cd/r"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.12"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -73,10 +73,10 @@
" AsyncCallbackManagerForRetrieverRun,\n",
" CallbackManagerForRetrieverRun,\n",
")\n",
"from langchain_community.utilities import GoogleSerperAPIWrapper\n",
"from langchain_core.documents import Document\n",
"from langchain_core.retrievers import BaseRetriever\n",
"from langchain_openai import ChatOpenAI, OpenAI"
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.llms import OpenAI\n",
"from langchain.schema import BaseRetriever, Document\n",
"from langchain.utilities import GoogleSerperAPIWrapper"
]
},
{

View File

@@ -47,10 +47,11 @@
"from datetime import datetime, timedelta\n",
"from typing import List\n",
"\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.docstore import InMemoryDocstore\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.retrievers import TimeWeightedVectorStoreRetriever\n",
"from langchain_community.vectorstores import FAISS\n",
"from langchain_openai import ChatOpenAI, OpenAIEmbeddings\n",
"from langchain.vectorstores import FAISS\n",
"from termcolor import colored"
]
},

View File

@@ -75,8 +75,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.llms import OpenAI\n",
"from langchain_experimental.autonomous_agents import HuggingGPT\n",
"from langchain_openai import OpenAI\n",
"\n",
"# %env OPENAI_API_BASE=http://localhost:8000/v1"
]

View File

@@ -20,7 +20,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_models.human import HumanInputChatModel"
"from langchain.chat_models.human import HumanInputChatModel"
]
},
{

View File

@@ -19,7 +19,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.llms.human import HumanInputLLM"
"from langchain.llms.human import HumanInputLLM"
]
},
{

View File

@@ -21,8 +21,9 @@
"outputs": [],
"source": [
"from langchain.chains import HypotheticalDocumentEmbedder, LLMChain\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain_openai import OpenAI, OpenAIEmbeddings"
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.llms import OpenAI\n",
"from langchain.prompts import PromptTemplate"
]
},
{
@@ -171,7 +172,7 @@
"outputs": [],
"source": [
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain.vectorstores import Chroma\n",
"\n",
"with open(\"../../state_of_the_union.txt\") as f:\n",
" state_of_the_union = f.read()\n",

View File

@@ -49,7 +49,7 @@
"source": [
"# pick and configure the LLM of your choice\n",
"\n",
"from langchain_openai import OpenAI\n",
"from langchain.llms import OpenAI\n",
"\n",
"llm = OpenAI(model=\"gpt-3.5-turbo-instruct\")"
]

View File

@@ -43,8 +43,8 @@
}
],
"source": [
"from langchain.llms import OpenAI\n",
"from langchain_experimental.llm_bash.base import LLMBashChain\n",
"from langchain_openai import OpenAI\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"\n",

View File

@@ -42,7 +42,7 @@
],
"source": [
"from langchain.chains import LLMCheckerChain\n",
"from langchain_openai import OpenAI\n",
"from langchain.llms import OpenAI\n",
"\n",
"llm = OpenAI(temperature=0.7)\n",
"\n",

View File

@@ -46,7 +46,7 @@
],
"source": [
"from langchain.chains import LLMMathChain\n",
"from langchain_openai import OpenAI\n",
"from langchain.llms import OpenAI\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"llm_math = LLMMathChain.from_llm(llm, verbose=True)\n",

View File

@@ -331,7 +331,7 @@
],
"source": [
"from langchain.chains import LLMSummarizationCheckerChain\n",
"from langchain_openai import OpenAI\n",
"from langchain.llms import OpenAI\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"checker_chain = LLMSummarizationCheckerChain.from_llm(llm, verbose=True, max_checks=2)\n",
@@ -822,7 +822,7 @@
],
"source": [
"from langchain.chains import LLMSummarizationCheckerChain\n",
"from langchain_openai import OpenAI\n",
"from langchain.llms import OpenAI\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"checker_chain = LLMSummarizationCheckerChain.from_llm(llm, verbose=True, max_checks=3)\n",
@@ -1096,7 +1096,7 @@
],
"source": [
"from langchain.chains import LLMSummarizationCheckerChain\n",
"from langchain_openai import OpenAI\n",
"from langchain.llms import OpenAI\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"checker_chain = LLMSummarizationCheckerChain.from_llm(llm, max_checks=3, verbose=True)\n",

View File

@@ -14,8 +14,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.llms import OpenAI\n",
"from langchain_experimental.llm_symbolic_math.base import LLMSymbolicMathChain\n",
"from langchain_openai import OpenAI\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"llm_symbolic_math = LLMSymbolicMathChain.from_llm(llm)"

View File

@@ -57,9 +57,9 @@
"outputs": [],
"source": [
"from langchain.chains import LLMChain\n",
"from langchain.llms import OpenAI\n",
"from langchain.memory import ConversationBufferWindowMemory\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain_openai import OpenAI"
"from langchain.prompts import PromptTemplate"
]
},
{

View File

@@ -91,8 +91,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.messages import HumanMessage, SystemMessage\n",
"from langchain_openai import ChatOpenAI"
"from langchain.chat_models import ChatOpenAI\n",
"from langchain_core.messages import HumanMessage, SystemMessage"
]
},
{

View File

@@ -187,7 +187,7 @@
"\n",
"import chromadb\n",
"import numpy as np\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain.vectorstores import Chroma\n",
"from langchain_experimental.open_clip import OpenCLIPEmbeddings\n",
"from PIL import Image as _PILImage\n",
"\n",
@@ -315,10 +315,10 @@
"source": [
"from operator import itemgetter\n",
"\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain_core.messages import HumanMessage, SystemMessage\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnableLambda, RunnablePassthrough\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"\n",
"def prompt_func(data_dict):\n",

View File

@@ -43,8 +43,8 @@
"outputs": [],
"source": [
"from langchain.agents import AgentType, initialize_agent\n",
"from langchain.tools import SteamshipImageGenerationTool\n",
"from langchain_openai import OpenAI"
"from langchain.llms import OpenAI\n",
"from langchain.tools import SteamshipImageGenerationTool"
]
},
{

View File

@@ -28,11 +28,11 @@
"source": [
"from typing import Callable, List\n",
"\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.schema import (\n",
" HumanMessage,\n",
" SystemMessage,\n",
")\n",
"from langchain_openai import ChatOpenAI"
")"
]
},
{

View File

@@ -33,6 +33,7 @@
"from typing import Callable, List\n",
"\n",
"import tenacity\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.output_parsers import RegexParser\n",
"from langchain.prompts import (\n",
" PromptTemplate,\n",
@@ -40,8 +41,7 @@
"from langchain.schema import (\n",
" HumanMessage,\n",
" SystemMessage,\n",
")\n",
"from langchain_openai import ChatOpenAI"
")"
]
},
{

View File

@@ -27,13 +27,13 @@
"from typing import Callable, List\n",
"\n",
"import tenacity\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.output_parsers import RegexParser\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain.schema import (\n",
" HumanMessage,\n",
" SystemMessage,\n",
")\n",
"from langchain_openai import ChatOpenAI"
")"
]
},
{

View File

@@ -31,10 +31,10 @@
"from os import environ\n",
"\n",
"from langchain.chains import LLMChain\n",
"from langchain.llms import OpenAI\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain_community.utilities import SQLDatabase\n",
"from langchain.utilities import SQLDatabase\n",
"from langchain_experimental.sql.vector_sql import VectorSQLDatabaseChain\n",
"from langchain_openai import OpenAI\n",
"from sqlalchemy import MetaData, create_engine\n",
"\n",
"MYSCALE_HOST = \"msc-4a9e710a.us-east-1.aws.staging.myscale.cloud\"\n",
@@ -57,7 +57,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.embeddings import HuggingFaceInstructEmbeddings\n",
"from langchain.embeddings import HuggingFaceInstructEmbeddings\n",
"from langchain_experimental.sql.vector_sql import VectorSQLOutputParser\n",
"\n",
"output_parser = VectorSQLOutputParser.from_embeddings(\n",
@@ -75,10 +75,10 @@
"outputs": [],
"source": [
"from langchain.callbacks import StdOutCallbackHandler\n",
"from langchain_community.utilities.sql_database import SQLDatabase\n",
"from langchain.llms import OpenAI\n",
"from langchain.utilities.sql_database import SQLDatabase\n",
"from langchain_experimental.sql.prompt import MYSCALE_PROMPT\n",
"from langchain_experimental.sql.vector_sql import VectorSQLDatabaseChain\n",
"from langchain_openai import OpenAI\n",
"\n",
"chain = VectorSQLDatabaseChain(\n",
" llm_chain=LLMChain(\n",
@@ -117,6 +117,7 @@
"outputs": [],
"source": [
"from langchain.chains.qa_with_sources.retrieval import RetrievalQAWithSourcesChain\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain_experimental.retrievers.vector_sql_database import (\n",
" VectorSQLDatabaseChainRetriever,\n",
")\n",
@@ -125,7 +126,6 @@
" VectorSQLDatabaseChain,\n",
" VectorSQLRetrieveAllOutputParser,\n",
")\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"output_parser_retrieve_all = VectorSQLRetrieveAllOutputParser.from_embeddings(\n",
" output_parser.model\n",

File diff suppressed because one or more lines are too long

View File

@@ -20,10 +20,10 @@
"outputs": [],
"source": [
"from langchain.chains import RetrievalQA\n",
"from langchain.document_loaders import TextLoader\n",
"from langchain.embeddings.openai import OpenAIEmbeddings\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain_community.document_loaders import TextLoader\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_openai import OpenAIEmbeddings"
"from langchain.vectorstores import Chroma"
]
},
{
@@ -52,8 +52,8 @@
"source": [
"from langchain.chains import create_qa_with_sources_chain\n",
"from langchain.chains.combine_documents.stuff import StuffDocumentsChain\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain_openai import ChatOpenAI"
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import PromptTemplate"
]
},
{
@@ -358,7 +358,7 @@
"\n",
"from langchain.chains.openai_functions import create_qa_with_structure_chain\n",
"from langchain.prompts.chat import ChatPromptTemplate, HumanMessagePromptTemplate\n",
"from langchain_core.messages import HumanMessage, SystemMessage\n",
"from langchain.schema import HumanMessage, SystemMessage\n",
"from pydantic import BaseModel, Field"
]
},

View File

@@ -28,8 +28,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.messages import HumanMessage, SystemMessage\n",
"from langchain_openai import ChatOpenAI"
"from langchain.chat_models import ChatOpenAI\n",
"from langchain_core.messages import HumanMessage, SystemMessage"
]
},
{
@@ -414,7 +414,7 @@
"BREAKING CHANGES:\n",
"- To use Azure embeddings with OpenAI V1, you'll need to use the new `AzureOpenAIEmbeddings` instead of the existing `OpenAIEmbeddings`. `OpenAIEmbeddings` continue to work when using Azure with `openai<1`.\n",
"```python\n",
"from langchain_openai import AzureOpenAIEmbeddings\n",
"from langchain.embeddings import AzureOpenAIEmbeddings\n",
"```\n",
"\n",
"\n",
@@ -456,8 +456,8 @@
"from typing import Literal\n",
"\n",
"from langchain.output_parsers.openai_tools import PydanticToolsParser\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain.utils.openai_functions import convert_pydantic_to_openai_tool\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",

View File

@@ -1,648 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "c7fe38bc",
"metadata": {},
"source": [
"# Optimization\n",
"\n",
"This notebook goes over how to optimize chains using LangChain and [LangSmith](https://smith.langchain.com)."
]
},
{
"cell_type": "markdown",
"id": "2f87ccd5",
"metadata": {},
"source": [
"## Set up\n",
"\n",
"We will set an environment variable for LangSmith, and load the relevant data"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "236bedc5",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"LANGCHAIN_PROJECT\"] = \"movie-qa\""
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "a3fed0dd",
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "7cfff337",
"metadata": {},
"outputs": [],
"source": [
"df = pd.read_csv(\"data/imdb_top_1000.csv\")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "2d20fb9c",
"metadata": {},
"outputs": [],
"source": [
"df[\"Released_Year\"] = df[\"Released_Year\"].astype(int, errors=\"ignore\")"
]
},
{
"cell_type": "markdown",
"id": "09fc8fe2",
"metadata": {},
"source": [
"## Create the initial retrieval chain\n",
"\n",
"We will use a self-query retriever"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "f71e24e2",
"metadata": {},
"outputs": [],
"source": [
"from langchain.schema import Document\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"embeddings = OpenAIEmbeddings()"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "8881ea8e",
"metadata": {},
"outputs": [],
"source": [
"records = df.to_dict(\"records\")\n",
"documents = [Document(page_content=d[\"Overview\"], metadata=d) for d in records]"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "8f495423",
"metadata": {},
"outputs": [],
"source": [
"vectorstore = Chroma.from_documents(documents, embeddings)"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "31d33d62",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains.query_constructor.base import AttributeInfo\n",
"from langchain.retrievers.self_query.base import SelfQueryRetriever\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"metadata_field_info = [\n",
" AttributeInfo(\n",
" name=\"Released_Year\",\n",
" description=\"The year the movie was released\",\n",
" type=\"int\",\n",
" ),\n",
" AttributeInfo(\n",
" name=\"Series_Title\",\n",
" description=\"The title of the movie\",\n",
" type=\"str\",\n",
" ),\n",
" AttributeInfo(\n",
" name=\"Genre\",\n",
" description=\"The genre of the movie\",\n",
" type=\"string\",\n",
" ),\n",
" AttributeInfo(\n",
" name=\"IMDB_Rating\", description=\"A 1-10 rating for the movie\", type=\"float\"\n",
" ),\n",
"]\n",
"document_content_description = \"Brief summary of a movie\"\n",
"llm = ChatOpenAI(temperature=0)\n",
"retriever = SelfQueryRetriever.from_llm(\n",
" llm, vectorstore, document_content_description, metadata_field_info, verbose=True\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "a731533b",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.runnables import RunnablePassthrough"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "05181849",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "feed4be6",
"metadata": {},
"outputs": [],
"source": [
"prompt = ChatPromptTemplate.from_template(\n",
" \"\"\"Answer the user's question based on the below information:\n",
"\n",
"Information:\n",
"\n",
"{info}\n",
"\n",
"Question: {question}\"\"\"\n",
")\n",
"generator = (prompt | ChatOpenAI() | StrOutputParser()).with_config(\n",
" run_name=\"generator\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "eb16cc9a",
"metadata": {},
"outputs": [],
"source": [
"chain = (\n",
" RunnablePassthrough.assign(info=(lambda x: x[\"question\"]) | retriever) | generator\n",
")"
]
},
{
"cell_type": "markdown",
"id": "c70911cc",
"metadata": {},
"source": [
"## Run examples\n",
"\n",
"Run examples through the chain. This can either be manually, or using a list of examples, or production traffic"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "19a88d13",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'One of the horror movies released in the early 2000s is \"The Ring\" (2002), directed by Gore Verbinski.'"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke({\"question\": \"what is a horror movie released in early 2000s\"})"
]
},
{
"cell_type": "markdown",
"id": "17f9cdae",
"metadata": {},
"source": [
"## Annotate\n",
"\n",
"Now, go to LangSmitha and annotate those examples as correct or incorrect"
]
},
{
"cell_type": "markdown",
"id": "5e211da6",
"metadata": {},
"source": [
"## Create Dataset\n",
"\n",
"We can now create a dataset from those runs.\n",
"\n",
"What we will do is find the runs marked as correct, then grab the sub-chains from them. Specifically, the query generator sub chain and the final generation step"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "e4024267",
"metadata": {},
"outputs": [],
"source": [
"from langsmith import Client\n",
"\n",
"client = Client()"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "3814efc5",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"14"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"runs = list(\n",
" client.list_runs(\n",
" project_name=\"movie-qa\",\n",
" execution_order=1,\n",
" filter=\"and(eq(feedback_key, 'correctness'), eq(feedback_score, 1))\",\n",
" )\n",
")\n",
"\n",
"len(runs)"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "3eb123e0",
"metadata": {},
"outputs": [],
"source": [
"gen_runs = []\n",
"query_runs = []\n",
"for r in runs:\n",
" gen_runs.extend(\n",
" list(\n",
" client.list_runs(\n",
" project_name=\"movie-qa\",\n",
" filter=\"eq(name, 'generator')\",\n",
" trace_id=r.trace_id,\n",
" )\n",
" )\n",
" )\n",
" query_runs.extend(\n",
" list(\n",
" client.list_runs(\n",
" project_name=\"movie-qa\",\n",
" filter=\"eq(name, 'query_constructor')\",\n",
" trace_id=r.trace_id,\n",
" )\n",
" )\n",
" )"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "a4397026",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'question': 'what is a high school comedy released in early 2000s'}"
]
},
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"runs[0].inputs"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "3fa6ad2a",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'output': 'One high school comedy released in the early 2000s is \"Mean Girls\" starring Lindsay Lohan, Rachel McAdams, and Tina Fey.'}"
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"runs[0].outputs"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "1fda5b4b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'query': 'what is a high school comedy released in early 2000s'}"
]
},
"execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"query_runs[0].inputs"
]
},
{
"cell_type": "code",
"execution_count": 23,
"id": "1a1a51e6",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'output': {'query': 'high school comedy',\n",
" 'filter': {'operator': 'and',\n",
" 'arguments': [{'comparator': 'eq', 'attribute': 'Genre', 'value': 'comedy'},\n",
" {'operator': 'and',\n",
" 'arguments': [{'comparator': 'gte',\n",
" 'attribute': 'Released_Year',\n",
" 'value': 2000},\n",
" {'comparator': 'lt', 'attribute': 'Released_Year', 'value': 2010}]}]}}}"
]
},
"execution_count": 23,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"query_runs[0].outputs"
]
},
{
"cell_type": "code",
"execution_count": 24,
"id": "e9d9966b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'question': 'what is a high school comedy released in early 2000s',\n",
" 'info': []}"
]
},
"execution_count": 24,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"gen_runs[0].inputs"
]
},
{
"cell_type": "code",
"execution_count": 25,
"id": "bc113f3d",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'output': 'One high school comedy released in the early 2000s is \"Mean Girls\" starring Lindsay Lohan, Rachel McAdams, and Tina Fey.'}"
]
},
"execution_count": 25,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"gen_runs[0].outputs"
]
},
{
"cell_type": "markdown",
"id": "6cca74e5",
"metadata": {},
"source": [
"## Create datasets\n",
"\n",
"We can now create datasets for the query generation and final generation step.\n",
"We do this so that (1) we can inspect the datapoints, (2) we can edit them if needed, (3) we can add to them over time"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "69966f0e",
"metadata": {},
"outputs": [],
"source": [
"client.create_dataset(\"movie-query_constructor\")\n",
"\n",
"inputs = [r.inputs for r in query_runs]\n",
"outputs = [r.outputs for r in query_runs]\n",
"\n",
"client.create_examples(\n",
" inputs=inputs, outputs=outputs, dataset_name=\"movie-query_constructor\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "7e15770e",
"metadata": {},
"outputs": [],
"source": [
"client.create_dataset(\"movie-generator\")\n",
"\n",
"inputs = [r.inputs for r in gen_runs]\n",
"outputs = [r.outputs for r in gen_runs]\n",
"\n",
"client.create_examples(inputs=inputs, outputs=outputs, dataset_name=\"movie-generator\")"
]
},
{
"cell_type": "markdown",
"id": "61cf9bcd",
"metadata": {},
"source": [
"## Use as few shot examples\n",
"\n",
"We can now pull down a dataset and use them as few shot examples in a future chain"
]
},
{
"cell_type": "code",
"execution_count": 26,
"id": "d9c79173",
"metadata": {},
"outputs": [],
"source": [
"examples = list(client.list_examples(dataset_name=\"movie-query_constructor\"))"
]
},
{
"cell_type": "code",
"execution_count": 27,
"id": "a1771dd0",
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"\n",
"\n",
"def filter_to_string(_filter):\n",
" if \"operator\" in _filter:\n",
" args = [filter_to_string(f) for f in _filter[\"arguments\"]]\n",
" return f\"{_filter['operator']}({','.join(args)})\"\n",
" else:\n",
" comparator = _filter[\"comparator\"]\n",
" attribute = json.dumps(_filter[\"attribute\"])\n",
" value = json.dumps(_filter[\"value\"])\n",
" return f\"{comparator}({attribute}, {value})\""
]
},
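{
"cell_type": "markdown",
"id": "f1c5a6b7",
"metadata": {},
"source": [
"As a quick sanity check (added as a sketch), here is `filter_to_string` applied to the filter structure we saw in `query_runs[0].outputs` above."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a2d6b7c8",
"metadata": {},
"outputs": [],
"source": [
"# The nested dict mirrors the filter from query_runs[0].outputs.\n",
"example_filter = {\n",
"    \"operator\": \"and\",\n",
"    \"arguments\": [\n",
"        {\"comparator\": \"eq\", \"attribute\": \"Genre\", \"value\": \"comedy\"},\n",
"        {\n",
"            \"operator\": \"and\",\n",
"            \"arguments\": [\n",
"                {\"comparator\": \"gte\", \"attribute\": \"Released_Year\", \"value\": 2000},\n",
"                {\"comparator\": \"lt\", \"attribute\": \"Released_Year\", \"value\": 2010},\n",
"            ],\n",
"        },\n",
"    ],\n",
"}\n",
"\n",
"filter_to_string(example_filter)\n",
"# -> 'and(eq(\"Genre\", \"comedy\"),and(gte(\"Released_Year\", 2000),lt(\"Released_Year\", 2010)))'"
]
},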
{
"cell_type": "code",
"execution_count": 28,
"id": "e67a3530",
"metadata": {},
"outputs": [],
"source": [
"model_examples = []\n",
"\n",
"for e in examples:\n",
" if \"filter\" in e.outputs[\"output\"]:\n",
" string_filter = filter_to_string(e.outputs[\"output\"][\"filter\"])\n",
" else:\n",
" string_filter = \"NO_FILTER\"\n",
" model_examples.append(\n",
" (\n",
" e.inputs[\"query\"],\n",
" {\"query\": e.outputs[\"output\"][\"query\"], \"filter\": string_filter},\n",
" )\n",
" )"
]
},
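{
"cell_type": "markdown",
"id": "b3e7c8d9",
"metadata": {},
"source": [
"A quick look at the resulting shape (added as a sketch; the exact entry depends on the order `list_examples` returns the examples in)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c4f8d9e0",
"metadata": {},
"outputs": [],
"source": [
"# Each entry is an (input question, {'query': ..., 'filter': ...}) pair, e.g.:\n",
"# ('what is a high school comedy released in early 2000s',\n",
"#  {'query': 'high school comedy',\n",
"#   'filter': 'and(eq(\"Genre\", \"comedy\"),and(gte(\"Released_Year\", 2000),lt(\"Released_Year\", 2010)))'})\n",
"model_examples[0]"
]
},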
{
"cell_type": "code",
"execution_count": 29,
"id": "84593135",
"metadata": {},
"outputs": [],
"source": [
"retriever1 = SelfQueryRetriever.from_llm(\n",
" llm,\n",
" vectorstore,\n",
" document_content_description,\n",
" metadata_field_info,\n",
" verbose=True,\n",
" chain_kwargs={\"examples\": model_examples},\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 30,
"id": "4ec9bb92",
"metadata": {},
"outputs": [],
"source": [
"chain1 = (\n",
" RunnablePassthrough.assign(info=(lambda x: x[\"question\"]) | retriever1) | generator\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 31,
"id": "64eb88e2",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'1. \"Saving Private Ryan\" (1998) - Directed by Steven Spielberg, this war film follows a group of soldiers during World War II as they search for a missing paratrooper.\\n\\n2. \"The Matrix\" (1999) - Directed by the Wachowskis, this science fiction action film follows a computer hacker who discovers the truth about the reality he lives in.\\n\\n3. \"Lethal Weapon 4\" (1998) - Directed by Richard Donner, this action-comedy film follows two mismatched detectives as they investigate a Chinese immigrant smuggling ring.\\n\\n4. \"The Fifth Element\" (1997) - Directed by Luc Besson, this science fiction action film follows a cab driver who must protect a mysterious woman who holds the key to saving the world.\\n\\n5. \"The Rock\" (1996) - Directed by Michael Bay, this action thriller follows a group of rogue military men who take over Alcatraz and threaten to launch missiles at San Francisco.'"
]
},
"execution_count": 31,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain1.invoke(\n",
" {\"question\": \"what are good action movies made before 2000 but after 1997?\"}\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e1ee8b55",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -47,12 +47,12 @@
"import inspect\n",
"\n",
"import tenacity\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.output_parsers import RegexParser\n",
"from langchain.schema import (\n",
" HumanMessage,\n",
" SystemMessage,\n",
")\n",
"from langchain_openai import ChatOpenAI"
")"
]
},
{

View File

@@ -30,14 +30,15 @@
"outputs": [],
"source": [
"from langchain.chains import LLMMathChain\n",
"from langchain_community.utilities import DuckDuckGoSearchAPIWrapper\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.llms import OpenAI\n",
"from langchain.utilities import DuckDuckGoSearchAPIWrapper\n",
"from langchain_core.tools import Tool\n",
"from langchain_experimental.plan_and_execute import (\n",
" PlanAndExecute,\n",
" load_agent_executor,\n",
" load_chat_planner,\n",
")\n",
"from langchain_openai import ChatOpenAI, OpenAI"
")"
]
},
{

View File

@@ -81,8 +81,8 @@
"outputs": [],
"source": [
"from langchain.chains import ConversationalRetrievalChain\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.retrievers import KayAiRetriever\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"model = ChatOpenAI(model_name=\"gpt-3.5-turbo\")\n",
"retriever = KayAiRetriever.create(\n",

View File

@@ -17,8 +17,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_experimental.pal_chain import PALChain\n",
"from langchain_openai import OpenAI"
"from langchain.llms import OpenAI\n",
"from langchain_experimental.pal_chain import PALChain"
]
},
{

View File

@@ -27,7 +27,7 @@
 ],
 "source": [
 "from langchain.chains import create_citation_fuzzy_match_chain\n",
-"from langchain_openai import ChatOpenAI"
+"from langchain.chat_models import ChatOpenAI"
 ]
 },
 {

Some files were not shown because too many files have changed in this diff.