mirror of
https://github.com/hwchase17/langchain.git
synced 2026-02-08 02:00:06 +00:00
Compare commits: rlm/rag_fr ... erick/auto (217 commits)
| Author | SHA1 | Date |
|---|---|---|
| | bfabf99aa3 | |
| | 304f3f5fc1 | |
| | ae33979813 | |
| | e022bfaa7d | |
| | dcf973c22c | |
| | 66e45e8ab7 | |
| | d93de71d08 | |
| | af5ae24af2 | |
| | 75b6fa1134 | |
| | ae56fd020a | |
| | f0ffebb944 | |
| | e7b3290d30 | |
| | 6ffd5b15bc | |
| | 1183769cf7 | |
| | 4eda647fdd | |
| | db6af21395 | |
| | fb245451d2 | |
| | 2145636f1d | |
| | 2ef69fe11b | |
| | b3c3b58f2c | |
| | 54fcd476bb | |
| | bcfce146d8 | |
| | 849051102a | |
| | 35446c814e | |
| | 0826d87ecd | |
| | 267e71606e | |
| | 2b7e47a668 | |
| | afdd636999 | |
| | 06660bc78c | |
| | f8943e8739 | |
| | 66adb95284 | |
| | 71f9ea33b6 | |
| | 2a510c71a0 | |
| | d80c612c92 | |
| | c29e9b6412 | |
| | 3b0fa9079d | |
| | cc17334473 | |
| | bdacfafa05 | |
| | e02efd513f | |
| | 131c043864 | |
| | 6fc2835255 | |
| | a265878d71 | |
| | b1a847366c | |
| | f7c709b40e | |
| | c2ca6612fe | |
| | 13a6756067 | |
| | b1e7130d8a | |
| | 0bca0f4c24 | |
| | 5b3fc86cfd | |
| | c5b01ac621 | |
| | 9d458d089a | |
| | 93366861c7 | |
| | 78a1af4848 | |
| | 7d03d8f586 | |
| | c2d09fb151 | |
| | 65ba5c220b | |
| | 9e7d9f9390 | |
| | db442c635b | |
| | 2b4abed25c | |
| | bb73251146 | |
| | a0ec045495 | |
| | 17e886388b | |
| | 2e5949b6f8 | |
| | af8c5c185b | |
| | c37ca45825 | |
| | 36c0392dbe | |
| | bb3b6bde33 | |
| | bf9068516e | |
| | daf820c77b | |
| | ef2bd745cb | |
| | 881dc28d2c | |
| | b0347f3e2b | |
| | 4acd2654a3 | |
| | a372b23675 | |
| | 442fa52b30 | |
| | c6724a39f4 | |
| | 546b757303 | |
| | a1ce7ab672 | |
| | 744070ee85 | |
| | f8f2649f12 | |
| | 1d082359ee | |
| | 1703fe2361 | |
| | 52f4ad8216 | |
| | 32cad38ec6 | |
| | 4a027e622f | |
| | 12d2b2ebcf | |
| | 85e93e05ed | |
| | 6d6226d96d | |
| | 84ebfb5b9d | |
| | 32c5be8b73 | |
| | 8457c31c04 | |
| | c95facc293 | |
| | a08f9a7ff9 | |
| | 39eb00d304 | |
| | 47bd58dc11 | |
| | a1aa3a657c | |
| | e9d3527b79 | |
| | 7c6a2a8384 | |
| | 7237dc67d4 | |
| | 2db79ab111 | |
| | 1bc8d9a943 | |
| | 8e44363ec9 | |
| | d3d9244fee | |
| | f3fdc5c5da | |
| | 815896ff13 | |
| | 1bfadecdd2 | |
| | 22d90800c8 | |
| | ba70630829 | |
| | ec0ae23645 | |
| | 0866a984fe | |
| | 0600998f38 | |
| | 6ef718c5f4 | |
| | e451c8adc1 | |
| | 2e3af04080 | |
| | bc7607a4e9 | |
| | 0255c5808b | |
| | 88e3129587 | |
| | 36e432672a | |
| | 38425c99d2 | |
| | 3c387bc12d | |
| | 481493dbce | |
| | f01fb47597 | |
| | 508bde7f40 | |
| | 5e73603e8a | |
| | 3e87b67a3c | |
| | c314137f5b | |
| | 27665e3546 | |
| | 5975bf39ec | |
| | 4915c3cd86 | |
| | e86fd946c8 | |
| | 0bc397957b | |
| | 52ccae3fb1 | |
| | 570b4f8e66 | |
| | 4e189cd89a | |
| | a936472512 | |
| | 6543e585a5 | |
| | 6a75ef74ca | |
| | 70ff54eace | |
| | 5b5115c408 | |
| | a989f82027 | |
| | e30c6662df | |
| | 08d3fd7f2e | |
| | 42db96477f | |
| | a79345f199 | |
| | bcc71d1a57 | |
| | 68f7468754 | |
| | 61e876aad8 | |
| | 5df8ab574e | |
| | f3d61a6e47 | |
| | 61b200947f | |
| | 75ad0bba2d | |
| | 1e3ce338ca | |
| | 6c89507988 | |
| | 31790d15ec | |
| | db80832e4f | |
| | ef42d9d559 | |
| | 148347e858 | |
| | 35e60728b7 | |
| | 6023953ea7 | |
| | 3b8eba32f9 | |
| | e86e66bad7 | |
| | e510cfaa23 | |
| | 355ef2a4a6 | |
| | 9dd7cbb447 | |
| | 0785432e7b | |
| | adc008407e | |
| | c4e9c9ca29 | |
| | b8768bd6e7 | |
| | f6a05e964b | |
| | c173a69908 | |
| | f9976b9630 | |
| | 5c2538b9f7 | |
| | a91181fe6d | |
| | 04651f0248 | |
| | 1113700b09 | |
| | 54dd8e52a8 | |
| | fe382fcf20 | |
| | 06f66f25e1 | |
| | b1b351b37e | |
| | 4fad71882e | |
| | ce595f0203 | |
| | fdbfa6b2c8 | |
| | 643fb3ab50 | |
| | 8d990ba67b | |
| | 63da14d620 | |
| | 8d299645f9 | |
| | dfd94fb2f0 | |
| | 0b740ebd49 | |
| | 13cf4594f4 | |
| | 6004e9706f | |
| | 66aafc0573 | |
| | 9e95699277 | |
| | b3ed98dec0 | |
| | 3f38e1a457 | |
| | 61da2ff24c | |
| | d628a80a5d | |
| | 4c7755778d | |
| | 2b2285dac0 | |
| | 476bf8b763 | |
| | 019b6ebe8d | |
| | 80fcc50c65 | |
| | 5c6e123757 | |
| | 0e2e7d8b83 | |
| | d898d2f07b | |
| | ff3163297b | |
| | 4ec3fe4680 | |
| | 4e160540ff | |
| | c69f599594 | |
| | 95ee69a301 | |
| | e135e5257c | |
| | 90f5a1c40e | |
| | 92e6a641fd | |
| | 9ce177580a | |
| | 20fcd49348 | |
| | cfc225ecb3 | |
| | 26b2ad6d5b | |
| | e529939c54 | |
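If you have a local clone of this mirror, the same commit range can be listed directly. A minimal sketch; the branch names are taken from the compare header above, and the `git log` invocation is ours:

```python
import subprocess

# List the commits between the two branches shown in the compare header.
out = subprocess.run(
    ["git", "log", "--oneline", "rlm/rag_fr...erick/auto"],
    capture_output=True, text=True, check=False,
)
print(out.stdout)
```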
2 .github/CONTRIBUTING.md (vendored)

@@ -13,7 +13,7 @@ There are many ways to contribute to LangChain. Here are some common ways people
- [**Documentation**](https://python.langchain.com/docs/contributing/documentation): Help improve our docs, including this one!
- [**Code**](https://python.langchain.com/docs/contributing/code): Help us write code, fix bugs, or improve our infrastructure.
- [**Integrations**](https://python.langchain.com/docs/contributing/integration): Help us integrate with your favorite vendors and tools.
- [**Integrations**](https://python.langchain.com/docs/contributing/integrations): Help us integrate with your favorite vendors and tools.

### 🚩GitHub Issues
@@ -1,7 +1,17 @@
name: "\U0001F680 Feature request"
description: Submit a proposal/request for a new LangChain feature
labels: ["02 Feature Request"]
labels: [idea]
body:
- type: checkboxes
id: checks
attributes:
label: Checked
description: Please confirm and check all the following options.
options:
- label: I searched existing ideas and did not find a similar one
required: true
- label: I added a very descriptive title
required: true
- label: I've clearly described the feature request and motivation for it
required: true
- type: textarea
id: feature-request
validations:
@@ -10,7 +20,6 @@ body:
label: Feature request
description: |
A clear and concise description of the feature proposal. Please provide links to any relevant GitHub repos, papers, or other resources if relevant.

- type: textarea
id: motivation
validations:
@@ -19,12 +28,11 @@ body:
label: Motivation
description: |
Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too.

- type: textarea
id: contribution
id: proposal
validations:
required: true
required: false
attributes:
label: Your contribution
label: Proposal (If applicable)
description: |
Is there any way that you could help, e.g. by submitting a PR? Make sure to read the [Contributing Guide](https://python.langchain.com/docs/contributing/)
If you would like to propose a solution, please describe it here.
122 .github/DISCUSSION_TEMPLATE/q-a.yml (vendored, new file)

@@ -0,0 +1,122 @@
labels: [Question]
body:
- type: markdown
attributes:
value: |
Thanks for your interest in 🦜️🔗 LangChain!

Please follow these instructions, fill every question, and do every step. 🙏

We're asking for this because answering questions and solving problems in GitHub takes a lot of time --
this is time that we cannot spend on adding new features, fixing bugs, writing documentation or reviewing pull requests.

By asking questions in a structured way (following this) it will be much easier to help you.

And there's a high chance that you will find the solution along the way and you won't even have to submit it and wait for an answer. 😎

As there are too many questions, we will **DISCARD** and close the incomplete ones.

That will allow us (and others) to focus on helping people like you that follow the whole process. 🤓

Relevant links to check before opening a question to see if your question has already been answered, fixed or
if there's another way to solve your problem:

[LangChain documentation with the integrated search](https://python.langchain.com/docs/get_started/introduction),
[API Reference](https://api.python.langchain.com/en/stable/),
[GitHub search](https://github.com/langchain-ai/langchain),
[LangChain Github Discussions](https://github.com/langchain-ai/langchain/discussions),
[LangChain Github Issues](https://github.com/langchain-ai/langchain/issues?q=is%3Aissue),
[LangChain ChatBot](https://chat.langchain.com/)
- type: checkboxes
id: checks
attributes:
label: Checked other resources
description: Please confirm and check all the following options.
options:
- label: I added a very descriptive title to this question.
required: true
- label: I searched the LangChain documentation with the integrated search.
required: true
- label: I used the GitHub search to find a similar question and didn't find it.
required: true
- type: checkboxes
id: help
attributes:
label: Commit to Help
description: |
After submitting this, I commit to one of:

* Read open questions until I find 2 where I can help someone and add a comment to help there.
* I already hit the "watch" button in this repository to receive notifications and I commit to help at least 2 people that ask questions in the future.
* Once my question is answered, I will mark the answer as "accepted".
options:
- label: I commit to help with one of those options 👆
required: true
- type: textarea
id: example
attributes:
label: Example Code
description: |
Please add a self-contained, [minimal, reproducible, example](https://stackoverflow.com/help/minimal-reproducible-example) with your use case.

If a maintainer can copy it, run it, and see it right away, there's a much higher chance that you'll be able to get help.

**Important!**

* Use code tags (e.g., ```python ... ```) to correctly [format your code](https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting).
* INCLUDE the language label (e.g. `python`) after the first three backticks to enable syntax highlighting. (e.g., ```python rather than ```).
* Reduce your code to the minimum required to reproduce the issue if possible. This makes it much easier for others to help you.
* Avoid screenshots when possible, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.

placeholder: |
from langchain_core.runnables import RunnableLambda

def bad_code(inputs) -> int:
    raise NotImplementedError('For demo purpose')

chain = RunnableLambda(bad_code)
chain.invoke('Hello!')
render: python
validations:
required: true
- type: textarea
id: description
attributes:
label: Description
description: |
What is the problem, question, or error?

Write a short description explaining what you are doing, what you expect to happen, and what is currently happening.
placeholder: |
* I'm trying to use the `langchain` library to do X.
* I expect to see Y.
* Instead, it does Z.
validations:
required: true
- type: textarea
id: system-info
attributes:
label: System Info
description: |
Please share your system info with us.

"pip freeze | grep langchain"
platform (windows / linux / mac)
python version

OR if you're on a recent version of langchain-core you can paste the output of:

python -m langchain_core.sys_info
placeholder: |
"pip freeze | grep langchain"
platform
python version

Alternatively, if you're on a recent version of langchain-core you can paste the output of:

python -m langchain_core.sys_info

These will only surface LangChain packages, don't forget to include any other relevant
packages you're using (if you're not sure what's relevant, you can paste the entire output of `pip freeze`).
validations:
required: true
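For reference, the `python -m langchain_core.sys_info` command the template asks for can also be captured programmatically. A minimal sketch, assuming a langchain-core version recent enough to ship that module (as the template itself notes):

```python
import subprocess
import sys

# Run the same command the template asks for and capture its output.
result = subprocess.run(
    [sys.executable, "-m", "langchain_core.sys_info"],
    capture_output=True,
    text=True,
)
print(result.stdout)
```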
69 .github/ISSUE_TEMPLATE/bug-report.yml (vendored)

@@ -1,5 +1,5 @@
name: "\U0001F41B Bug Report"
description: Submit a bug report to help us improve LangChain. To report a security issue, please instead use the security option below.
description: Report a bug in LangChain. To report a security issue, please instead use the security option below. For questions, please use the GitHub Discussions.
labels: ["02 Bug Report"]
body:
- type: markdown
@@ -7,6 +7,11 @@ body:
value: >
Thank you for taking the time to file a bug report.

Use this to report bugs in LangChain.

If you're not certain that your issue is due to a bug in LangChain, please use [GitHub Discussions](https://github.com/langchain-ai/langchain/discussions)
to ask for help with your issue.

Relevant links to check before filing a bug report to see if your issue has already been reported, fixed or
if there's another way to solve your problem:

@@ -14,7 +19,8 @@ body:
[API Reference](https://api.python.langchain.com/en/stable/),
[GitHub search](https://github.com/langchain-ai/langchain),
[LangChain Github Discussions](https://github.com/langchain-ai/langchain/discussions),
[LangChain Github Issues](https://github.com/langchain-ai/langchain/issues?q=is%3Aissue)
[LangChain Github Issues](https://github.com/langchain-ai/langchain/issues?q=is%3Aissue),
[LangChain ChatBot](https://chat.langchain.com/)
- type: checkboxes
id: checks
attributes:
@@ -27,6 +33,8 @@ body:
required: true
- label: I used the GitHub search to find a similar question and didn't find it.
required: true
- label: I am sure that this is a bug in LangChain rather than my code.
required: true
- type: textarea
id: reproduction
validations:
@@ -38,10 +46,12 @@ body:

If a maintainer can copy it, run it, and see it right away, there's a much higher chance that you'll be able to get help.

If you're including an error message, please include the full stack trace not just the last error.
**Important!**

**Important!** Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Avoid screenshots when possible, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.
* Use code tags (e.g., ```python ... ```) to correctly [format your code](https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting).
* INCLUDE the language label (e.g. `python`) after the first three backticks to enable syntax highlighting. (e.g., ```python rather than ```).
* Reduce your code to the minimum required to reproduce the issue if possible. This makes it much easier for others to help you.
* Avoid screenshots when possible, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.

placeholder: |
The following code:
@@ -55,9 +65,16 @@ body:
chain = RunnableLambda(bad_code)
chain.invoke('Hello!')
```

Include both the error and the full stack trace if reporting an exception!

- type: textarea
id: error
validations:
required: false
attributes:
label: Error Message and Stack Trace (if applicable)
description: |
If you are reporting an error, please include the full error message and stack trace.
placeholder: |
Exception + full stack trace
- type: textarea
id: description
attributes:
@@ -76,28 +93,26 @@ body:
id: system-info
attributes:
label: System Info
description: Please share your system info with us.
description: |
Please share your system info with us.

"pip freeze | grep langchain"
platform (windows / linux / mac)
python version

OR if you're on a recent version of langchain-core you can paste the output of:

python -m langchain_core.sys_info
placeholder: |
"pip freeze | grep langchain"
platform
python version

Alternatively, if you're on a recent version of langchain-core you can paste the output of:

python -m langchain_core.sys_info

These will only surface LangChain packages, don't forget to include any other relevant
packages you're using (if you're not sure what's relevant, you can paste the entire output of `pip freeze`).
validations:
required: true
- type: checkboxes
id: related-components
attributes:
label: Related Components
description: "Select the components related to the issue (if applicable):"
options:
- label: "LLMs/Chat Models"
- label: "Embedding Models"
- label: "Prompts / Prompt Templates / Prompt Selectors"
- label: "Output Parsers"
- label: "Document Loaders"
- label: "Vector Stores / Retrievers"
- label: "Memory"
- label: "Agents / Agent Executors"
- label: "Tools / Toolkits"
- label: "Chains"
- label: "Callbacks/Tracing"
- label: "Async"
3 .github/ISSUE_TEMPLATE/config.yml (vendored)

@@ -7,6 +7,9 @@ contact_links:
- name: Discord
url: https://discord.gg/6adMQxSpJS
about: General community discussions
- name: Feature Request
url: https://www.github.com/langchain-ai/langchain/discussions/categories/ideas
about: Suggest a feature or an idea
- name: Show and tell
about: Show what you built with LangChain
url: https://www.github.com/langchain-ai/langchain/discussions/categories/show-and-tell
8 .github/scripts/check_diff.py (vendored)

@@ -36,13 +36,7 @@ if __name__ == "__main__":
elif "libs/partners" in file:
partner_dir = file.split("/")[2]
if os.path.isdir(f"libs/partners/{partner_dir}"):
dirs_to_run.update(
(
f"libs/partners/{partner_dir}",
"libs/langchain",
"libs/experimental",
)
)
dirs_to_run.add(f"libs/partners/{partner_dir}")
# Skip if the directory was deleted
elif "libs/langchain" in file:
dirs_to_run.update(("libs/langchain", "libs/experimental"))
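A condensed restatement of the new `check_diff.py` behavior above, factored into a function for illustration (the helper name and the example path are ours): a change under `libs/partners/<pkg>` now queues CI only for that partner package, instead of also pulling in `libs/langchain` and `libs/experimental`.

```python
import os

def dirs_for_changed_file(file: str) -> set:
    dirs_to_run = set()
    if "libs/partners" in file:
        # "libs/partners/openai/pyproject.toml" -> "openai"
        partner_dir = file.split("/")[2]
        if os.path.isdir(f"libs/partners/{partner_dir}"):
            dirs_to_run.add(f"libs/partners/{partner_dir}")
        # A deleted partner directory is skipped entirely.
    elif "libs/langchain" in file:
        dirs_to_run.update(("libs/langchain", "libs/experimental"))
    return dirs_to_run

# Prints an empty set unless run from inside a langchain checkout,
# since the isdir() guard checks the real repository layout.
print(dirs_for_changed_file("libs/partners/openai/pyproject.toml"))
```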
6 .github/workflows/_all_ci.yml (vendored)

@@ -36,30 +36,35 @@ env:

jobs:
lint:
name: "-"
uses: ./.github/workflows/_lint.yml
with:
working-directory: ${{ inputs.working-directory }}
secrets: inherit

test:
name: "-"
uses: ./.github/workflows/_test.yml
with:
working-directory: ${{ inputs.working-directory }}
secrets: inherit

compile-integration-tests:
name: "-"
uses: ./.github/workflows/_compile_integration_test.yml
with:
working-directory: ${{ inputs.working-directory }}
secrets: inherit

dependencies:
name: "-"
uses: ./.github/workflows/_dependencies.yml
with:
working-directory: ${{ inputs.working-directory }}
secrets: inherit

extended-tests:
name: "make extended_tests #${{ matrix.python-version }}"
runs-on: ubuntu-latest
strategy:
matrix:
@@ -68,7 +73,6 @@ jobs:
- "3.9"
- "3.10"
- "3.11"
name: Python ${{ matrix.python-version }} extended tests
defaults:
run:
working-directory: ${{ inputs.working-directory }}

@@ -24,7 +24,7 @@ jobs:
- "3.9"
- "3.10"
- "3.11"
name: Python ${{ matrix.python-version }}
name: "poetry run pytest -m compile tests/integration_tests #${{ matrix.python-version }}"
steps:
- uses: actions/checkout@v4
2 .github/workflows/_dependencies.yml (vendored)

@@ -28,7 +28,7 @@ jobs:
- "3.9"
- "3.10"
- "3.11"
name: dependencies - Python ${{ matrix.python-version }}
name: dependency checks ${{ matrix.python-version }}
steps:
- uses: actions/checkout@v4
2 .github/workflows/_integration_test.yml (vendored)

@@ -55,6 +55,8 @@ jobs:
NVIDIA_API_KEY: ${{ secrets.NVIDIA_API_KEY }}
GOOGLE_SEARCH_API_KEY: ${{ secrets.GOOGLE_SEARCH_API_KEY }}
GOOGLE_CSE_ID: ${{ secrets.GOOGLE_CSE_ID }}
EXA_API_KEY: ${{ secrets.EXA_API_KEY }}
NOMIC_API_KEY: ${{ secrets.NOMIC_API_KEY }}
run: |
make integration_tests
5 .github/workflows/_lint.yml (vendored)

@@ -21,6 +21,7 @@ env:

jobs:
build:
name: "make lint #${{ matrix.python-version }}"
runs-on: ubuntu-latest
strategy:
matrix:
@@ -85,7 +86,7 @@ jobs:
with:
path: |
${{ env.WORKDIR }}/.mypy_cache
key: mypy-lint-${{ runner.os }}-${{ runner.arch }}-py${{ matrix.python-version }}-${{ inputs.working-directory }}-${{ hashFiles(format('{0}/poetry.lock', env.WORKDIR)) }}
key: mypy-lint-${{ runner.os }}-${{ runner.arch }}-py${{ matrix.python-version }}-${{ inputs.working-directory }}-${{ hashFiles(format('{0}/poetry.lock', inputs.working-directory)) }}

- name: Analysing the code with our lint
@@ -113,7 +114,7 @@ jobs:
with:
path: |
${{ env.WORKDIR }}/.mypy_cache_test
key: mypy-test-${{ runner.os }}-${{ runner.arch }}-py${{ matrix.python-version }}-${{ inputs.working-directory }}-${{ hashFiles(format('{0}/poetry.lock', env.WORKDIR)) }}
key: mypy-test-${{ runner.os }}-${{ runner.arch }}-py${{ matrix.python-version }}-${{ inputs.working-directory }}-${{ hashFiles(format('{0}/poetry.lock', inputs.working-directory)) }}

- name: Analysing the code with our lint
working-directory: ${{ inputs.working-directory }}
2 .github/workflows/_release.yml (vendored)

@@ -174,6 +174,8 @@ jobs:
NVIDIA_API_KEY: ${{ secrets.NVIDIA_API_KEY }}
GOOGLE_SEARCH_API_KEY: ${{ secrets.GOOGLE_SEARCH_API_KEY }}
GOOGLE_CSE_ID: ${{ secrets.GOOGLE_CSE_ID }}
EXA_API_KEY: ${{ secrets.EXA_API_KEY }}
NOMIC_API_KEY: ${{ secrets.NOMIC_API_KEY }}
run: make integration_tests
working-directory: ${{ inputs.working-directory }}
2 .github/workflows/_test.yml (vendored)

@@ -28,7 +28,7 @@ jobs:
- "3.9"
- "3.10"
- "3.11"
name: Python ${{ matrix.python-version }}
name: "make test #${{ matrix.python-version }}"
steps:
- uses: actions/checkout@v4
3 .github/workflows/check_diffs.yml (vendored)

@@ -1,5 +1,5 @@
---
name: Check library diffs
name: CI

on:
push:
@@ -32,6 +32,7 @@ jobs:
outputs:
dirs-to-run: ${{ steps.set-matrix.outputs.dirs-to-run }}
ci:
name: cd ${{ matrix.working-directory }}
needs: [ build ]
strategy:
matrix:
4 .github/workflows/codespell.yml (vendored)

@@ -1,5 +1,5 @@
---
name: Codespell
name: CI / cd . / make spell_check

on:
push:
@@ -12,7 +12,7 @@ permissions:

jobs:
codespell:
name: Check for spelling errors
name: (Check for spelling errors)
runs-on: ubuntu-latest

steps:
4 .github/workflows/doc_lint.yml (vendored)

@@ -1,5 +1,5 @@
---
name: Docs, templates, cookbook lint
name: CI / cd .

on:
push:
@@ -15,6 +15,7 @@ on:

jobs:
check:
name: Check for "from langchain import x" imports
runs-on: ubuntu-latest

steps:
@@ -28,6 +29,7 @@ jobs:
git grep 'from langchain import' {docs/docs,templates,cookbook} | grep -vE 'from langchain import (hub)' && exit 1 || exit 0

lint:
name: "-"
uses:
./.github/workflows/_lint.yml
with:
13 .github/workflows/langchain_cli_release.yml (vendored)

@@ -1,13 +0,0 @@
---
name: libs/cli Release

on:
workflow_dispatch: # Allows to trigger the workflow manually in GitHub UI

jobs:
release:
uses:
./.github/workflows/_release.yml
with:
working-directory: libs/cli
secrets: inherit

@@ -1,13 +0,0 @@
---
name: libs/community Release

on:
workflow_dispatch: # Allows to trigger the workflow manually in GitHub UI

jobs:
release:
uses:
./.github/workflows/_release.yml
with:
working-directory: libs/community
secrets: inherit
13 .github/workflows/langchain_core_release.yml (vendored)

@@ -1,13 +0,0 @@
---
name: libs/core Release

on:
workflow_dispatch: # Allows to trigger the workflow manually in GitHub UI

jobs:
release:
uses:
./.github/workflows/_release.yml
with:
working-directory: libs/core
secrets: inherit

@@ -1,13 +0,0 @@
---
name: libs/experimental Release

on:
workflow_dispatch: # Allows to trigger the workflow manually in GitHub UI

jobs:
release:
uses:
./.github/workflows/_release.yml
with:
working-directory: libs/experimental
secrets: inherit

@@ -1,13 +0,0 @@
---
name: Experimental Test Release

on:
workflow_dispatch: # Allows to trigger the workflow manually in GitHub UI

jobs:
release:
uses:
./.github/workflows/_test_release.yml
with:
working-directory: libs/experimental
secrets: inherit
13 .github/workflows/langchain_openai_release.yml (vendored)

@@ -1,13 +0,0 @@
---
name: libs/core Release

on:
workflow_dispatch: # Allows to trigger the workflow manually in GitHub UI

jobs:
release:
uses:
./.github/workflows/_release.yml
with:
working-directory: libs/core
secrets: inherit
27 .github/workflows/langchain_release.yml (vendored)

@@ -1,27 +0,0 @@
---
name: libs/langchain Release

on:
workflow_dispatch: # Allows to trigger the workflow manually in GitHub UI

jobs:
release:
uses:
./.github/workflows/_release.yml
with:
working-directory: libs/langchain
secrets: inherit

# N.B.: It's possible that PyPI doesn't make the new release visible / available
# immediately after publishing. If that happens, the docker build might not
# create a new docker image for the new release, since it won't see it.
#
# If this ends up being a problem, add a check to the end of the `_release.yml`
# workflow that prevents the workflow from finishing until the new release
# is visible and installable on PyPI.
release-docker:
needs:
- release
uses:
./.github/workflows/langchain_release_docker.yml
secrets: inherit
13 .github/workflows/langchain_test_release.yml (vendored)

@@ -1,13 +0,0 @@
---
name: Test Release

on:
workflow_dispatch: # Allows to trigger the workflow manually in GitHub UI

jobs:
release:
uses:
./.github/workflows/_test_release.yml
with:
working-directory: libs/langchain
secrets: inherit
36 .github/workflows/templates_ci.yml (vendored)

@@ -1,36 +0,0 @@
---
name: templates CI

on:
push:
branches: [ master ]
pull_request:
paths:
- '.github/actions/poetry_setup/action.yml'
- '.github/tools/**'
- '.github/workflows/_lint.yml'
- '.github/workflows/templates_ci.yml'
- 'templates/**'
workflow_dispatch: # Allows to trigger the workflow manually in GitHub UI

# If another push to the same PR or branch happens while this workflow is still running,
# cancel the earlier run in favor of the next run.
#
# There's no point in testing an outdated version of the code. GitHub only allows
# a limited number of job runners to be active at the same time, so it's better to cancel
# pointless jobs early so that more useful jobs can run sooner.
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true

env:
POETRY_VERSION: "1.7.1"
WORKDIR: "templates"

jobs:
lint:
uses:
./.github/workflows/_lint.yml
with:
working-directory: templates
secrets: inherit
@@ -4,6 +4,9 @@
# Required
version: 2

formats:
- pdf

# Set the version of Python and other tools you might need
build:
os: ubuntu-22.04
@@ -1,6 +1,6 @@
# 🦜️🔗 LangChain

⚡ Building applications with LLMs through composability ⚡
⚡ Build context-aware reasoning applications ⚡

[](https://github.com/langchain-ai/langchain/releases)
[](https://github.com/langchain-ai/langchain/actions/workflows/check_diffs.yml)
350 cookbook/nomic_embedding_rag.ipynb (new file)
File diff suppressed because one or more lines are too long

@@ -1,464 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "41ce62a8-251f-4f9e-b375-e93a5861c3fe",
"metadata": {},
"source": [
"## Environment\n",
"\n",
"`(1) Packages`"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e4bef162-17e2-4a27-9a97-1f63dcf9726a",
"metadata": {},
"outputs": [],
"source": [
"! pip install langchain_community tiktoken langchain-openai langchainhub chromadb"
]
},
{
"cell_type": "markdown",
"id": "75a8ab66-8477-429f-bbbe-ba439322d085",
"metadata": {},
"source": [
"`(2) LangSmith`\n",
"\n",
"https://docs.smith.langchain.com/"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b76f68a8-4745-4377-8057-6090b87377d1",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ['LANGCHAIN_TRACING_V2'] = 'true'\n",
"os.environ['LANGCHAIN_ENDPOINT'] = 'https://api.smith.langchain.com'\n",
"os.environ['LANGCHAIN_API_KEY'] = '<your-api-key>'"
]
},
{
"cell_type": "markdown",
"id": "1eae0ab7-d43b-43e0-8b99-6122a636fe0c",
"metadata": {},
"source": [
"## Part 1: Overview\n",
" \n",
"[RAG quickstart](https://python.langchain.com/docs/use_cases/question_answering/quickstart)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "98070313-0c2f-4ba6-ae3e-79e2418ce4df",
"metadata": {},
"outputs": [],
"source": [
"import bs4\n",
"from langchain import hub\n",
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"from langchain_community.document_loaders import WebBaseLoader\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain_openai import ChatOpenAI, OpenAIEmbeddings\n",
"\n",
"#### INDEXING ####\n",
"\n",
"# Load Documents\n",
"loader = WebBaseLoader(\n",
"    web_paths=(\"https://lilianweng.github.io/posts/2023-06-23-agent/\",),\n",
"    bs_kwargs=dict(\n",
"        parse_only=bs4.SoupStrainer(\n",
"            class_=(\"post-content\", \"post-title\", \"post-header\")\n",
"        )\n",
"    ),\n",
")\n",
"docs = loader.load()\n",
"\n",
"# Split\n",
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)\n",
"splits = text_splitter.split_documents(docs)\n",
"\n",
"# Embed\n",
"vectorstore = Chroma.from_documents(documents=splits,\n",
"    embedding=OpenAIEmbeddings())\n",
"\n",
"retriever = vectorstore.as_retriever()\n",
"\n",
"#### RETRIEVAL and GENERATION ####\n",
"\n",
"# Prompt\n",
"prompt = hub.pull(\"rlm/rag-prompt\")\n",
"\n",
"# LLM\n",
"llm = ChatOpenAI(model_name=\"gpt-3.5-turbo\", temperature=0)\n",
"\n",
"# Post-processing\n",
"def format_docs(docs):\n",
"    return \"\\n\\n\".join(doc.page_content for doc in docs)\n",
"\n",
"# Chain\n",
"rag_chain = (\n",
"    {\"context\": retriever | format_docs, \"question\": RunnablePassthrough()}\n",
"    | prompt\n",
"    | llm\n",
"    | StrOutputParser()\n",
")\n",
"\n",
"# Question\n",
"rag_chain.invoke(\"What is Task Decomposition?\")"
]
},
{
"cell_type": "markdown",
"id": "18e8e856-bafd-469e-b99a-11596b18aad4",
"metadata": {},
"source": [
"## Part 2: Indexing"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "edd7beeb-21fa-4f4b-b8fa-5a4f70489a16",
"metadata": {},
"outputs": [],
"source": [
"# Documents\n",
"question = \"What kinds of pets do I like?\"\n",
"document = \"My favorite pet is a cat.\""
]
},
{
"cell_type": "markdown",
"id": "e0552ea4-935d-4dfa-bd2b-56d148e96304",
"metadata": {},
"source": [
"[Count tokens](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb) considering [~4 char / token](https://help.openai.com/en/articles/4936856-what-are-tokens-and-how-to-count-them)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "df119cca-1676-4caa-bad4-11805d69e616",
"metadata": {},
"outputs": [],
"source": [
"import tiktoken\n",
"\n",
"def num_tokens_from_string(string: str, encoding_name: str) -> int:\n",
"    \"\"\"Returns the number of tokens in a text string.\"\"\"\n",
"    encoding = tiktoken.get_encoding(encoding_name)\n",
"    num_tokens = len(encoding.encode(string))\n",
"    return num_tokens\n",
"\n",
"num_tokens_from_string(question, \"cl100k_base\")"
]
},
{
"cell_type": "markdown",
"id": "4f04fd74-829f-472c-a1bc-ec6521a0529f",
"metadata": {},
"source": [
"[Text embedding models](https://python.langchain.com/docs/integrations/text_embedding/openai)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6bd98786-755d-4d49-ba97-30c5a623b74e",
"metadata": {},
"outputs": [],
"source": [
"from langchain_openai import OpenAIEmbeddings\n",
"embd = OpenAIEmbeddings()\n",
"query_result = embd.embed_query(question)\n",
"document_result = embd.embed_query(document)\n",
"len(query_result)"
]
},
{
"cell_type": "markdown",
"id": "f5e0e35f-6861-4c5e-9301-04fd5408f8f8",
"metadata": {},
"source": [
"[Cosine similarity](https://platform.openai.com/docs/guides/embeddings/frequently-asked-questions) is recommended (1 indicates identical)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b8001998-b08c-4560-b124-bfa1fced8958",
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"def cosine_similarity(vec1, vec2):\n",
"    dot_product = np.dot(vec1, vec2)\n",
"    norm_vec1 = np.linalg.norm(vec1)\n",
"    norm_vec2 = np.linalg.norm(vec2)\n",
"    return dot_product / (norm_vec1 * norm_vec2)\n",
"\n",
"similarity = cosine_similarity(query_result, document_result)\n",
"print(\"Cosine Similarity:\", similarity)"
]
},
{
"cell_type": "markdown",
"id": "8aea73bc-98e3-4fdc-ba72-d190736bed20",
"metadata": {},
"source": [
"[Document Loaders](https://python.langchain.com/docs/integrations/document_loaders/)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5778c31a-6138-4130-8865-31a08e82b9fb",
"metadata": {},
"outputs": [],
"source": [
"#### INDEXING ####\n",
"\n",
"# Load blog\n",
"import bs4\n",
"from langchain_community.document_loaders import WebBaseLoader\n",
"loader = WebBaseLoader(\n",
"    web_paths=(\"https://lilianweng.github.io/posts/2023-06-23-agent/\",),\n",
"    bs_kwargs=dict(\n",
"        parse_only=bs4.SoupStrainer(\n",
"            class_=(\"post-content\", \"post-title\", \"post-header\")\n",
"        )\n",
"    ),\n",
")\n",
"blog_docs = loader.load()"
]
},
{
"cell_type": "markdown",
"id": "798e731e-c6ff-46e3-a8bc-386832362af2",
"metadata": {},
"source": [
"[Splitter](https://python.langchain.com/docs/modules/data_connection/document_transformers/recursive_text_splitter)\n",
"\n",
"> This text splitter is the recommended one for generic text. It is parameterized by a list of characters. It tries to split on them in order until the chunks are small enough. The default list is [\"\\n\\n\", \"\\n\", \" \", \"\"]. This has the effect of trying to keep all paragraphs (and then sentences, and then words) together as long as possible, as those would generically seem to be the strongest semantically related pieces of text."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e668d339-3951-4662-8387-c3d296646906",
"metadata": {},
"outputs": [],
"source": [
"# Split\n",
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"text_splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(\n",
"    chunk_size=300,\n",
"    chunk_overlap=50)\n",
"\n",
"# Make splits\n",
"splits = text_splitter.split_documents(blog_docs)"
]
},
{
"cell_type": "markdown",
"id": "427303a1-3ed4-430c-bfc7-cb3e48022f1d",
"metadata": {},
"source": [
"[Vectorstores](https://python.langchain.com/docs/integrations/vectorstores/)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "baa90aaf-cc1b-46a1-9fba-cf20804dcb41",
"metadata": {},
"outputs": [],
"source": [
"# Index\n",
"from langchain_openai import OpenAIEmbeddings\n",
"from langchain_community.vectorstores import Chroma\n",
"vectorstore = Chroma.from_documents(documents=splits,\n",
"    embedding=OpenAIEmbeddings())\n",
"\n",
"retriever = vectorstore.as_retriever()"
]
},
{
"cell_type": "markdown",
"id": "ba890329-1411-4922-bd27-fe0490dd1208",
"metadata": {},
"source": [
"## Part 3: Retrieval"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "57c2de7a-93e6-4072-bc5b-db6516f96dda",
"metadata": {},
"outputs": [],
"source": [
"docs = retriever.get_relevant_documents(\"What is Task Decomposition?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0582e6bd-8e97-4cf4-80b9-a04434ea7135",
"metadata": {},
"outputs": [],
"source": [
"len(docs)"
]
},
{
"cell_type": "markdown",
"id": "beda1b07-7bd2-4f5b-8d44-1fc52f5d2ce2",
"metadata": {},
"source": [
"## Part 4: Generation"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8beb6c14-5e18-43e7-9d04-59e3b8a81cc9",
"metadata": {},
"outputs": [],
"source": [
"from langchain_openai import ChatOpenAI\n",
"from langchain.prompts import ChatPromptTemplate\n",
"\n",
"# Prompt\n",
"template = \"\"\"Answer the question based only on the following context:\n",
"{context}\n",
"\n",
"Question: {question}\n",
"\"\"\"\n",
"\n",
"prompt = ChatPromptTemplate.from_template(template)\n",
"prompt"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e4461264-5cac-479a-917c-9bf589826da4",
"metadata": {},
"outputs": [],
"source": [
"# LLM\n",
"llm = ChatOpenAI(model_name=\"gpt-3.5-turbo\", temperature=0)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "55d6629f-18ec-4372-a557-b254fbb1dd2d",
"metadata": {},
"outputs": [],
"source": [
"# Chain\n",
"chain = prompt | llm"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "94470770-8df4-4359-9504-ef6c8b3137ff",
"metadata": {},
"outputs": [],
"source": [
"# Run\n",
"chain.invoke({\"context\": docs, \"question\": \"What is Task Decomposition?\"})"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "65770e2d-3d5e-4371-abc9-0aeca9646885",
"metadata": {},
"outputs": [],
"source": [
"from langchain import hub\n",
"prompt_hub_rag = hub.pull(\"rlm/rag-prompt\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f53e5840-0a0f-4428-a4a4-6922800aff89",
"metadata": {},
"outputs": [],
"source": [
"prompt_hub_rag"
]
},
{
"cell_type": "markdown",
"id": "8ffe29a1-5527-419e-9f12-8a3061d12885",
"metadata": {},
"source": [
"[RAG chains](https://python.langchain.com/docs/expression_language/get_started#rag-search-example)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8208a8bc-c75f-4e8e-8601-680746cd6276",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"\n",
"rag_chain = (\n",
"    {\"context\": retriever, \"question\": RunnablePassthrough()}\n",
"    | prompt\n",
"    | llm\n",
"    | StrOutputParser()\n",
")\n",
"\n",
"rag_chain.invoke(\"What is Task Decomposition?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d21023fb-c570-4163-aa46-56dae4c67122",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
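The deleted notebook's `cosine_similarity` helper can be sanity-checked without any API calls. A small worked example (the vectors are ours) showing why 1 indicates identical direction:

```python
import numpy as np

def cosine_similarity(vec1, vec2):
    # Same formula as the notebook cell above.
    return np.dot(vec1, vec2) / (np.linalg.norm(vec1) * np.linalg.norm(vec2))

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0: identical direction
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0: orthogonal
print(cosine_similarity([1.0, 1.0], [2.0, 2.0]))  # 1.0: scale does not matter
```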
@@ -670,8 +670,6 @@ local_llm = HuggingFacePipeline(pipeline=pipe)
<CodeOutputBlock lang="python">

```
/workspace/langchain/.venv/lib/python3.9/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html
from .autonotebook import tqdm as notebook_tqdm
Loading checkpoint shards: 100%|██████████| 8/8 [00:32<00:00, 4.11s/it]
```
@@ -16,7 +16,8 @@ cp ../cookbook/README.md src/pages/cookbook.mdx
mkdir -p docs/templates
cp ../templates/docs/INDEX.md docs/templates/index.md
poetry run python scripts/copy_templates.py
wget https://raw.githubusercontent.com/langchain-ai/langserve/main/README.md -O docs/langserve.md
wget -q https://raw.githubusercontent.com/langchain-ai/langserve/main/README.md -O docs/langserve.md
wget -q https://raw.githubusercontent.com/langchain-ai/langgraph/main/README.md -O docs/langgraph.md

yarn
@@ -88,16 +88,20 @@ html_last_updated_fmt = "%b %d, %Y"
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
"sphinx.ext.autodoc",
"sphinx.ext.autodoc.typehints",
"sphinx.ext.autosummary",
"sphinx.ext.napoleon",
"sphinx.ext.viewcode",
"sphinxcontrib.autodoc_pydantic",
"sphinx_copybutton",
"sphinx_panels",
"IPython.sphinxext.ipython_console_highlighting",
"autoapi.extension"
# "sphinx.ext.autodoc",
# "sphinx.ext.autodoc.typehints",
# "sphinx.ext.autosummary",
# "sphinx.ext.napoleon",
# "sphinx.ext.viewcode",
# "sphinxcontrib.autodoc_pydantic",
# "sphinx_copybutton",
# "sphinx_panels",
# "IPython.sphinxext.ipython_console_highlighting",
]
autoapi_dirs = ["/Users/erickfriis/langchain/oss-py/libs/core/langchain_core"]
autoapi_python_use_implicit_namespaces = True

source_suffix = [".rst"]

# some autodoc pydantic options are repeated in the actual template.

File diff suppressed because one or more lines are too long
@@ -6,9 +6,9 @@ pydantic<2
autodoc_pydantic==1.8.0
myst_parser
nbsphinx==0.8.9
sphinx>=5
sphinx>=7
sphinx-autobuild==2021.3.14
sphinx_rtd_theme==1.0.0
sphinx_rtd_theme==2.0.0
sphinx-typlog-theme==0.8.0
sphinx-panels
toml
4 docs/docs/_templates/integration.mdx (vendored)

@@ -37,7 +37,7 @@ from langchain_community.llms import integration_class_REPLACE_ME

## Text Embedding Models

See a [usage example](/docs/integrations/text_embedding/INCLUDE_REAL_NAME)
See a [usage example](/docs/integrations/text_embedding/INCLUDE_REAL_NAME).

```python
from langchain_community.embeddings import integration_class_REPLACE_ME
@@ -45,7 +45,7 @@ from langchain_community.embeddings import integration_class_REPLACE_ME

## Chat models

See a [usage example](/docs/integrations/chat/INCLUDE_REAL_NAME)
See a [usage example](/docs/integrations/chat/INCLUDE_REAL_NAME).

```python
from langchain_community.chat_models import integration_class_REPLACE_ME

File diff suppressed because one or more lines are too long
@@ -85,21 +85,10 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": null,
"id": "2448b6c2",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Graph(nodes={'7308e6063c6d40818c5a0cc1cc7444f2': Node(id='7308e6063c6d40818c5a0cc1cc7444f2', data=<class 'pydantic.main.RunnableParallel<context,question>Input'>), '292bbd8021d44ec3a31fbe724d9002c1': Node(id='292bbd8021d44ec3a31fbe724d9002c1', data=<class 'pydantic.main.RunnableParallel<context,question>Output'>), '9212f219cf05488f95229c56ea02b192': Node(id='9212f219cf05488f95229c56ea02b192', data=VectorStoreRetriever(tags=['FAISS', 'OpenAIEmbeddings'], vectorstore=<langchain_community.vectorstores.faiss.FAISS object at 0x117334f70>)), 'c7a8e65fa5cf44b99dbe7d1d6e36886f': Node(id='c7a8e65fa5cf44b99dbe7d1d6e36886f', data=RunnablePassthrough()), '818b9bfd40a341008373d5b9f9d0784b': Node(id='818b9bfd40a341008373d5b9f9d0784b', data=ChatPromptTemplate(input_variables=['context', 'question'], messages=[HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['context', 'question'], template='Answer the question based only on the following context:\\n{context}\\n\\nQuestion: {question}\\n'))])), 'b9f1d3ddfa6b4334a16ea439df22b11e': Node(id='b9f1d3ddfa6b4334a16ea439df22b11e', data=ChatOpenAI(client=<class 'openai.api_resources.chat_completion.ChatCompletion'>, openai_api_key='sk-...', openai_proxy='')), '2bf84f6355c44731848345ca7d0f8ab9': Node(id='2bf84f6355c44731848345ca7d0f8ab9', data=StrOutputParser()), '1aeb2da5da5a43bb8771d3f338a473a2': Node(id='1aeb2da5da5a43bb8771d3f338a473a2', data=<class 'pydantic.main.StrOutputParserOutput'>)}, edges=[Edge(source='7308e6063c6d40818c5a0cc1cc7444f2', target='9212f219cf05488f95229c56ea02b192'), Edge(source='9212f219cf05488f95229c56ea02b192', target='292bbd8021d44ec3a31fbe724d9002c1'), Edge(source='7308e6063c6d40818c5a0cc1cc7444f2', target='c7a8e65fa5cf44b99dbe7d1d6e36886f'), Edge(source='c7a8e65fa5cf44b99dbe7d1d6e36886f', target='292bbd8021d44ec3a31fbe724d9002c1'), Edge(source='292bbd8021d44ec3a31fbe724d9002c1', target='818b9bfd40a341008373d5b9f9d0784b'), Edge(source='818b9bfd40a341008373d5b9f9d0784b', target='b9f1d3ddfa6b4334a16ea439df22b11e'), Edge(source='2bf84f6355c44731848345ca7d0f8ab9', target='1aeb2da5da5a43bb8771d3f338a473a2'), Edge(source='b9f1d3ddfa6b4334a16ea439df22b11e', target='2bf84f6355c44731848345ca7d0f8ab9')])"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"chain.get_graph()"
]
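`chain.get_graph()` in the hunk above returns a `Graph` of nodes and edges, as the captured output shows. A minimal sketch that needs no API keys (the toy chain is ours):

```python
from langchain_core.runnables import RunnableLambda

# Two trivial steps composed into a RunnableSequence.
chain = RunnableLambda(lambda x: x + 1) | RunnableLambda(lambda x: x * 2)
print(chain.invoke(3))  # (3 + 1) * 2 -> 8

graph = chain.get_graph()
print(len(graph.nodes), "nodes /", len(graph.edges), "edges")
```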
@@ -7,7 +7,7 @@
"source": [
"# Add message history (memory)\n",
"\n",
"The `RunnableWithMessageHistory` let's us add message history to certain types of chains.\n",
"The `RunnableWithMessageHistory` let us add message history to certain types of chains.\n",
"\n",
"Specifically, it can be used for any Runnable that takes as input one of\n",
"\n",
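A minimal runnable sketch of `RunnableWithMessageHistory` from the doc above, with a lambda standing in for a model so it runs offline (the in-memory store and names are ours; `ChatMessageHistory` is assumed to be available from `langchain_community`):

```python
from langchain_community.chat_message_histories import ChatMessageHistory
from langchain_core.messages import AIMessage, HumanMessage
from langchain_core.runnables import RunnableLambda
from langchain_core.runnables.history import RunnableWithMessageHistory

store = {}  # session_id -> ChatMessageHistory

def get_session_history(session_id: str) -> ChatMessageHistory:
    if session_id not in store:
        store[session_id] = ChatMessageHistory()
    return store[session_id]

# Stand-in "model": reports how many messages (history + new) it received.
fake_model = RunnableLambda(
    lambda messages: AIMessage(content=f"saw {len(messages)} messages")
)

chain = RunnableWithMessageHistory(fake_model, get_session_history)
config = {"configurable": {"session_id": "demo"}}

print(chain.invoke([HumanMessage(content="hi")], config=config).content)     # saw 1 messages
print(chain.invoke([HumanMessage(content="again")], config=config).content)  # saw 3 messages
```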
@@ -30,7 +30,7 @@
"- [`ainvoke`](#async-invoke): call the chain on an input async\n",
"- [`abatch`](#async-batch): call the chain on a list of inputs async\n",
"- [`astream_log`](#async-stream-intermediate-steps): stream back intermediate steps as they happen, in addition to the final response\n",
"- [`astream_events`](#async-stream-events): **beta** stream events as they happen in the chain (introduced in `langchain-core` 0.2.0)\n",
"- [`astream_events`](#async-stream-events): **beta** stream events as they happen in the chain (introduced in `langchain-core` 0.1.14)\n",
"\n",
"The **input type** and **output type** varies by component:\n",
"\n",
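A minimal sketch of the beta `astream_events` API mentioned above (the toy chain is ours; the event schema is versioned, hence `version="v1"`, and may change while the API is in beta):

```python
import asyncio

from langchain_core.runnables import RunnableLambda

chain = RunnableLambda(lambda x: x + 1) | RunnableLambda(lambda x: x * 2)

async def main() -> None:
    async for event in chain.astream_events(3, version="v1"):
        # Each event is a dict with keys such as "event" (e.g. on_chain_start,
        # on_chain_end) and "name".
        print(event["event"], event.get("name"))

asyncio.run(main())
```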
1393 docs/docs/expression_language/streaming.ipynb (new file)
File diff suppressed because it is too large
@@ -93,6 +93,3 @@ Head to the reference section for full documentation of all classes and methods
### [Developer's guide](/docs/contributing)
Check out the developer's guide for guidelines on contributing and help getting your dev environment set up.

### [Community](/docs/community)
Head to the [Community navigator](/docs/community) to find places to ask questions, share feedback, meet other developers, and dream about the future of LLMs.
@@ -184,7 +184,6 @@ A Retriever can be backed by anything - a SQL table, the internet, etc - but in

First, we need to load the data that we want to index. In order to do this, we will use the WebBaseLoader. This requires installing [BeautifulSoup](https://beautiful-soup-4.readthedocs.io/en/latest/):

```
```shell
pip install beautifulsoup4
```
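A minimal `WebBaseLoader` sketch matching the instruction above (the URL reuses one that appears elsewhere in these diffs; requires `beautifulsoup4` and network access):

```python
from langchain_community.document_loaders import WebBaseLoader

loader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")
docs = loader.load()
print(docs[0].metadata["source"], "->", len(docs[0].page_content), "characters")
```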
@@ -98,7 +98,7 @@ The LLM landscape is evolving at an unprecedented pace, with new libraries and m

### Model composition

Deploying systems like LangChain demands the ability to piece together different models and connect them via logic. Take the example of building a natural language input SQL query engine. Querying an LLM and obtaining the SQL command is only part of the system. You need to extract metadata from the connected database, construct a prompt for the LLM, run the SQL query on an engine, collect and feed back the response to the LLM as the query runs, and present the results to the user. This demonstrates the need to seamlessly integrate various complex components built in Python into a dynamic chain of logical blocks that can be served together.
Deploying systems like LangChain demands the ability to piece together different models and connect them via logic. Take the example of building a natural language input SQL query engine. Querying an LLM and obtaining the SQL command is only part of the system. You need to extract metadata from the connected database, construct a prompt for the LLM, run the SQL query on an engine, collect and feedback the response to the LLM as the query runs, and present the results to the user. This demonstrates the need to seamlessly integrate various complex components built in Python into a dynamic chain of logical blocks that can be served together.

## Cloud providers
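The composition described above can be sketched with plain runnables; every stage below is a stub of ours, standing in for the schema-extraction, prompting, LLM, and SQL-execution components the paragraph lists:

```python
from langchain_core.runnables import RunnableLambda

fetch_schema = RunnableLambda(lambda q: {"question": q, "schema": "users(id, name)"})
build_prompt = RunnableLambda(
    lambda d: f"Schema: {d['schema']}\nWrite SQL for: {d['question']}"
)
call_llm = RunnableLambda(lambda prompt: "SELECT name FROM users;")  # canned reply
run_sql = RunnableLambda(lambda sql: [("alice",), ("bob",)])  # canned result set

# Each stage feeds the next, mirroring the pipeline the paragraph describes.
engine = fetch_schema | build_prompt | call_llm | run_sql
print(engine.invoke("list all user names"))
```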
@@ -35,6 +35,22 @@
"from langchain_openai import OpenAI"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3dd69cb4",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"# get a new token: https://dashboard.cohere.ai/\n",
"os.environ[\"COHERE_API_KEY\"] = getpass.getpass(\"Cohere API Key:\")\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"OpenAI API Key:\")\n",
"os.environ[\"HUGGINGFACEHUB_API_TOKEN\"] = getpass.getpass(\"Hugging Face API Key:\")"
]
},
{
"cell_type": "code",
"execution_count": 2,
@@ -44,7 +60,7 @@
"source": [
"llms = [\n",
"    OpenAI(temperature=0),\n",
"    Cohere(model=\"command-xlarge-20221108\", max_tokens=20, temperature=0),\n",
"    Cohere(temperature=0),\n",
"    HuggingFaceHub(repo_id=\"google/flan-t5-xl\", model_kwargs={\"temperature\": 1}),\n",
"]"
]
@@ -160,7 +176,7 @@
"    llm=open_ai_llm, search_chain=search, verbose=True\n",
")\n",
"\n",
"cohere_llm = Cohere(temperature=0, model=\"command-xlarge-20221108\")\n",
"cohere_llm = Cohere(temperature=0)\n",
"search = SerpAPIWrapper()\n",
"self_ask_with_search_cohere = SelfAskWithSearchChain(\n",
"    llm=cohere_llm, search_chain=search, verbose=True\n",
@@ -241,14 +257,6 @@
"source": [
"model_lab.compare(\"What is the hometown of the reigning men's U.S. Open champion?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "94159131",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@@ -22,44 +22,84 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"execution_count": 1,
|
||||
"id": "d4a7c55d-b235-4ca4-a579-c90cc9570da9",
|
||||
"metadata": {
|
||||
"ExecuteTime": {
|
||||
"end_time": "2024-01-19T11:25:00.590587Z",
|
||||
"start_time": "2024-01-19T11:25:00.127293Z"
|
||||
},
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.schema import HumanMessage\n",
|
||||
"from langchain_community.chat_models import ChatAnthropic"
|
||||
"from langchain_community.chat_models import ChatAnthropic\n",
|
||||
"from langchain_core.prompts import ChatPromptTemplate"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"execution_count": 2,
|
||||
"id": "70cf04e8-423a-4ff6-8b09-f11fb711c817",
|
||||
"metadata": {
|
||||
"ExecuteTime": {
|
||||
"end_time": "2024-01-19T11:25:04.349676Z",
|
||||
"start_time": "2024-01-19T11:25:03.964930Z"
|
||||
},
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"chat = ChatAnthropic()"
|
||||
"chat = ChatAnthropic(temperature=0, model_name=\"claude-2\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "d1f9df276476f0bc",
|
||||
"metadata": {
|
||||
"collapsed": false
|
||||
},
|
||||
"source": [
|
||||
"The code provided assumes that your ANTHROPIC_API_KEY is set in your environment variables. If you would like to manually specify your API key and also choose a different model, you can use the following code:\n",
|
||||
"```python\n",
|
||||
"chat = ChatAnthropic(temperature=0, anthropic_api_key=\"YOUR_API_KEY\", model_name=\"claude-instant-1.2\")\n",
|
||||
"\n",
|
||||
"```\n",
|
||||
"Please note that the default model is \"claude-2,\" and you can check the available models at [here](https://docs.anthropic.com/claude/reference/selecting-a-model)."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"execution_count": 3,
|
||||
"id": "8199ef8f-eb8b-4253-9ea0-6c24a013ca4c",
|
||||
"metadata": {
|
||||
"ExecuteTime": {
|
||||
"end_time": "2024-01-19T11:25:07.274418Z",
|
||||
"start_time": "2024-01-19T11:25:05.898031Z"
|
||||
},
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": "AIMessage(content=' 저는 파이썬을 좋아합니다.')"
|
||||
},
|
||||
"execution_count": 3,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"messages = [\n",
|
||||
" HumanMessage(\n",
|
||||
" content=\"Translate this sentence from English to French. I love programming.\"\n",
|
||||
" )\n",
|
||||
"]\n",
|
||||
"chat.invoke(messages)"
|
||||
"system = \"You are a helpful assistant that translates {input_language} to {output_language}.\"\n",
|
||||
"human = \"{text}\"\n",
|
||||
"prompt = ChatPromptTemplate.from_messages([(\"system\", system), (\"human\", human)])\n",
|
||||
"\n",
|
||||
"chain = prompt | chat\n",
|
||||
"chain.invoke({\n",
|
||||
" \"input_language\": \"English\",\n",
|
||||
" \"output_language\": \"Korean\",\n",
|
||||
" \"text\": \"I love Python\",\n",
|
||||
"})"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -72,44 +112,78 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "93a21c5c-6ef9-4688-be60-b2e1f94842fb",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.callbacks.manager import CallbackManager\n",
|
||||
"from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"execution_count": 4,
|
||||
"id": "c5fac0e9-05a4-4fc1-a3b3-e5bbb24b971b",
|
||||
"metadata": {
|
||||
"ExecuteTime": {
|
||||
"end_time": "2024-01-19T11:25:10.448733Z",
|
||||
"start_time": "2024-01-19T11:25:08.866277Z"
|
||||
},
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": "AIMessage(content=\" Why don't bears like fast food? Because they can't catch it!\")"
|
||||
},
|
||||
"execution_count": 4,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"await chat.ainvoke([messages])"
|
||||
"chat = ChatAnthropic(temperature=0, model_name=\"claude-2\")\n",
|
||||
"prompt = ChatPromptTemplate.from_messages([(\"human\", \"Tell me a joke about {topic}\")])\n",
|
||||
"chain = prompt | chat\n",
|
||||
"await chain.ainvoke({\"topic\": \"bear\"})"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"execution_count": 5,
|
||||
"id": "025be980-e50d-4a68-93dc-c9c7b500ce34",
|
||||
"metadata": {
|
||||
"ExecuteTime": {
|
||||
"end_time": "2024-01-19T11:25:24.438696Z",
|
||||
"start_time": "2024-01-19T11:25:14.687480Z"
|
||||
},
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
" Here are some of the most famous tourist attractions in Japan:\n",
|
||||
"\n",
|
||||
"- Tokyo - Tokyo Tower, Tokyo Skytree, Imperial Palace, Sensoji Temple, Meiji Shrine, Shibuya Crossing\n",
|
||||
"\n",
|
||||
"- Kyoto - Kinkakuji (Golden Pavilion), Fushimi Inari Shrine, Kiyomizu-dera Temple, Arashiyama Bamboo Grove, Gion Geisha District\n",
|
||||
"\n",
|
||||
"- Osaka - Osaka Castle, Dotonbori, Universal Studios Japan, Osaka Aquarium Kaiyukan \n",
|
||||
"\n",
|
||||
"- Hiroshima - Hiroshima Peace Memorial Park and Museum, Itsukushima Shrine (Miyajima Island)\n",
|
||||
"\n",
|
||||
"- Mount Fuji - Iconic and famous mountain, popular for hiking and viewing from places like Hakone and Kawaguchiko Lake\n",
|
||||
"\n",
|
||||
"- Himeji - Himeji Castle, one of Japan's most impressive feudal castles\n",
|
||||
"\n",
|
||||
"- Nara - Todaiji Temple, Nara Park with its bowing deer, Horyuji Temple with some of world's oldest wooden structures \n",
|
||||
"\n",
|
||||
"- Nikko - Elaborate shrines and temples nestled around Nikko National Park\n",
|
||||
"\n",
|
||||
"- Sapporo - Snow"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"chat = ChatAnthropic(\n",
|
||||
" streaming=True,\n",
|
||||
" verbose=True,\n",
|
||||
" callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),\n",
|
||||
"chat = ChatAnthropic(temperature=0.3, model_name=\"claude-2\")\n",
|
||||
"prompt = ChatPromptTemplate.from_messages(\n",
|
||||
" [(\"human\", \"Give me a list of famous tourist attractions in Japan\")]\n",
|
||||
")\n",
|
||||
"chat.stream(messages)"
|
||||
"chain = prompt | chat\n",
|
||||
"for chunk in chain.stream({}):\n",
|
||||
" print(chunk.content, end=\"\", flush=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -134,15 +208,130 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"execution_count": 6,
|
||||
"id": "07c47c2a",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"metadata": {
|
||||
"ExecuteTime": {
|
||||
"end_time": "2024-01-19T11:25:25.288133Z",
|
||||
"start_time": "2024-01-19T11:25:24.438968Z"
|
||||
}
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": "AIMessage(content='파이썬을 사랑합니다.')"
|
||||
},
|
||||
"execution_count": 6,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from langchain_anthropic import ChatAnthropicMessages\n",
|
||||
"\n",
|
||||
"chat = ChatAnthropicMessages(model_name=\"claude-instant-1.2\")\n",
|
||||
"chat.invoke(messages)"
|
||||
"system = (\n",
|
||||
" \"You are a helpful assistant that translates {input_language} to {output_language}.\"\n",
|
||||
")\n",
|
||||
"human = \"{text}\"\n",
|
||||
"prompt = ChatPromptTemplate.from_messages([(\"system\", system), (\"human\", human)])\n",
|
||||
"\n",
|
||||
"chain = prompt | chat\n",
|
||||
"chain.invoke(\n",
|
||||
" {\n",
|
||||
" \"input_language\": \"English\",\n",
|
||||
" \"output_language\": \"Korean\",\n",
|
||||
" \"text\": \"I love Python\",\n",
|
||||
" }\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "19e53d75935143fd",
|
||||
"metadata": {
|
||||
"collapsed": false
|
||||
},
|
||||
"source": [
|
||||
"ChatAnthropicMessages also requires the anthropic_api_key argument, or the ANTHROPIC_API_KEY environment variable must be set. \n",
|
||||
"\n",
|
||||
"ChatAnthropicMessages also supports async and streaming functionality:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"id": "e20a139d30e3d333",
|
||||
"metadata": {
|
||||
"ExecuteTime": {
|
||||
"end_time": "2024-01-19T11:25:26.012325Z",
|
||||
"start_time": "2024-01-19T11:25:25.288358Z"
|
||||
},
|
||||
"collapsed": false
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": "AIMessage(content='파이썬을 사랑합니다.')"
|
||||
},
|
||||
"execution_count": 7,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"await chain.ainvoke(\n",
|
||||
" {\n",
|
||||
" \"input_language\": \"English\",\n",
|
||||
" \"output_language\": \"Korean\",\n",
|
||||
" \"text\": \"I love Python\",\n",
|
||||
" }\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"id": "6f34f1073d7e7120",
|
||||
"metadata": {
|
||||
"ExecuteTime": {
|
||||
"end_time": "2024-01-19T11:25:28.323455Z",
|
||||
"start_time": "2024-01-19T11:25:26.012040Z"
|
||||
},
|
||||
"collapsed": false
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Here are some of the most famous tourist attractions in Japan:\n",
|
||||
"\n",
|
||||
"- Tokyo Tower - A communication and observation tower in Tokyo modeled after the Eiffel Tower. It offers stunning views of the city.\n",
|
||||
"\n",
|
||||
"- Mount Fuji - Japan's highest and most famous mountain. It's a iconic symbol of Japan and a UNESCO World Heritage Site. \n",
|
||||
"\n",
|
||||
"- Itsukushima Shrine (Miyajima) - A shrine located on an island in Hiroshima prefecture, known for its \"floating\" torii gate that seems to float on water during high tide.\n",
|
||||
"\n",
|
||||
"- Himeji Castle - A UNESCO World Heritage Site famous for having withstood numerous battles without destruction to its intricate white walls and sloping, triangular roofs. \n",
|
||||
"\n",
|
||||
"- Kawaguchiko Station - Near Mount Fuji, this area is known for its scenic Fuji Five Lakes region. \n",
|
||||
"\n",
|
||||
"- Hiroshima Peace Memorial Park and Museum - Commemorates the world's first atomic bombing in Hiroshima on August 6, 1945. \n",
|
||||
"\n",
|
||||
"- Arashiyama Bamboo Grove - A renowned bamboo forest located in Kyoto that draws many visitors.\n",
|
||||
"\n",
|
||||
"- Kegon Falls - One of Japan's largest waterfalls"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"prompt = ChatPromptTemplate.from_messages(\n",
|
||||
" [(\"human\", \"Give me a list of famous tourist attractions in Japan\")]\n",
|
||||
")\n",
|
||||
"chain = prompt | chat\n",
|
||||
"for chunk in chain.stream({}):\n",
|
||||
" print(chunk.content, end=\"\", flush=True)"
|
||||
]
|
||||
}
|
||||
],
|
||||
|
||||
@@ -15,9 +15,9 @@
|
||||
"source": [
|
||||
"# AzureMLChatOnlineEndpoint\n",
|
||||
"\n",
|
||||
">[Azure Machine Learning](https://azure.microsoft.com/en-us/products/machine-learning/) is a platform used to build, train, and deploy machine learning models. Users can explore the types of models to deploy in the Model Catalog, which provides Azure Foundation Models and OpenAI Models. `Azure Foundation Models` include various open-source models and popular Hugging Face models. Users can also import models of their liking into AzureML.\n",
|
||||
">[Azure Machine Learning](https://azure.microsoft.com/en-us/products/machine-learning/) is a platform used to build, train, and deploy machine learning models. Users can explore the types of models to deploy in the Model Catalog, which provides foundational and general purpose models from different providers.\n",
|
||||
">\n",
|
||||
">[Azure Machine Learning Online Endpoints](https://learn.microsoft.com/en-us/azure/machine-learning/concept-endpoints). After you train machine learning models or pipelines, you need to deploy them to production so that others can use them for inference. Inference is the process of applying new input data to the machine learning model or pipeline to generate outputs. While these outputs are typically referred to as \"predictions,\" inferencing can be used to generate outputs for other machine learning tasks, such as classification and clustering. In `Azure Machine Learning`, you perform inferencing by using endpoints and deployments. `Endpoints` and `Deployments` allow you to decouple the interface of your production workload from the implementation that serves it.\n",
|
||||
">In general, you need to deploy models in order to consume its predictions (inference). In `Azure Machine Learning`, [Online Endpoints](https://learn.microsoft.com/en-us/azure/machine-learning/concept-endpoints) are used to deploy these models with a real-time serving. They are based on the ideas of `Endpoints` and `Deployments` which allow you to decouple the interface of your production workload from the implementation that serves it.\n",
|
||||
"\n",
|
||||
"This notebook goes over how to use a chat model hosted on an `Azure Machine Learning Endpoint`."
|
||||
]
|
||||
@@ -37,10 +37,11 @@
|
||||
"source": [
|
||||
"## Set up\n",
|
||||
"\n",
|
||||
"To use the wrapper, you must [deploy a model on AzureML](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-use-foundation-models?view=azureml-api-2#deploying-foundation-models-to-endpoints-for-inferencing) and obtain the following parameters:\n",
|
||||
"You must [deploy a model on Azure ML](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-use-foundation-models?view=azureml-api-2#deploying-foundation-models-to-endpoints-for-inferencing) or [to Azure AI studio](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-open) and obtain the following parameters:\n",
|
||||
"\n",
|
||||
"* `endpoint_api_key`: The API key provided by the endpoint\n",
|
||||
"* `endpoint_url`: The REST endpoint url provided by the endpoint"
|
||||
"* `endpoint_url`: The REST endpoint url provided by the endpoint.\n",
|
||||
"* `endpoint_api_type`: Use `endpoint_type='realtime'` when deploying models to **Realtime endpoints** (hosted managed infrastructure). Use `endpoint_type='serverless'` when deploying models using the **Pay-as-you-go** offering (model as a service).\n",
|
||||
"* `endpoint_api_key`: The API key provided by the endpoint"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -51,7 +52,40 @@
|
||||
"\n",
|
||||
"The `content_formatter` parameter is a handler class for transforming the request and response of an AzureML endpoint to match with required schema. Since there are a wide range of models in the model catalog, each of which may process data differently from one another, a `ContentFormatterBase` class is provided to allow users to transform data to their liking. The following content formatters are provided:\n",
|
||||
"\n",
|
||||
"* `LLamaContentFormatter`: Formats request and response data for LLaMa2-chat"
|
||||
"* `LLamaChatContentFormatter`: Formats request and response data for LLaMa2-chat\n",
|
||||
"\n",
|
||||
"*Note: `langchain.chat_models.azureml_endpoint.LLamaContentFormatter` is being deprecated and replaced with `langchain.chat_models.azureml_endpoint.LLamaChatContentFormatter`.*\n",
|
||||
"\n",
|
||||
"You can implement custom content formatters specific for your model deriving from the class `langchain_community.llms.azureml_endpoint.ContentFormatterBase`."
|
||||
]
|
||||
},
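A sketch of what a custom formatter can look like. The hook names (`format_request_payload` / `format_response_payload`) and the payload schema below are assumptions modeled on the built-in formatters; verify them against `ContentFormatterBase` in your installed version:

```python
import json

from langchain_community.llms.azureml_endpoint import ContentFormatterBase


class CustomChatContentFormatter(ContentFormatterBase):
    """Hypothetical formatter for a deployment expecting {"inputs": [...]} payloads."""

    content_type = "application/json"
    accepts = "application/json"

    def format_request_payload(self, prompt: str, model_kwargs: dict) -> bytes:
        # Wrap the prompt in whatever request schema your deployment expects.
        return json.dumps({"inputs": [prompt], "parameters": model_kwargs}).encode("utf-8")

    def format_response_payload(self, output: bytes) -> str:
        # Unwrap the deployment's response into a plain string.
        return json.loads(output)[0]["generated_text"]
```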
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Examples\n",
|
||||
"\n",
|
||||
"The following section cotain examples about how to use this class:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.schema import HumanMessage\n",
|
||||
"from langchain_community.chat_models.azureml_endpoint import (\n",
|
||||
" AzureMLEndpointApiType,\n",
|
||||
" LlamaChatContentFormatter,\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Example: Chat completions with real-time endpoints"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -76,11 +110,79 @@
|
||||
"\n",
|
||||
"chat = AzureMLChatOnlineEndpoint(\n",
|
||||
" endpoint_url=\"https://<your-endpoint>.<your_region>.inference.ml.azure.com/score\",\n",
|
||||
" endpoint_api_type=AzureMLEndpointApiType.realtime,\n",
|
||||
" endpoint_api_key=\"my-api-key\",\n",
|
||||
" content_formatter=LlamaContentFormatter,\n",
|
||||
" content_formatter=LlamaChatContentFormatter(),\n",
|
||||
")\n",
|
||||
"response = chat(\n",
|
||||
" messages=[HumanMessage(content=\"Will the Collatz conjecture ever be solved?\")]\n",
|
||||
"response = chat.invoke(\n",
|
||||
" [HumanMessage(content=\"Will the Collatz conjecture ever be solved?\")]\n",
|
||||
")\n",
|
||||
"response"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Example: Chat completions with pay-as-you-go deployments (model as a service)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"chat = AzureMLChatOnlineEndpoint(\n",
|
||||
" endpoint_url=\"https://<your-endpoint>.<your_region>.inference.ml.azure.com/v1/chat/completions\",\n",
|
||||
" endpoint_api_type=AzureMLEndpointApiType.serverless,\n",
|
||||
" endpoint_api_key=\"my-api-key\",\n",
|
||||
" content_formatter=LlamaChatContentFormatter,\n",
|
||||
")\n",
|
||||
"response = chat.invoke(\n",
|
||||
" [HumanMessage(content=\"Will the Collatz conjecture ever be solved?\")]\n",
|
||||
")\n",
|
||||
"response"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"If you need to pass additional parameters to the model, use `model_kwards` argument:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"chat = AzureMLChatOnlineEndpoint(\n",
|
||||
" endpoint_url=\"https://<your-endpoint>.<your_region>.inference.ml.azure.com/v1/chat/completions\",\n",
|
||||
" endpoint_api_type=AzureMLEndpointApiType.serverless,\n",
|
||||
" endpoint_api_key=\"my-api-key\",\n",
|
||||
" content_formatter=LlamaChatContentFormatter,\n",
|
||||
" model_kwargs={\"temperature\": 0.8},\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Parameters can also be passed during invocation:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"response = chat.invoke(\n",
|
||||
" [HumanMessage(content=\"Will the Collatz conjecture ever be solved?\")],\n",
|
||||
" max_tokens=512,\n",
|
||||
")\n",
|
||||
"response"
|
||||
]
|
||||
|
||||
@@ -13,7 +13,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# ChatBaichuan\n",
|
||||
"# Chat with Baichuan-192K\n",
|
||||
"\n",
|
||||
"Baichuan chat models API by Baichuan Intelligent Technology. For more information, see [https://platform.baichuan-ai.com/docs/api](https://platform.baichuan-ai.com/docs/api)"
|
||||
]
|
||||
@@ -44,20 +44,25 @@
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"chat = ChatBaichuan(\n",
|
||||
" baichuan_api_key=\"YOUR_API_KEY\", baichuan_secret_key=\"YOUR_SECRET_KEY\"\n",
|
||||
")"
|
||||
"chat = ChatBaichuan(baichuan_api_key=\"YOUR_API_KEY\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"or you can set `api_key` and `secret_key` in your environment variables\n",
|
||||
"```bash\n",
|
||||
"export BAICHUAN_API_KEY=YOUR_API_KEY\n",
|
||||
"export BAICHUAN_SECRET_KEY=YOUR_SECRET_KEY\n",
|
||||
"```"
|
||||
"Alternatively, you can set your API key with:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import os\n",
|
||||
"\n",
|
||||
"os.environ[\"BAICHUAN_API_KEY\"] = \"YOUR_API_KEY\""
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -91,7 +96,7 @@
|
||||
"collapsed": false
|
||||
},
|
||||
"source": [
|
||||
"## For ChatBaichuan with Streaming"
|
||||
"## Chat with Baichuan-192K with Streaming"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -108,7 +113,6 @@
|
||||
"source": [
|
||||
"chat = ChatBaichuan(\n",
|
||||
" baichuan_api_key=\"YOUR_API_KEY\",\n",
|
||||
" baichuan_secret_key=\"YOUR_SECRET_KEY\",\n",
|
||||
" streaming=True,\n",
|
||||
")"
|
||||
]
|
||||
|
||||
272
docs/docs/integrations/chat/edenai.ipynb
Normal file
@@ -0,0 +1,272 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Eden AI"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Eden AI is revolutionizing the AI landscape by uniting the best AI providers, empowering users to unlock limitless possibilities and tap into the true potential of artificial intelligence. With an all-in-one comprehensive and hassle-free platform, it allows users to deploy AI features to production lightning fast, enabling effortless access to the full breadth of AI capabilities via a single API. (website: https://edenai.co/)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"This example goes over how to use LangChain to interact with Eden AI models\n",
|
||||
"\n",
|
||||
"-----------------------------------------------------------------------------------"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"`EdenAI` goes beyond mere model invocation. It empowers you with advanced features, including:\n",
|
||||
"\n",
|
||||
"- **Multiple Providers**: Gain access to a diverse range of language models offered by various providers, giving you the freedom to choose the best-suited model for your use case.\n",
|
||||
"\n",
|
||||
"- **Fallback Mechanism**: Set a fallback mechanism to ensure seamless operations even if the primary provider is unavailable, you can easily switches to an alternative provider.\n",
|
||||
"\n",
|
||||
"- **Usage Tracking**: Track usage statistics on a per-project and per-API key basis. This feature allows you to monitor and manage resource consumption effectively.\n",
|
||||
"\n",
|
||||
"- **Monitoring and Observability**: `EdenAI` provides comprehensive monitoring and observability tools on the platform. Monitor the performance of your language models, analyze usage patterns, and gain valuable insights to optimize your applications.\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Accessing the EDENAI's API requires an API key, \n",
|
||||
"\n",
|
||||
"which you can get by creating an account https://app.edenai.run/user/register and heading here https://app.edenai.run/admin/iam/api-keys\n",
|
||||
"\n",
|
||||
"Once we have a key we'll want to set it as an environment variable by running:\n",
|
||||
"\n",
|
||||
"```bash\n",
|
||||
"export EDENAI_API_KEY=\"...\"\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"You can find more details on the API reference : https://docs.edenai.co/reference"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"If you'd prefer not to set an environment variable you can pass the key in directly via the edenai_api_key named parameter\n",
|
||||
"\n",
|
||||
" when initiating the EdenAI Chat Model class."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_community.chat_models.edenai import ChatEdenAI\n",
|
||||
"from langchain_core.messages import HumanMessage"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"chat = ChatEdenAI(\n",
|
||||
" edenai_api_key=\"...\", provider=\"openai\", temperature=0.2, max_tokens=250\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"AIMessage(content='Hello! How can I assist you today?')"
|
||||
]
|
||||
},
|
||||
"execution_count": 3,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"messages = [HumanMessage(content=\"Hello !\")]\n",
|
||||
"chat.invoke(messages)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"AIMessage(content='Hello! How can I assist you today?')"
|
||||
]
|
||||
},
|
||||
"execution_count": 4,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"await chat.ainvoke(messages)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Streaming and Batching\n",
|
||||
"\n",
|
||||
"`ChatEdenAI` supports streaming and batching. Below is an example."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Hello! How can I assist you today?"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"for chunk in chat.stream(messages):\n",
|
||||
" print(chunk.content, end=\"\", flush=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"[AIMessage(content='Hello! How can I assist you today?')]"
|
||||
]
|
||||
},
|
||||
"execution_count": 6,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"chat.batch([messages])"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Fallback mecanism\n",
|
||||
"\n",
|
||||
"With Eden AI you can set a fallback mechanism to ensure seamless operations even if the primary provider is unavailable, you can easily switches to an alternative provider."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"chat = ChatEdenAI(\n",
|
||||
" edenai_api_key=\"...\",\n",
|
||||
" provider=\"openai\",\n",
|
||||
" temperature=0.2,\n",
|
||||
" max_tokens=250,\n",
|
||||
" fallback_providers=\"google\",\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"In this example, you can use Google as a backup provider if OpenAI encounters any issues.\n",
|
||||
"\n",
|
||||
"For more information and details about Eden AI, check out this link: : https://docs.edenai.co/docs/additional-parameters"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Chaining Calls\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_core.prompts import ChatPromptTemplate\n",
|
||||
"\n",
|
||||
"prompt = ChatPromptTemplate.from_template(\n",
|
||||
" \"What is a good name for a company that makes {product}?\"\n",
|
||||
")\n",
|
||||
"chain = prompt | chat"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 9,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"AIMessage(content='VitalBites')"
|
||||
]
|
||||
},
|
||||
"execution_count": 9,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"chain.invoke({\"product\": \"healthy snacks\"})"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "langchain-pr",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.10.12"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
}
|
||||
@@ -18,6 +18,14 @@
|
||||
"\n",
|
||||
"Note: This is separate from the Google PaLM integration. Google has chosen to offer an enterprise version of PaLM through GCP, and this supports the models made available through there. \n",
|
||||
"\n",
|
||||
"ChatVertexAI exposes all foundational models available in Google Cloud:\n",
|
||||
"\n",
|
||||
"- Gemini (`gemini-pro` and `gemini-pro-vision`)\n",
|
||||
"- PaLM 2 for Text (`text-bison`)\n",
|
||||
"- Codey for Code Generation (`codechat-bison`)\n",
|
||||
"\n",
|
||||
"For a full and updated list of available models visit [VertexAI documentation](https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/overview).\n",
|
||||
"\n",
|
||||
"By default, Google Cloud [does not use](https://cloud.google.com/vertex-ai/docs/generative-ai/data-governance#foundation_model_development) customer data to train its foundation models as part of Google Cloud`s AI/ML Privacy Commitment. More details about how Google processes data can also be found in [Google's Customer Data Processing Addendum (CDPA)](https://cloud.google.com/terms/data-processing-addendum).\n",
|
||||
"\n",
|
||||
"To use `Google Cloud Vertex AI` PaLM you must have the `langchain-google-vertexai` Python package installed and either:\n",
|
||||
@@ -35,9 +43,7 @@
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%pip install --upgrade --quiet langchain-google-vertexai"
|
||||
@@ -45,7 +51,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
@@ -64,7 +70,7 @@
|
||||
"AIMessage(content=\" J'aime la programmation.\")"
|
||||
]
|
||||
},
|
||||
"execution_count": 8,
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
@@ -98,7 +104,7 @@
|
||||
"AIMessage(content=\"J'aime la programmation.\")"
|
||||
]
|
||||
},
|
||||
"execution_count": 9,
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
@@ -123,7 +129,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
@@ -132,7 +138,7 @@
|
||||
"AIMessage(content=' プログラミングが大好きです')"
|
||||
]
|
||||
},
|
||||
"execution_count": 4,
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
@@ -159,28 +165,17 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {
|
||||
"execution": {
|
||||
"iopub.execute_input": "2023-06-17T21:09:25.423568Z",
|
||||
"iopub.status.busy": "2023-06-17T21:09:25.423213Z",
|
||||
"iopub.status.idle": "2023-06-17T21:09:25.429641Z",
|
||||
"shell.execute_reply": "2023-06-17T21:09:25.429060Z",
|
||||
"shell.execute_reply.started": "2023-06-17T21:09:25.423546Z"
|
||||
},
|
||||
"tags": []
|
||||
},
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Code generation chat models\n",
|
||||
"You can now leverage the Codey API for code chat within Vertex AI. The model name is:\n",
|
||||
"- codechat-bison: for code assistance"
|
||||
"You can now leverage the Codey API for code chat within Vertex AI. The model available is:\n",
|
||||
"- `codechat-bison`: for code assistance"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
@@ -242,7 +237,7 @@
|
||||
" model_name=\"codechat-bison\", max_output_tokens=1000, temperature=0.5\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"message = chat.invoke(\"Write a Python function to identify all prime numbers\")\n",
|
||||
"message = chat.invoke(\"Write a Python function generating all prime numbers\")\n",
|
||||
"print(message.content)"
|
||||
]
|
||||
},
|
||||
@@ -266,7 +261,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 12,
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
@@ -320,7 +315,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 13,
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
@@ -353,7 +348,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
@@ -362,7 +357,7 @@
|
||||
"MyModel(name='Erick', age=27)"
|
||||
]
|
||||
},
|
||||
"execution_count": 3,
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
@@ -389,7 +384,7 @@
|
||||
"source": [
|
||||
"## Asynchronous calls\n",
|
||||
"\n",
|
||||
"We can make asynchronous calls via the Runnables [Async Interface](/docs/expression_language/interface)"
|
||||
"We can make asynchronous calls via the Runnables [Async Interface](/docs/expression_language/interface)."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -414,10 +409,10 @@
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"AIMessage(content=' Why do you love programming?')"
|
||||
"AIMessage(content=' अहं प्रोग्रामनं प्रेमामि')"
|
||||
]
|
||||
},
|
||||
"execution_count": 6,
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
@@ -428,6 +423,10 @@
|
||||
")\n",
|
||||
"human = \"{text}\"\n",
|
||||
"prompt = ChatPromptTemplate.from_messages([(\"system\", system), (\"human\", human)])\n",
|
||||
"\n",
|
||||
"chat = ChatVertexAI(\n",
|
||||
" model_name=\"chat-bison\", max_output_tokens=1000, temperature=0.5\n",
|
||||
")\n",
|
||||
"chain = prompt | chat\n",
|
||||
"\n",
|
||||
"asyncio.run(\n",
|
||||
@@ -483,43 +482,15 @@
|
||||
" sys.stdout.write(chunk.content)\n",
|
||||
" sys.stdout.flush()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"environment": {
|
||||
"kernel": "python3",
|
||||
"name": "common-cpu.m108",
|
||||
"type": "gcloud",
|
||||
"uri": "gcr.io/deeplearning-platform-release/base-cpu:m108"
|
||||
},
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
"display_name": "",
|
||||
"name": ""
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.10.10"
|
||||
},
|
||||
"vscode": {
|
||||
"interpreter": {
|
||||
"hash": "cc99336516f23363341912c6723b01ace86f02e26b4290be1efc0677e2e2ec24"
|
||||
}
|
||||
"name": "python"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
|
||||
@@ -4,9 +4,9 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Hugging Face Chat Wrapper\n",
|
||||
"# Hugging Face\n",
|
||||
"\n",
|
||||
"This notebook shows how to get started using Hugging Face LLM's as chat models.\n",
|
||||
"This notebook shows how to get started using `Hugging Face` LLM's as chat models.\n",
|
||||
"\n",
|
||||
"In particular, we will:\n",
|
||||
"1. Utilize the [HuggingFaceTextGenInference](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/llms/huggingface_text_gen_inference.py), [HuggingFaceEndpoint](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/llms/huggingface_endpoint.py), or [HuggingFaceHub](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/llms/huggingface_hub.py) integrations to instantiate an `LLM`.\n",
|
||||
@@ -26,8 +26,6 @@
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"\u001b[33mWARNING: You are using pip version 22.0.4; however, version 23.3.1 is available.\n",
|
||||
"You should consider upgrading via the '/Users/jacoblee/langchain/langchain/libs/langchain/.venv/bin/python -m pip install --upgrade pip' command.\u001b[0m\u001b[33m\n",
|
||||
"\u001b[0mNote: you may need to restart the kernel to use updated packages.\n"
|
||||
]
|
||||
}
|
||||
@@ -49,23 +47,14 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### `HuggingFaceTextGenInference`"
|
||||
"### `HuggingFaceTextGenInference`"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stderr",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"/Users/jacoblee/langchain/langchain/libs/langchain/.venv/lib/python3.10/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
|
||||
" from .autonotebook import tqdm as notebook_tqdm\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import os\n",
|
||||
"\n",
|
||||
@@ -93,7 +82,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### `HuggingFaceEndpoint`"
|
||||
"### `HuggingFaceEndpoint`"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -121,7 +110,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"#### `HuggingFaceHub`"
|
||||
"### `HuggingFaceHub`"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -291,7 +280,7 @@
|
||||
"source": [
|
||||
"## 3. Take it for a spin as an agent!\n",
|
||||
"\n",
|
||||
"Here we'll test out `Zephyr-7B-beta` as a zero-shot ReAct Agent. The example below is taken from [here](https://python.langchain.com/docs/modules/agents/agent_types/react#using-chat-models).\n",
|
||||
"Here we'll test out `Zephyr-7B-beta` as a zero-shot `ReAct` Agent. The example below is taken from [here](https://python.langchain.com/docs/modules/agents/agent_types/react#using-chat-models).\n",
|
||||
"\n",
|
||||
"> Note: To run this section, you'll need to have a [SerpAPI Token](https://serpapi.com/) saved as an environment variable: `SERPAPI_API_KEY`"
|
||||
]
|
||||
@@ -448,7 +437,7 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.10.5"
|
||||
"version": "3.10.12"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
|
||||
@@ -21,17 +21,31 @@
|
||||
"\n",
|
||||
"1. Select the right LLM(s) for their application\n",
|
||||
"2. Prototype with various open-source and proprietary LLMs\n",
|
||||
"3. Move to production in-line with their security, privacy, throughput, latency SLAs without infrastructure set-up or administration using Konko AI's SOC 2 compliant infrastructure\n",
|
||||
"3. Access Fine Tuning for open-source LLMs to get industry-leading performance at a fraction of the cost\n",
|
||||
"4. Setup low-cost production APIs according to security, privacy, throughput, latency SLAs without infrastructure set-up or administration using Konko AI's SOC 2 compliant, multi-cloud infrastructure\n",
|
||||
"\n",
|
||||
"### Steps to Access Models\n",
|
||||
"1. **Explore Available Models:** Start by browsing through the [available models](https://docs.konko.ai/docs/list-of-models) on Konko. Each model caters to different use cases and capabilities.\n",
|
||||
"\n",
|
||||
"This example goes over how to use LangChain to interact with `Konko` [models](https://docs.konko.ai/docs/overview)"
|
||||
"2. **Identify Suitable Endpoints:** Determine which [endpoint](https://docs.konko.ai/docs/list-of-models#list-of-available-models) (ChatCompletion or Completion) supports your selected model.\n",
|
||||
"\n",
|
||||
"3. **Selecting a Model:** [Choose a model](https://docs.konko.ai/docs/list-of-models#list-of-available-models) based on its metadata and how well it fits your use case.\n",
|
||||
"\n",
|
||||
"4. **Prompting Guidelines:** Once a model is selected, refer to the [prompting guidelines](https://docs.konko.ai/docs/prompting) to effectively communicate with it.\n",
|
||||
"\n",
|
||||
"5. **Using the API:** Finally, use the appropriate Konko [API endpoint](https://docs.konko.ai/docs/quickstart-for-completion-and-chat-completion-endpoint) to call the model and receive responses.\n",
|
||||
"\n",
|
||||
"To run this notebook, you'll need Konko API key. You can create one by signing up on [Konko](https://www.konko.ai/).\n",
|
||||
"\n",
|
||||
"This example goes over how to use LangChain to interact with `Konko` ChatCompletion [models](https://docs.konko.ai/docs/list-of-models#konko-hosted-models-for-chatcompletion)\n",
|
||||
"\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"To run this notebook, you'll need Konko API key. You can request it by messaging support@konko.ai."
|
||||
"To run this notebook, you'll need Konko API key. You can create one by signing up on [Konko](https://www.konko.ai/)."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -84,36 +98,34 @@
|
||||
"source": [
|
||||
"## Calling a model\n",
|
||||
"\n",
|
||||
"Find a model on the [Konko overview page](https://docs.konko.ai/docs/overview)\n",
|
||||
"Find a model on the [Konko overview page](https://docs.konko.ai/v0.5.0/docs/list-of-models)\n",
|
||||
"\n",
|
||||
"For example, for this [LLama 2 model](https://docs.konko.ai/docs/meta-llama-2-13b-chat). The model id would be: `\"meta-llama/Llama-2-13b-chat-hf\"`\n",
|
||||
"\n",
|
||||
"Another way to find the list of models running on the Konko instance is through this [endpoint](https://docs.konko.ai/reference/listmodels).\n",
|
||||
"Another way to find the list of models running on the Konko instance is through this [endpoint](https://docs.konko.ai/reference/get-models).\n",
|
||||
"\n",
|
||||
"From here, we can initialize our model:\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"execution_count": 2,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"chat = ChatKonko(max_tokens=400, model=\"meta-llama/Llama-2-13b-chat-hf\")"
|
||||
"chat = ChatKonko(max_tokens=400, model=\"meta-llama/llama-2-13b-chat\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"execution_count": 3,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"AIMessage(content=\" Sure, I'd be happy to explain the Big Bang Theory briefly!\\n\\nThe Big Bang Theory is the leading explanation for the origin and evolution of the universe, based on a vast amount of observational evidence from many fields of science. In essence, the theory posits that the universe began as an infinitely hot and dense point, known as a singularity, around 13.8 billion years ago. This singularity expanded rapidly, and as it did, it cooled and formed subatomic particles, which eventually coalesced into the first atoms, and later into the stars and galaxies we see today.\\n\\nThe theory gets its name from the idea that the universe began in a state of incredibly high energy and temperature, and has been expanding and cooling ever since. This expansion is thought to have been driven by a mysterious force known as dark energy, which is thought to be responsible for the accelerating expansion of the universe.\\n\\nOne of the key predictions of the Big Bang Theory is that the universe should be homogeneous and isotropic on large scales, meaning that it should look the same in all directions and have the same properties everywhere. This prediction has been confirmed by a wealth of observational evidence, including the cosmic microwave background radiation, which is thought to be a remnant of the early universe.\\n\\nOverall, the Big Bang Theory is a well-established and widely accepted explanation for the origins of the universe, and it has been supported by a vast amount of observational evidence from many fields of science.\", additional_kwargs={}, example=False)"
|
||||
"AIMessage(content=\" Sure thing! The Big Bang Theory is a scientific theory that explains the origins of the universe. In short, it suggests that the universe began as an infinitely hot and dense point around 13.8 billion years ago and expanded rapidly. This expansion continues to this day, and it's what makes the universe look the way it does.\\n\\nHere's a brief overview of the key points:\\n\\n1. The universe started as a singularity, a point of infinite density and temperature.\\n2. The singularity expanded rapidly, causing the universe to cool and expand.\\n3. As the universe expanded, particles began to form, including protons, neutrons, and electrons.\\n4. These particles eventually came together to form atoms, and later, stars and galaxies.\\n5. The universe is still expanding today, and the rate of this expansion is accelerating.\\n\\nThat's the Big Bang Theory in a nutshell! It's a pretty mind-blowing idea when you think about it, and it's supported by a lot of scientific evidence. Do you have any other questions about it?\")"
|
||||
]
|
||||
},
|
||||
"execution_count": 7,
|
||||
"execution_count": 3,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
@@ -125,13 +137,6 @@
|
||||
"]\n",
|
||||
"chat(messages)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
|
||||
218
docs/docs/integrations/chat/litellm_router.ipynb
Normal file
@@ -0,0 +1,218 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "raw",
|
||||
"id": "59148044",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"---\n",
|
||||
"sidebar_label: LiteLLM Router\n",
|
||||
"---"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "247da7a6",
|
||||
"metadata": {},
|
||||
"source": []
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"id": "bf733a38-db84-4363-89e2-de6735c37230",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# ChatLiteLLMRouter\n",
|
||||
"\n",
|
||||
"[LiteLLM](https://github.com/BerriAI/litellm) is a library that simplifies calling Anthropic, Azure, Huggingface, Replicate, etc. \n",
|
||||
"\n",
|
||||
"This notebook covers how to get started with using Langchain + the LiteLLM Router I/O library. "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"id": "d4a7c55d-b235-4ca4-a579-c90cc9570da9",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.schema import HumanMessage\n",
|
||||
"from langchain_community.chat_models import ChatLiteLLMRouter\n",
|
||||
"from litellm import Router"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"id": "70cf04e8-423a-4ff6-8b09-f11fb711c817",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"model_list = [\n",
|
||||
" {\n",
|
||||
" \"model_name\": \"gpt-4\",\n",
|
||||
" \"litellm_params\": {\n",
|
||||
" \"model\": \"azure/gpt-4-1106-preview\",\n",
|
||||
" \"api_key\": \"<your-api-key>\",\n",
|
||||
" \"api_version\": \"2023-05-15\",\n",
|
||||
" \"api_base\": \"https://<your-endpoint>.openai.azure.com/\",\n",
|
||||
" },\n",
|
||||
" },\n",
|
||||
" {\n",
|
||||
" \"model_name\": \"gpt-4\",\n",
|
||||
" \"litellm_params\": {\n",
|
||||
" \"model\": \"azure/gpt-4-1106-preview\",\n",
|
||||
" \"api_key\": \"<your-api-key>\",\n",
|
||||
" \"api_version\": \"2023-05-15\",\n",
|
||||
" \"api_base\": \"https://<your-endpoint>.openai.azure.com/\",\n",
|
||||
" },\n",
|
||||
" },\n",
|
||||
"]\n",
|
||||
"litellm_router = Router(model_list=model_list)\n",
|
||||
"chat = ChatLiteLLMRouter(router=litellm_router)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"id": "8199ef8f-eb8b-4253-9ea0-6c24a013ca4c",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"AIMessage(content=\"J'aime programmer.\")"
|
||||
]
|
||||
},
|
||||
"execution_count": 3,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"messages = [\n",
|
||||
" HumanMessage(\n",
|
||||
" content=\"Translate this sentence from English to French. I love programming.\"\n",
|
||||
" )\n",
|
||||
"]\n",
|
||||
"chat(messages)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"id": "c361ab1e-8c0c-4206-9e3c-9d1424a12b9c",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## `ChatLiteLLMRouter` also supports async and streaming functionality:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"id": "93a21c5c-6ef9-4688-be60-b2e1f94842fb",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.callbacks.manager import CallbackManager\n",
|
||||
"from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"id": "c5fac0e9-05a4-4fc1-a3b3-e5bbb24b971b",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"LLMResult(generations=[[ChatGeneration(text=\"J'adore programmer.\", generation_info={'finish_reason': 'stop'}, message=AIMessage(content=\"J'adore programmer.\"))]], llm_output={'token_usage': {'completion_tokens': 6, 'prompt_tokens': 19, 'total_tokens': 25}, 'model_name': None}, run=[RunInfo(run_id=UUID('75003ec9-1e2b-43b7-a216-10dcc0f75e00'))])"
|
||||
]
|
||||
},
|
||||
"execution_count": 5,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"await chat.agenerate([messages])"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"id": "025be980-e50d-4a68-93dc-c9c7b500ce34",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"J'adore programmer."
|
||||
]
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"AIMessage(content=\"J'adore programmer.\")"
|
||||
]
|
||||
},
|
||||
"execution_count": 6,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"chat = ChatLiteLLMRouter(\n",
|
||||
" router=litellm_router,\n",
|
||||
" streaming=True,\n",
|
||||
" verbose=True,\n",
|
||||
" callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),\n",
|
||||
")\n",
|
||||
"chat(messages)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "c253883f",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.9.13"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
||||
@@ -19,7 +19,19 @@
|
||||
"\n",
|
||||
"This notebook covers how to get started with MistralAI chat models, via their [API](https://docs.mistral.ai/api/).\n",
|
||||
"\n",
|
||||
"A valid [API key](https://console.mistral.ai/users/api-keys/) is needed to communicate with the API."
|
||||
"A valid [API key](https://console.mistral.ai/users/api-keys/) is needed to communicate with the API.\n",
|
||||
"\n",
|
||||
"You will need the `langchain-mistralai` package to use the API. You can install it via pip:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "eb978a7e",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%pip install -qU langchain-core langchain-mistralai"
|
||||
]
|
||||
},
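Once installed, a minimal call might look like this (a sketch; the model name and the `MISTRAL_API_KEY` environment variable are assumptions to verify against the `langchain-mistralai` docs):

```python
from langchain_mistralai.chat_models import ChatMistralAI

# Assumes MISTRAL_API_KEY is set in the environment.
chat = ChatMistralAI(model="mistral-small")
chat.invoke("Say hello in French.")
```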
|
||||
{
|
||||
|
||||
File diff suppressed because one or more lines are too long
99
docs/docs/integrations/chat/sparkllm.ipynb
Normal file
@@ -0,0 +1,99 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "3ddface67cd10a87",
|
||||
"metadata": {
|
||||
"collapsed": false
|
||||
},
|
||||
"source": [
|
||||
"# SparkLLM Chat\n",
|
||||
"\n",
|
||||
"SparkLLM chat models API by iFlyTek. For more information, see [iFlyTek Open Platform](https://www.xfyun.cn/)."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Basic use"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "43daa39972d4c533",
|
||||
"metadata": {
|
||||
"collapsed": false,
|
||||
"is_executing": true
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"\"\"\"For basic init and call\"\"\"\n",
|
||||
"from langchain.chat_models import ChatSparkLLM\n",
|
||||
"from langchain.schema import HumanMessage\n",
|
||||
"\n",
|
||||
"chat = ChatSparkLLM(\n",
|
||||
" spark_app_id=\"<app_id>\", spark_api_key=\"<api_key>\", spark_api_secret=\"<api_secret>\"\n",
|
||||
")\n",
|
||||
"message = HumanMessage(content=\"Hello\")\n",
|
||||
"chat([message])"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "df755f4c5689510",
|
||||
"metadata": {
|
||||
"collapsed": false
|
||||
},
|
||||
"source": [
|
||||
"- Get SparkLLM's app_id, api_key and api_secret from [iFlyTek SparkLLM API Console](https://console.xfyun.cn/services/bm3) (for more info, see [iFlyTek SparkLLM Intro](https://xinghuo.xfyun.cn/sparkapi) ), then set environment variables `IFLYTEK_SPARK_APP_ID`, `IFLYTEK_SPARK_API_KEY` and `IFLYTEK_SPARK_API_SECRET` or pass parameters when creating `ChatSparkLLM` as the demo above."
|
||||
]
|
||||
},
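For the environment-variable route, a minimal sketch (values are placeholders; the variable names come from the note above):

```python
import os

# Set these instead of passing spark_app_id / spark_api_key / spark_api_secret directly.
os.environ["IFLYTEK_SPARK_APP_ID"] = "<app_id>"
os.environ["IFLYTEK_SPARK_API_KEY"] = "<api_key>"
os.environ["IFLYTEK_SPARK_API_SECRET"] = "<api_secret>"
```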
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "984e32ee47bc6772",
|
||||
"metadata": {
|
||||
"collapsed": false
|
||||
},
|
||||
"source": [
|
||||
"## For ChatSparkLLM with Streaming"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "7dc162bd65fec08f",
|
||||
"metadata": {
|
||||
"collapsed": false
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"chat = ChatSparkLLM(streaming=True)\n",
|
||||
"for chunk in chat.stream(\"Hello!\"):\n",
|
||||
" print(chunk.content, end=\"\")"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 2
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython2",
|
||||
"version": "2.7.6"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
||||
@@ -12,7 +12,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 2,
"id": "497736aa",
"metadata": {},
"outputs": [],
@@ -24,7 +24,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 3,
"id": "009e0036",
"metadata": {},
"outputs": [],
@@ -34,19 +34,19 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 8,
"id": "910fb6ee",
"metadata": {},
"outputs": [],
"source": [
"loader = ToMarkdownLoader.from_api_key(\n",
" url=\"https://python.langchain.com/en/latest/\", api_key=api_key\n",
"loader = ToMarkdownLoader(\n",
" url=\"https://python.langchain.com/docs/get_started/introduction\", api_key=api_key\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 9,
"id": "ac8db139",
"metadata": {},
"outputs": [],
@@ -56,7 +56,7 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 10,
"id": "706304e9",
"metadata": {},
"outputs": [
@@ -64,130 +64,106 @@
"name": "stdout",
"output_type": "stream",
"text": [
"## Contents\n",
"**LangChain** is a framework for developing applications powered by language models. It enables applications that:\n",
"\n",
"- [Getting Started](#getting-started)\n",
"- [Modules](#modules)\n",
"- [Use Cases](#use-cases)\n",
"- [Reference Docs](#reference-docs)\n",
"- [LangChain Ecosystem](#langchain-ecosystem)\n",
"- [Additional Resources](#additional-resources)\n",
"- **Are context-aware**: connect a language model to sources of context (prompt instructions, few shot examples, content to ground its response in, etc.)\n",
"- **Reason**: rely on a language model to reason (about how to answer based on provided context, what actions to take, etc.)\n",
"\n",
"## Welcome to LangChain [\\#](\\#welcome-to-langchain \"Permalink to this headline\")\n",
"This framework consists of several parts.\n",
"\n",
"**LangChain** is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only call out to a language model, but will also be:\n",
"- **LangChain Libraries**: The Python and JavaScript libraries. Contains interfaces and integrations for a myriad of components, a basic run time for combining these components into chains and agents, and off-the-shelf implementations of chains and agents.\n",
"- **[LangChain Templates](https://python.langchain.com/docs/templates)**: A collection of easily deployable reference architectures for a wide variety of tasks.\n",
"- **[LangServe](https://python.langchain.com/docs/langserve)**: A library for deploying LangChain chains as a REST API.\n",
"- **[LangSmith](https://python.langchain.com/docs/langsmith)**: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain.\n",
"\n",
"1. _Data-aware_: connect a language model to other sources of data\n",
"\n",
"\n",
"2. _Agentic_: allow a language model to interact with its environment\n",
"Together, these products simplify the entire application lifecycle:\n",
"\n",
"- **Develop**: Write your applications in LangChain/LangChain.js. Hit the ground running using Templates for reference.\n",
"- **Productionize**: Use LangSmith to inspect, test and monitor your chains, so that you can constantly improve and deploy with confidence.\n",
"- **Deploy**: Turn any chain into an API with LangServe.\n",
"\n",
"The LangChain framework is designed around these principles.\n",
"## LangChain Libraries [](\\#langchain-libraries \"Direct link to LangChain Libraries\")\n",
"\n",
"This is the Python specific portion of the documentation. For a purely conceptual guide to LangChain, see [here](https://docs.langchain.com/docs/). For the JavaScript documentation, see [here](https://js.langchain.com/docs/).\n",
"The main value props of the LangChain packages are:\n",
"\n",
"## Getting Started [\\#](\\#getting-started \"Permalink to this headline\")\n",
"1. **Components**: composable tools and integrations for working with language models. Components are modular and easy-to-use, whether you are using the rest of the LangChain framework or not\n",
"2. **Off-the-shelf chains**: built-in assemblages of components for accomplishing higher-level tasks\n",
"\n",
"How to get started using LangChain to create an Language Model application.\n",
"Off-the-shelf chains make it easy to get started. Components make it easy to customize existing chains and build new ones.\n",
"\n",
"- [Quickstart Guide](https://python.langchain.com/en/latest/getting_started/getting_started.html)\n",
"The LangChain libraries themselves are made up of several different packages.\n",
"\n",
"- **`langchain-core`**: Base abstractions and LangChain Expression Language.\n",
"- **`langchain-community`**: Third party integrations.\n",
"- **`langchain`**: Chains, agents, and retrieval strategies that make up an application's cognitive architecture.\n",
"\n",
"Concepts and terminology.\n",
"## Get started [](\\#get-started \"Direct link to Get started\")\n",
"\n",
"- [Concepts and terminology](https://python.langchain.com/en/latest/getting_started/concepts.html)\n",
"[Here’s](https://python.langchain.com/docs/get_started/installation) how to install LangChain, set up your environment, and start building.\n",
"\n",
"We recommend following our [Quickstart](https://python.langchain.com/docs/get_started/quickstart) guide to familiarize yourself with the framework by building your first LangChain application.\n",
"\n",
"Tutorials created by community experts and presented on YouTube.\n",
"Read up on our [Security](https://python.langchain.com/docs/security) best practices to make sure you're developing safely with LangChain.\n",
"\n",
"- [Tutorials](https://python.langchain.com/en/latest/getting_started/tutorials.html)\n",
"note\n",
"\n",
"These docs focus on the Python LangChain library. [Head here](https://js.langchain.com) for docs on the JavaScript LangChain library.\n",
"\n",
"## Modules [\\#](\\#modules \"Permalink to this headline\")\n",
"## LangChain Expression Language (LCEL) [](\\#langchain-expression-language-lcel \"Direct link to LangChain Expression Language (LCEL)\")\n",
"\n",
"These modules are the core abstractions which we view as the building blocks of any LLM-powered application.\n",
"LCEL is a declarative way to compose chains. LCEL was designed from day 1 to support putting prototypes in production, with no code changes, from the simplest “prompt + LLM” chain to the most complex chains.\n",
"\n",
"For each module LangChain provides standard, extendable interfaces. LanghChain also provides external integrations and even end-to-end implementations for off-the-shelf use.\n",
"- **[Overview](https://python.langchain.com/docs/expression_language/)**: LCEL and its benefits\n",
"- **[Interface](https://python.langchain.com/docs/expression_language/interface)**: The standard interface for LCEL objects\n",
"- **[How-to](https://python.langchain.com/docs/expression_language/how_to)**: Key features of LCEL\n",
"- **[Cookbook](https://python.langchain.com/docs/expression_language/cookbook)**: Example code for accomplishing common tasks\n",
"\n",
"The docs for each module contain quickstart examples, how-to guides, reference docs, and conceptual guides.\n",
"## Modules [](\\#modules \"Direct link to Modules\")\n",
"\n",
"The modules are (from least to most complex):\n",
"LangChain provides standard, extendable interfaces and integrations for the following modules:\n",
"\n",
"- [Models](https://python.langchain.com/docs/modules/model_io/models/): Supported model types and integrations.\n",
"#### [Model I/O](https://python.langchain.com/docs/modules/model_io/) [](\\#model-io \"Direct link to model-io\")\n",
"\n",
"- [Prompts](https://python.langchain.com/en/latest/modules/prompts.html): Prompt management, optimization, and serialization.\n",
"Interface with language models\n",
"\n",
"- [Memory](https://python.langchain.com/en/latest/modules/memory.html): Memory refers to state that is persisted between calls of a chain/agent.\n",
"#### [Retrieval](https://python.langchain.com/docs/modules/data_connection/) [](\\#retrieval \"Direct link to retrieval\")\n",
"\n",
"- [Indexes](https://python.langchain.com/en/latest/modules/data_connection.html): Language models become much more powerful when combined with application-specific data - this module contains interfaces and integrations for loading, querying and updating external data.\n",
"Interface with application-specific data\n",
"\n",
"- [Chains](https://python.langchain.com/en/latest/modules/chains.html): Chains are structured sequences of calls (to an LLM or to a different utility).\n",
"#### [Agents](https://python.langchain.com/docs/modules/agents/) [](\\#agents \"Direct link to agents\")\n",
"\n",
"- [Agents](https://python.langchain.com/en/latest/modules/agents.html): An agent is a Chain in which an LLM, given a high-level directive and a set of tools, repeatedly decides an action, executes the action and observes the outcome until the high-level directive is complete.\n",
"Let models choose which tools to use given high-level directives\n",
"\n",
"- [Callbacks](https://python.langchain.com/en/latest/modules/callbacks/getting_started.html): Callbacks let you log and stream the intermediate steps of any chain, making it easy to observe, debug, and evaluate the internals of an application.\n",
"## Examples, ecosystem, and resources [](\\#examples-ecosystem-and-resources \"Direct link to Examples, ecosystem, and resources\")\n",
"\n",
"### [Use cases](https://python.langchain.com/docs/use_cases/question_answering/) [](\\#use-cases \"Direct link to use-cases\")\n",
"\n",
"## Use Cases [\\#](\\#use-cases \"Permalink to this headline\")\n",
"Walkthroughs and techniques for common end-to-end use cases, like:\n",
"\n",
"Best practices and built-in implementations for common LangChain use cases:\n",
"- [Document question answering](https://python.langchain.com/docs/use_cases/question_answering/)\n",
"- [Chatbots](https://python.langchain.com/docs/use_cases/chatbots/)\n",
"- [Analyzing structured data](https://python.langchain.com/docs/use_cases/sql/)\n",
"- and much more...\n",
"\n",
"- [Autonomous Agents](https://python.langchain.com/en/latest/use_cases/autonomous_agents.html): Autonomous agents are long-running agents that take many steps in an attempt to accomplish an objective. Examples include AutoGPT and BabyAGI.\n",
"### [Integrations](https://python.langchain.com/docs/integrations/providers/) [](\\#integrations \"Direct link to integrations\")\n",
"\n",
"- [Agent Simulations](https://python.langchain.com/en/latest/use_cases/agent_simulations.html): Putting agents in a sandbox and observing how they interact with each other and react to events can be an effective way to evaluate their long-range reasoning and planning abilities.\n",
"LangChain is part of a rich ecosystem of tools that integrate with our framework and build on top of it. Check out our growing list of [integrations](https://python.langchain.com/docs/integrations/providers/).\n",
"\n",
"- [Personal Assistants](https://python.langchain.com/en/latest/use_cases/personal_assistants.html): One of the primary LangChain use cases. Personal assistants need to take actions, remember interactions, and have knowledge about your data.\n",
"### [Guides](https://python.langchain.com/docs/guides/debugging) [](\\#guides \"Direct link to guides\")\n",
"\n",
"- [Question Answering](https://python.langchain.com/en/latest/use_cases/question_answering.html): Another common LangChain use case. Answering questions over specific documents, only utilizing the information in those documents to construct an answer.\n",
"Best practices for developing with LangChain.\n",
"\n",
"- [Chatbots](https://python.langchain.com/en/latest/use_cases/chatbots.html): Language models love to chat, making this a very natural use of them.\n",
"### [API reference](https://api.python.langchain.com) [](\\#api-reference \"Direct link to api-reference\")\n",
"\n",
"- [Querying Tabular Data](https://python.langchain.com/en/latest/use_cases/tabular.html): Recommended reading if you want to use language models to query structured data (CSVs, SQL, dataframes, etc).\n",
"Head to the reference section for full documentation of all classes and methods in the LangChain and LangChain Experimental Python packages.\n",
"\n",
"- [Code Understanding](https://python.langchain.com/en/latest/use_cases/code.html): Recommended reading if you want to use language models to analyze code.\n",
"### [Developer's guide](https://python.langchain.com/docs/contributing) [](\\#developers-guide \"Direct link to developers-guide\")\n",
"\n",
"- [Interacting with APIs](https://python.langchain.com/en/latest/use_cases/apis.html): Enabling language models to interact with APIs is extremely powerful. It gives them access to up-to-date information and allows them to take actions.\n",
"Check out the developer's guide for guidelines on contributing and help getting your dev environment set up.\n",
"\n",
"- [Extraction](https://python.langchain.com/en/latest/use_cases/extraction.html): Extract structured information from text.\n",
"\n",
"- [Summarization](https://python.langchain.com/en/latest/use_cases/summarization.html): Compressing longer documents. A type of Data-Augmented Generation.\n",
"\n",
"- [Evaluation](https://python.langchain.com/en/latest/use_cases/evaluation.html): Generative models are hard to evaluate with traditional metrics. One promising approach is to use language models themselves to do the evaluation.\n",
"\n",
"\n",
"## Reference Docs [\\#](\\#reference-docs \"Permalink to this headline\")\n",
"\n",
"Full documentation on all methods, classes, installation methods, and integration setups for LangChain.\n",
"\n",
"- [Reference Documentation](https://python.langchain.com/en/latest/reference.html)\n",
"\n",
"\n",
"## LangChain Ecosystem [\\#](\\#langchain-ecosystem \"Permalink to this headline\")\n",
"\n",
"Guides for how other companies/products can be used with LangChain.\n",
"\n",
"- [LangChain Ecosystem](https://python.langchain.com/en/latest/ecosystem.html)\n",
"\n",
"\n",
"## Additional Resources [\\#](\\#additional-resources \"Permalink to this headline\")\n",
"\n",
"Additional resources we think may be useful as you develop your application!\n",
"\n",
"- [LangChainHub](https://github.com/hwchase17/langchain-hub): The LangChainHub is a place to share and explore other prompts, chains, and agents.\n",
"\n",
"- [Gallery](https://python.langchain.com/en/latest/additional_resources/gallery.html): A collection of our favorite projects that use LangChain. Useful for finding inspiration or seeing how things were done in other applications.\n",
"\n",
"- [Deployments](https://python.langchain.com/en/latest/additional_resources/deployments.html): A collection of instructions, code snippets, and template repositories for deploying LangChain apps.\n",
"\n",
"- [Tracing](https://python.langchain.com/en/latest/additional_resources/tracing.html): A guide on using tracing in LangChain to visualize the execution of chains and agents.\n",
"\n",
"- [Model Laboratory](https://python.langchain.com/en/latest/additional_resources/model_laboratory.html): Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so.\n",
"\n",
"- [Discord](https://discord.gg/6adMQxSpJS): Join us on our Discord to discuss all things LangChain!\n",
"\n",
"- [YouTube](https://python.langchain.com/en/latest/additional_resources/youtube.html): A collection of the LangChain tutorials and videos.\n",
"\n",
"- [Production Support](https://forms.gle/57d8AmXBYp8PP8tZA): As you move your LangChains into production, we’d love to offer more comprehensive support. Please fill out this form and we’ll set up a dedicated support Slack channel.\n"
"Head to the [Community navigator](https://python.langchain.com/docs/community) to find places to ask questions, share feedback, meet other developers, and dream about the future of LLM’s.\n"
]
}
],
@@ -198,7 +174,7 @@
{
"cell_type": "code",
"execution_count": null,
"id": "5dde17e7",
"id": "7c89b313-adb6-4aa2-9cd8-952a5724a2ce",
"metadata": {},
"outputs": [],
"source": []
@@ -220,7 +196,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
"version": "3.11.6"
}
},
"nbformat": 4,
@@ -12,7 +12,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"id": "2886982e",
"metadata": {},
"outputs": [],
@@ -100,6 +100,54 @@
"docs[0].page_content[:400]"
]
},
{
"cell_type": "markdown",
"id": "b4ab0a79",
"metadata": {},
"source": [
"### Load list of files"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "092d9a0b",
"metadata": {},
"outputs": [],
"source": [
"files = [\"./example_data/whatsapp_chat.txt\", \"./example_data/layout-parser-paper.pdf\"]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f841c4f8",
"metadata": {},
"outputs": [],
"source": [
"loader = UnstructuredFileLoader(files)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "993c240b",
"metadata": {},
"outputs": [],
"source": [
"docs = loader.load()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5ce4ff07",
"metadata": {},
"outputs": [],
"source": [
"docs[0].page_content[:400]"
]
},
{
"cell_type": "markdown",
"id": "7874d01d",
@@ -495,7 +543,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
"version": "3.9.0"
}
},
"nbformat": 4,
@@ -27,17 +27,17 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 1,
"id": "0cb0f937-b610-42a2-b765-336eed037031",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdin",
"name": "stdout",
"output_type": "stream",
"text": [
" ········\n"
"········\n"
]
}
],
@@ -51,21 +51,20 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 2,
"id": "6fb585dd",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.chains import LLMChain\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain_community.llms import AlephAlpha"
]
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 3,
"id": "f81a230d",
"metadata": {
"tags": []
@@ -81,7 +80,7 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 4,
"id": "f0d26e48",
"metadata": {
"tags": []
@@ -98,19 +97,19 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 5,
"id": "6811d621",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"llm_chain = LLMChain(prompt=prompt, llm=llm)"
"llm_chain = prompt | llm"
]
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 8,
"id": "3058e63f",
"metadata": {
"tags": []
@@ -119,10 +118,10 @@
{
"data": {
"text/plain": [
"' Artificial Intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems.\\n'"
"' Artificial Intelligence is the simulation of human intelligence processes by machines.\\n\\n'"
]
},
"execution_count": 10,
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
@@ -130,8 +129,16 @@
"source": [
"question = \"What is AI?\"\n",
"\n",
"llm_chain.run(question)"
"llm_chain.invoke({\"question\": question})"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a3544eff",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@@ -150,7 +157,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
"version": "3.9.12"
},
"vscode": {
"interpreter": {
@@ -6,9 +6,9 @@
"source": [
"# Azure ML\n",
"\n",
"[Azure ML](https://azure.microsoft.com/en-us/products/machine-learning/) is a platform used to build, train, and deploy machine learning models. Users can explore the types of models to deploy in the Model Catalog, which provides Azure Foundation Models and OpenAI Models. Azure Foundation Models include various open-source models and popular Hugging Face models. Users can also import models of their liking into AzureML.\n",
"[Azure ML](https://azure.microsoft.com/en-us/products/machine-learning/) is a platform used to build, train, and deploy machine learning models. Users can explore the types of models to deploy in the Model Catalog, which provides foundational and general purpose models from different providers.\n",
"\n",
"This notebook goes over how to use an LLM hosted on an `AzureML online endpoint`"
"This notebook goes over how to use an LLM hosted on an `Azure ML Online Endpoint`."
]
},
{
@@ -26,11 +26,12 @@
"source": [
"## Set up\n",
"\n",
"To use the wrapper, you must [deploy a model on AzureML](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-use-foundation-models?view=azureml-api-2#deploying-foundation-models-to-endpoints-for-inferencing) and obtain the following parameters:\n",
"You must [deploy a model on Azure ML](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-use-foundation-models?view=azureml-api-2#deploying-foundation-models-to-endpoints-for-inferencing) or [to Azure AI studio](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-open) and obtain the following parameters:\n",
"\n",
"* `endpoint_api_key`: Required - The API key provided by the endpoint\n",
"* `endpoint_url`: Required - The REST endpoint url provided by the endpoint\n",
"* `deployment_name`: Not required - The deployment name of the model using the endpoint"
"* `endpoint_url`: The REST endpoint url provided by the endpoint.\n",
"* `endpoint_api_type`: Use `endpoint_type='realtime'` when deploying models to **Realtime endpoints** (hosted managed infrastructure). Use `endpoint_type='serverless'` when deploying models using the **Pay-as-you-go** offering (model as a service).\n",
"* `endpoint_api_key`: The API key provided by the endpoint.\n",
"* `deployment_name`: (Optional) The deployment name of the model using the endpoint."
]
},
{
@@ -46,31 +47,107 @@
"* `HFContentFormatter`: Formats request and response data for text-generation Hugging Face models\n",
"* `LLamaContentFormatter`: Formats request and response data for LLaMa2\n",
"\n",
"*Note: `OSSContentFormatter` is being deprecated and replaced with `GPT2ContentFormatter`. The logic is the same but `GPT2ContentFormatter` is a more suitable name. You can still continue to use `OSSContentFormatter` as the changes are backwards compatible.*\n",
"\n",
"Below is an example using a summarization model from Hugging Face."
"*Note: `OSSContentFormatter` is being deprecated and replaced with `GPT2ContentFormatter`. The logic is the same but `GPT2ContentFormatter` is a more suitable name. You can still continue to use `OSSContentFormatter` as the changes are backwards compatible.*"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Custom Content Formatter"
"## Examples"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Example: LlaMa 2 completions with real-time endpoints"
]
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"HaSeul won her first music show trophy with \"So What\" on Mnet's M Countdown. Loona released their second EP titled [#] (read as hash] on February 5, 2020. HaSeul did not take part in the promotion of the album because of mental health issues. On October 19, 2020, they released their third EP called [12:00]. It was their first album to enter the Billboard 200, debuting at number 112. On June 2, 2021, the group released their fourth EP called Yummy-Yummy. On August 27, it was announced that they are making their Japanese debut on September 15 under Universal Music Japan sublabel EMI Records.\n"
]
}
],
"outputs": [],
"source": [
"from langchain.schema import HumanMessage\n",
"from langchain_community.llms.azureml_endpoint import (\n",
" AzureMLEndpointApiType,\n",
" LlamaContentFormatter,\n",
")\n",
"\n",
"llm = AzureMLOnlineEndpoint(\n",
" endpoint_url=\"https://<your-endpoint>.<your_region>.inference.ml.azure.com/score\",\n",
" endpoint_api_type=AzureMLEndpointApiType.realtime,\n",
" endpoint_api_key=\"my-api-key\",\n",
" content_formatter=LlamaContentFormatter(),\n",
" model_kwargs={\"temperature\": 0.8, \"max_new_tokens\": 400},\n",
")\n",
"response = llm.invoke(\"Write me a song about sparkling water:\")\n",
"response"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Model parameters can also be indicated during invocation:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"response = llm.invoke(\"Write me a song about sparkling water:\", temperature=0.5)\n",
"response"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Example: Chat completions with pay-as-you-go deployments (model as a service)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.schema import HumanMessage\n",
"from langchain_community.llms.azureml_endpoint import (\n",
" AzureMLEndpointApiType,\n",
" LlamaContentFormatter,\n",
")\n",
"\n",
"llm = AzureMLOnlineEndpoint(\n",
" endpoint_url=\"https://<your-endpoint>.<your_region>.inference.ml.azure.com/v1/completions\",\n",
" endpoint_api_type=AzureMLEndpointApiType.serverless,\n",
" endpoint_api_key=\"my-api-key\",\n",
" content_formatter=LlamaContentFormatter(),\n",
" model_kwargs={\"temperature\": 0.8, \"max_new_tokens\": 400},\n",
")\n",
"response = llm.invoke(\"Write me a song about sparkling water:\")\n",
"response"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Example: Custom content formatter\n",
"\n",
"Below is an example using a summarization model from Hugging Face."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"import os\n",
@@ -104,6 +181,7 @@
"content_formatter = CustomFormatter()\n",
"\n",
"llm = AzureMLOnlineEndpoint(\n",
" endpoint_api_type=\"realtime\",\n",
" endpoint_api_key=os.getenv(\"BART_ENDPOINT_API_KEY\"),\n",
" endpoint_url=os.getenv(\"BART_ENDPOINT_URL\"),\n",
" model_kwargs={\"temperature\": 0.8, \"max_new_tokens\": 400},\n",
@@ -132,7 +210,7 @@
"that Loona will release the double A-side single, \"Hula Hoop / Star Seed\" on September 15, with a physical CD release on October \n",
"20.[53] In December, Chuu filed an injunction to suspend her exclusive contract with Blockberry Creative.[54][55]\n",
"\"\"\"\n",
"summarized_text = llm(large_text)\n",
"summarized_text = llm.invoke(large_text)\n",
"print(summarized_text)"
]
},
@@ -140,22 +218,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Dolly with LLMChain"
"### Example: Dolly with LLMChain"
]
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Many people are willing to talk about themselves; it's others who seem to be stuck up. Try to understand others where they're coming from. Like minded people can build a tribe together.\n"
]
}
],
"outputs": [],
"source": [
"from langchain.chains import LLMChain\n",
"from langchain.prompts import PromptTemplate\n",
@@ -177,31 +247,22 @@
")\n",
"\n",
"chain = LLMChain(llm=llm, prompt=prompt)\n",
"print(chain.run({\"word_count\": 100, \"topic\": \"how to make friends\"}))"
"print(chain.invoke({\"word_count\": 100, \"topic\": \"how to make friends\"}))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Serializing an LLM\n",
"## Serializing an LLM\n",
"You can also save and load LLM configurations"
]
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[1mAzureMLOnlineEndpoint\u001b[0m\n",
"Params: {'deployment_name': 'databricks-dolly-v2-12b-4', 'model_kwargs': {'temperature': 0.2, 'max_tokens': 150, 'top_p': 0.8, 'frequency_penalty': 0.32, 'presence_penalty': 0.072}}\n"
]
}
],
"outputs": [],
"source": [
"from langchain_community.llms.loading import load_llm\n",
"\n",
@@ -224,9 +285,9 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": "langchain",
"language": "python",
"name": "python3"
"name": "langchain"
},
"language_info": {
"codemirror_mode": {
@@ -238,7 +299,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.11.5"
}
},
"nbformat": 4,
97
docs/docs/integrations/llms/baichuan.ipynb
Normal file
@@ -0,0 +1,97 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Baichuan LLM\n",
"Baichuan Inc. (https://www.baichuan-ai.com/) is a Chinese startup in the era of AGI, dedicated to addressing fundamental human needs: Efficiency, Health, and Happiness."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prerequisite\n",
"An API key is required to access the Baichuan LLM API. Visit https://platform.baichuan-ai.com/ to get your API key."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Use Baichuan LLM"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"BAICHUAN_API_KEY\"] = \"YOUR_API_KEY\""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.llms import BaichuanLLM\n",
"\n",
"# Load the model\n",
"llm = BaichuanLLM()\n",
"\n",
"res = llm(\"What's your name?\")\n",
"print(res)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"res = llm.generate(prompts=[\"你好!\"])\n",
"res"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"for res in llm.stream(\"Who won the second world war?\"):\n",
" print(res)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import asyncio\n",
"\n",
"\n",
"async def run_aio_stream():\n",
" async for res in llm.astream(\"Write a poem about the sun.\"):\n",
" print(res)\n",
"\n",
"\n",
"asyncio.run(run_aio_stream())"
]
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
@@ -106,6 +106,76 @@
"\n",
"conversation.predict(input=\"Hi there!\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Custom models"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"custom_llm = Bedrock(\n",
" credentials_profile_name=\"bedrock-admin\",\n",
" provider=\"cohere\",\n",
" model_id=\"<Custom model ARN>\", # ARN like 'arn:aws:bedrock:...' obtained via provisioning the custom model\n",
" model_kwargs={\"temperature\": 1},\n",
" streaming=True,\n",
" callbacks=[StreamingStdOutCallbackHandler()],\n",
")\n",
"\n",
"conversation = ConversationChain(\n",
" llm=custom_llm, verbose=True, memory=ConversationBufferMemory()\n",
")\n",
"conversation.predict(input=\"What is the recipe of mayonnaise?\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Guardrails for Amazon Bedrock example \n",
"\n",
"## Guardrails for Amazon Bedrock (Preview) \n",
"[Guardrails for Amazon Bedrock](https://aws.amazon.com/bedrock/guardrails/) evaluates user inputs and model responses based on use case specific policies, and provides an additional layer of safeguards regardless of the underlying model. Guardrails can be applied across models, including Anthropic Claude, Meta Llama 2, Cohere Command, AI21 Labs Jurassic, and Amazon Titan Text, as well as fine-tuned models.\n",
"**Note**: Guardrails for Amazon Bedrock is currently in preview and not generally available. Reach out through your usual AWS Support contacts if you’d like access to this feature.\n",
"In this section, we are going to set up a Bedrock language model with specific guardrails that include tracing capabilities. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from typing import Any\n",
"\n",
"from langchain_core.callbacks import AsyncCallbackHandler\n",
"\n",
"\n",
"class BedrockAsyncCallbackHandler(AsyncCallbackHandler):\n",
" # Async callback handler that can be used to handle callbacks from langchain.\n",
"\n",
" async def on_llm_error(self, error: BaseException, **kwargs: Any) -> Any:\n",
" reason = kwargs.get(\"reason\")\n",
" if reason == \"GUARDRAIL_INTERVENED\":\n",
" print(f\"Guardrails: {kwargs}\")\n",
"\n",
"\n",
"# Guardrails for Amazon Bedrock with trace\n",
"llm = Bedrock(\n",
" credentials_profile_name=\"bedrock-admin\",\n",
" model_id=\"<Model_ID>\",\n",
" model_kwargs={},\n",
" guardrails={\"id\": \"<Guardrail_ID>\", \"version\": \"<Version>\", \"trace\": True},\n",
" callbacks=[BedrockAsyncCallbackHandler()],\n",
")"
]
}
],
"metadata": {
@@ -124,7 +194,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.11.7"
}
},
"nbformat": 4,
@@ -11,7 +11,102 @@
"\n",
"[ChatGLM2-6B](https://github.com/THUDM/ChatGLM2-6B) is the second-generation version of the open-source bilingual (Chinese-English) chat model ChatGLM-6B. It retains the smooth conversation flow and low deployment threshold of the first-generation model, while introducing new features like better performance, longer context and more efficient inference.\n",
"\n",
"This example goes over how to use LangChain to interact with ChatGLM2-6B Inference for text completion.\n",
"[ChatGLM3](https://github.com/THUDM/ChatGLM3) is a new generation of pre-trained dialogue models jointly released by Zhipu AI and Tsinghua KEG. ChatGLM3-6B is the open-source model in the ChatGLM3 series."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Install required dependencies\n",
"\n",
"%pip install -qU langchain langchain-community"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## ChatGLM3\n",
"\n",
"This example goes over how to use LangChain to interact with ChatGLM3-6B Inference for text completion."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import LLMChain\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain.schema.messages import AIMessage\n",
"from langchain_community.llms.chatglm3 import ChatGLM3"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"template = \"\"\"{question}\"\"\"\n",
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"endpoint_url = \"http://127.0.0.1:8000/v1/chat/completions\"\n",
"\n",
"messages = [\n",
" AIMessage(content=\"我将从美国到中国来旅游,出行前希望了解中国的城市\"),\n",
" AIMessage(content=\"欢迎问我任何问题。\"),\n",
"]\n",
"\n",
"llm = ChatGLM3(\n",
" endpoint_url=endpoint_url,\n",
" max_tokens=80000,\n",
" prefix_messages=messages,\n",
" top_p=0.9,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'北京和上海是中国两个不同的城市,它们在很多方面都有所不同。\\n\\n北京是中国的首都,也是历史悠久的城市之一。它有着丰富的历史文化遗产,如故宫、颐和园等,这些景点吸引着众多游客前来观光。北京也是一个政治、文化和教育中心,有很多政府机构和学术机构总部设在北京。\\n\\n上海则是一个现代化的城市,它是中国的经济中心之一。上海拥有许多高楼大厦和国际化的金融机构,是中国最国际化的城市之一。上海也是一个美食和购物天堂,有许多著名的餐厅和购物中心。\\n\\n北京和上海的气候也不同。北京属于温带大陆性气候,冬季寒冷干燥,夏季炎热多风;而上海属于亚热带季风气候,四季分明,春秋宜人。\\n\\n北京和上海有很多不同之处,但都是中国非常重要的城市,每个城市都有自己独特的魅力和特色。'"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
"question = \"北京和上海两座城市有什么不同?\"\n",
"\n",
"llm_chain.run(question)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## ChatGLM and ChatGLM2\n",
"\n",
"The following example shows how to use LangChain to interact with the ChatGLM2-6B Inference to complete text.\n",
"ChatGLM-6B and ChatGLM2-6B have the same API specs, so this example should work with both."
]
},
@@ -106,7 +201,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "langchain-dev",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@@ -120,9 +215,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 4
}
100
docs/docs/integrations/llms/konko.ipynb
Normal file
@@ -0,0 +1,100 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "136d9ba6-c42a-435b-9e19-77ebcc7a3145",
"metadata": {},
"source": [
"# ChatKonko\n",
"\n",
">[Konko](https://www.konko.ai/) API is a fully managed Web API designed to help application developers:\n",
"\n",
"Konko API is a fully managed API designed to help application developers:\n",
"\n",
"1. Select the right LLM(s) for their application\n",
"2. Prototype with various open-source and proprietary LLMs\n",
"3. Access Fine Tuning for open-source LLMs to get industry-leading performance at a fraction of the cost\n",
"4. Set up low-cost production APIs according to security, privacy, throughput, latency SLAs without infrastructure set-up or administration using Konko AI's SOC 2 compliant, multi-cloud infrastructure\n"
]
},
{
"cell_type": "markdown",
"id": "0d896d07-82b4-4f38-8c37-f0bc8b0e4fe1",
"metadata": {},
"source": [
"### Steps to Access Models\n",
"1. **Explore Available Models:** Start by browsing through the [available models](https://docs.konko.ai/docs/list-of-models) on Konko. Each model caters to different use cases and capabilities.\n",
"\n",
"2. **Identify Suitable Endpoints:** Determine which [endpoint](https://docs.konko.ai/docs/list-of-models#list-of-available-models) (ChatCompletion or Completion) supports your selected model.\n",
"\n",
"3. **Selecting a Model:** [Choose a model](https://docs.konko.ai/docs/list-of-models#list-of-available-models) based on its metadata and how well it fits your use case.\n",
"\n",
"4. **Prompting Guidelines:** Once a model is selected, refer to the [prompting guidelines](https://docs.konko.ai/docs/prompting) to effectively communicate with it.\n",
"\n",
"5. **Using the API:** Finally, use the appropriate Konko [API endpoint](https://docs.konko.ai/docs/quickstart-for-completion-and-chat-completion-endpoint) to call the model and receive responses.\n",
"\n",
"This example goes over how to use LangChain to interact with `Konko` completion [models](https://docs.konko.ai/docs/list-of-models#konko-hosted-models-for-completion)\n",
"\n",
"To run this notebook, you'll need a Konko API key. You can create one by signing up on [Konko](https://www.konko.ai/)."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "dd70bccb-7a65-42d0-a3f2-8116f3549da7",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"Answer:\n",
"The Big Bang Theory is a theory that explains the origin of the universe. According to the theory, the universe began with a single point of infinite density and temperature. This point is called the singularity. The singularity exploded and expanded rapidly. The expansion of the universe is still continuing.\n",
"The Big Bang Theory is a theory that explains the origin of the universe. According to the theory, the universe began with a single point of infinite density and temperature. This point is called the singularity. The singularity exploded and expanded rapidly. The expansion of the universe is still continuing.\n",
"\n",
"Question\n"
]
}
],
"source": [
"from langchain.llms import Konko\n",
"\n",
"llm = Konko(model=\"mistralai/mistral-7b-v0.1\", temperature=0.1, max_tokens=128)\n",
"\n",
"input_ = \"\"\"You are a helpful assistant. Explain Big Bang Theory briefly.\"\"\"\n",
"print(llm(input_))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "78148bf7-2211-40b4-93a7-e90139ab1169",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -144,24 +144,40 @@
"git clone --recursive -j8 https://github.com/abetlen/llama-cpp-python.git\n",
"```\n",
"\n",
"2. Open up command Prompt (or anaconda prompt if you have it installed), set up environment variables to install. Follow this if you do not have a GPU, you must set both of the following variables.\n",
"2. Open up a command Prompt and set the following environment variables.\n",
"\n",
"\n",
"```\n",
"set FORCE_CMAKE=1\n",
"set CMAKE_ARGS=-DLLAMA_CUBLAS=OFF\n",
"```\n",
"You can ignore the second environment variable if you have an NVIDIA GPU.\n",
"If you have an NVIDIA GPU, make sure `DLLAMA_CUBLAS` is set to `ON`.\n",
"\n",
"#### Compiling and installing\n",
"\n",
"In the same command prompt (anaconda prompt) you set the variables, you can `cd` into `llama-cpp-python` directory and run the following commands.\n",
"Now you can `cd` into the `llama-cpp-python` directory and install the package:\n",
"\n",
"```\n",
"python setup.py clean\n",
"python setup.py install\n",
"python -m pip install -e .\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**IMPORTANT**: If you have already installed a CPU-only version of the package, you need to reinstall it from scratch. Consider the following command:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!python -m pip install -e . --force-reinstall --no-cache-dir"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -186,7 +202,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 1,
"metadata": {
"tags": []
},
@@ -223,7 +239,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 3,
"metadata": {
"tags": []
},
@@ -406,7 +422,7 @@
"- `n_gpu_layers` - determines how many layers of the model are offloaded to your GPU.\n",
"- `n_batch` - how many tokens are processed in parallel. \n",
"\n",
"Setting these parameters correctly will dramatically improve the evaluation speed (see [wrapper code](https://github.com/mmagnesium/langchain/blob/master/langchain/llms/llamacpp.py) for more details)."
"Setting these parameters correctly will dramatically improve the evaluation speed (see [wrapper code](https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/llamacpp.py) for more details)."
]
},
{
@@ -415,7 +431,7 @@
"metadata": {},
"outputs": [],
"source": [
"n_gpu_layers = 40 # Change this value based on your model and your GPU VRAM pool.\n",
"n_gpu_layers = -1 # The number of layers to put on the GPU. The rest will be on the CPU. If you don't know how many layers there are, you can use -1 to move all to GPU.\n",
"n_batch = 512 # Should be between 1 and n_ctx, consider the amount of VRAM in your GPU.\n",
"\n",
"# Make sure the model path is correct for your system!\n",
@@ -487,12 +503,12 @@
"\n",
"Two of the most important GPU parameters are:\n",
"\n",
"- `n_gpu_layers` - determines how many layers of the model are offloaded to your Metal GPU, in the most case, set it to `1` is enough for Metal\n",
"- `n_gpu_layers` - determines how many layers of the model are offloaded to your Metal GPU.\n",
"- `n_batch` - how many tokens are processed in parallel; the default is 8, set it to a bigger number.\n",
"- `f16_kv` - for some reason, Metal only supports `True`, otherwise you will get an error such as `Asserting on type 0\n",
"GGML_ASSERT: .../ggml-metal.m:706: false && \"not implemented\"`\n",
"\n",
"Setting these parameters correctly will dramatically improve the evaluation speed (see [wrapper code](https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/llamacpp.py) for more details)."
]
},
{
@@ -501,7 +517,7 @@
"metadata": {},
"outputs": [],
"source": [
"n_gpu_layers = 1 # Metal set to 1 is enough.\n",
"n_gpu_layers = 1 # The number of layers to put on the GPU. The rest will be on the CPU. If you don't know how many layers there are, you can use -1 to move all to GPU.\n",
"n_batch = 512 # Should be between 1 and n_ctx, consider the amount of RAM of your Apple Silicon Chip.\n",
"# Make sure the model path is correct for your system!\n",
"llm = LlamaCpp(\n",
@@ -559,7 +575,7 @@
"metadata": {},
"outputs": [],
"source": [
"n_gpu_layers = 1 # Metal set to 1 is enough.\n",
"n_gpu_layers = 1 # The number of layers to put on the GPU. The rest will be on the CPU. If you don't know how many layers there are, you can use -1 to move all to GPU.\n",
"n_batch = 512 # Should be between 1 and n_ctx, consider the amount of RAM of your Apple Silicon Chip.\n",
"# Make sure the model path is correct for your system!\n",
"llm = LlamaCpp(\n",
@@ -680,7 +696,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.10.12 ('langchain_venv': venv)",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@@ -694,7 +710,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.11.6"
},
"vscode": {
"interpreter": {
@@ -69,11 +69,9 @@
"metadata": {},
"outputs": [
{
"name": "stderr",
"name": "stdout",
"output_type": "stream",
"text": [
"/home/noamgat/envs/langchain_experimental/lib/python3.10/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
" from .autonotebook import tqdm as notebook_tqdm\n",
"Downloading shards: 100%|██████████| 2/2 [00:00<00:00, 3.58it/s]\n",
"Loading checkpoint shards: 100%|██████████| 2/2 [05:32<00:00, 166.35s/it]\n",
"Downloading (…)okenizer_config.json: 100%|██████████| 1.62k/1.62k [00:00<00:00, 4.87MB/s]\n"
191
docs/docs/integrations/llms/oci_generative_ai.ipynb
Normal file
@@ -0,0 +1,191 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Oracle Cloud Infrastructure Generative AI"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Oracle Cloud Infrastructure (OCI) Generative AI is a fully managed service that provides a set of state-of-the-art, customizable large language models (LLMs) that cover a wide range of use cases, and which is available through a single API.\n",
"Using the OCI Generative AI service you can access ready-to-use pretrained models, or create and host your own fine-tuned custom models based on your own data on dedicated AI clusters. Detailed documentation of the service and API is available __[here](https://docs.oracle.com/en-us/iaas/Content/generative-ai/home.htm)__ and __[here](https://docs.oracle.com/en-us/iaas/api/#/en/generative-ai/20231130/)__.\n",
"\n",
"This notebook explains how to use OCI's Generative AI models with LangChain."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Prerequisite\n",
"We will need to install the `oci` SDK."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install -U oci"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### OCI Generative AI API endpoint \n",
"https://inference.generativeai.us-chicago-1.oci.oraclecloud.com"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Authentication\n",
"The authentication methods supported for this LangChain integration are:\n",
"\n",
"1. API Key\n",
"2. Session token\n",
"3. Instance principal\n",
"4. Resource principal\n",
"\n",
"These follow the standard SDK authentication methods detailed __[here](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdk_authentication_methods.htm)__.\n",
" "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Usage"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.llms import OCIGenAI\n",
"\n",
"# use the default authN method: API key\n",
"llm = OCIGenAI(\n",
"    model_id=\"MY_MODEL\",\n",
"    service_endpoint=\"https://inference.generativeai.us-chicago-1.oci.oraclecloud.com\",\n",
"    compartment_id=\"MY_OCID\",\n",
")\n",
"\n",
"response = llm.invoke(\"Tell me one fact about earth\", temperature=0.7)\n",
"print(response)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import LLMChain\n",
"from langchain_core.prompts import PromptTemplate\n",
"\n",
"# Use Session Token to authN\n",
"llm = OCIGenAI(\n",
"    model_id=\"MY_MODEL\",\n",
"    service_endpoint=\"https://inference.generativeai.us-chicago-1.oci.oraclecloud.com\",\n",
"    compartment_id=\"MY_OCID\",\n",
")\n",
"\n",
"prompt = PromptTemplate(input_variables=[\"query\"], template=\"{query}\")\n",
"\n",
"llm_chain = LLMChain(llm=llm, prompt=prompt)\n",
"\n",
"response = llm_chain.invoke(\"what is the capital of france?\")\n",
"print(response)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.schema.output_parser import StrOutputParser\n",
"from langchain.schema.runnable import RunnablePassthrough\n",
"from langchain_community.embeddings import OCIGenAIEmbeddings\n",
"from langchain_community.vectorstores import FAISS\n",
"\n",
"embeddings = OCIGenAIEmbeddings(\n",
"    model_id=\"MY_EMBEDDING_MODEL\",\n",
"    service_endpoint=\"https://inference.generativeai.us-chicago-1.oci.oraclecloud.com\",\n",
"    compartment_id=\"MY_OCID\",\n",
")\n",
"\n",
"vectorstore = FAISS.from_texts(\n",
"    [\n",
"        \"Larry Ellison co-founded Oracle Corporation in 1977 with Bob Miner and Ed Oates.\",\n",
"        \"Oracle Corporation is an American multinational computer technology company headquartered in Austin, Texas, United States.\",\n",
"    ],\n",
"    embedding=embeddings,\n",
")\n",
"\n",
"retriever = vectorstore.as_retriever()\n",
"\n",
"template = \"\"\"Answer the question based only on the following context:\n",
"{context}\n",
" \n",
"Question: {question}\n",
"\"\"\"\n",
"prompt = PromptTemplate.from_template(template)\n",
"\n",
"llm = OCIGenAI(\n",
"    model_id=\"MY_MODEL\",\n",
"    service_endpoint=\"https://inference.generativeai.us-chicago-1.oci.oraclecloud.com\",\n",
"    compartment_id=\"MY_OCID\",\n",
")\n",
"\n",
"chain = (\n",
"    {\"context\": retriever, \"question\": RunnablePassthrough()}\n",
"    | prompt\n",
"    | llm\n",
"    | StrOutputParser()\n",
")\n",
"\n",
"print(chain.invoke(\"when was oracle founded?\"))\n",
"print(chain.invoke(\"where is oracle headquartered?\"))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "oci_langchain",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.18"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
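As a companion to the authentication section of the notebook above, a hedged sketch of selecting a non-default auth method; the `auth_type` and `auth_profile` parameter names are assumptions about the `OCIGenAI` wrapper rather than something shown in the diff:

```python
from langchain_community.llms import OCIGenAI

# Sketch: auth_type is assumed to mirror the OCI SDK methods listed above
# ("API_KEY", "SECURITY_TOKEN", "INSTANCE_PRINCIPAL", "RESOURCE_PRINCIPAL").
llm = OCIGenAI(
    model_id="MY_MODEL",
    service_endpoint="https://inference.generativeai.us-chicago-1.oci.oraclecloud.com",
    compartment_id="MY_OCID",
    auth_type="SECURITY_TOKEN",  # assumption: API key auth is the default when omitted
    auth_profile="DEFAULT",      # assumption: profile name from ~/.oci/config
)
```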
@@ -26,19 +26,19 @@
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"OCTOAI_API_TOKEN\"] = \"OCTOAI_API_TOKEN\"\n",
"os.environ[\"ENDPOINT_URL\"] = \"https://mpt-7b-demo-f1kzsig6xes9.octoai.run/generate\""
"os.environ[\"ENDPOINT_URL\"] = \"https://text.octoai.run/v1/chat/completions\""
]
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
@@ -56,7 +56,7 @@
},
{
"cell_type": "code",
"execution_count": 15,
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
@@ -66,36 +66,40 @@
},
{
"cell_type": "code",
"execution_count": 30,
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"llm = OctoAIEndpoint(\n",
"    model_kwargs={\n",
"        \"max_new_tokens\": 200,\n",
"        \"temperature\": 0.75,\n",
"        \"top_p\": 0.95,\n",
"        \"repetition_penalty\": 1,\n",
"        \"seed\": None,\n",
"        \"stop\": [],\n",
"        \"model\": \"llama-2-13b-chat-fp16\",\n",
"        \"max_tokens\": 128,\n",
"        \"presence_penalty\": 0,\n",
"        \"temperature\": 0.1,\n",
"        \"top_p\": 0.9,\n",
"        \"messages\": [\n",
"            {\n",
"                \"role\": \"system\",\n",
"                \"content\": \"You are a helpful assistant. Keep your responses limited to one short paragraph if possible.\",\n",
"            },\n",
"        ],\n",
"    },\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 31,
"execution_count": 10,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'\\nLeonardo da Vinci was an Italian polymath and painter regarded by many as one of the greatest painters of all time. He is best known for his masterpieces including Mona Lisa, The Last Supper, and The Virgin of the Rocks. He was a draftsman, sculptor, architect, and one of the most important figures in the history of science. Da Vinci flew gliders, experimented with water turbines and windmills, and invented the catapult and a joystick-type human-powered aircraft control. He may have pioneered helicopters. As a scholar, he was interested in anatomy, geology, botany, engineering, mathematics, and astronomy.\\nOther painters and patrons claimed to be more talented, but Leonardo da Vinci was an incredibly productive artist, sculptor, engineer, anatomist, and scientist.'"
]
},
"execution_count": 31,
"metadata": {},
"output_type": "execute_result"
"name": "stdout",
"output_type": "stream",
"text": [
" Sure thing! Here's my response:\n",
"\n",
"Leonardo da Vinci was a true Renaissance man - an Italian polymath who excelled in various fields, including painting, sculpture, engineering, mathematics, anatomy, and geology. He is widely considered one of the greatest painters of all time, and his inventive and innovative works continue to inspire and influence artists and thinkers to this day. Some of his most famous works include the Mona Lisa, The Last Supper, and Vitruvian Man. \n"
]
}
],
"source": [
@@ -103,7 +107,7 @@
"\n",
"llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
"\n",
"llm_chain.run(question)"
"print(llm_chain.run(question))"
]
}
],
@@ -123,7 +127,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.11.7"
},
"vscode": {
"interpreter": {
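Putting the new chat-completions configuration above to work, a small sketch; the prompt template and question are illustrative, not part of the diff, and `llm` is assumed to be the `OctoAIEndpoint` built above with `OCTOAI_API_TOKEN` set:

```python
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

# Sketch: reuse the chat-completions `llm` configured above.
template = "Below is an instruction. Write a helpful response.\nInstruction:\n{question}\nResponse: "
prompt = PromptTemplate.from_template(template)
llm_chain = LLMChain(prompt=prompt, llm=llm)

print(llm_chain.run("Who was Leonardo da Vinci?"))
```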
@@ -14,7 +14,7 @@
"\n",
"This example showcases how to connect to [PromptLayer](https://www.promptlayer.com) to start recording your OpenAI requests.\n",
"\n",
"Another example is [here](https://python.langchain.com/en/latest/ecosystem/promptlayer.html)."
"Another example is [here](https://python.langchain.com/docs/integrations/providers/promptlayer)."
]
},
{
@@ -225,7 +225,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
"version": "3.11.6"
},
"vscode": {
"interpreter": {
@@ -82,7 +82,7 @@
"    temperature=0.8,\n",
")\n",
"\n",
"print(llm(\"What is the capital of France ?\"))"
"print(llm.invoke(\"What is the capital of France ?\"))"
]
},
{
@@ -117,8 +117,7 @@
"1. The first Pokemon game was released in 1996.\n",
"2. The president was Bill Clinton.\n",
"3. Clinton was president from 1993 to 2001.\n",
"4. The answer is Clinton.\n",
"\n"
"4. The answer is Clinton.\n"
]
},
{
@@ -142,7 +141,7 @@
"\n",
"question = \"Who was the US president in the year the first Pokemon game was released?\"\n",
"\n",
"print(llm_chain.run(question))"
"print(llm_chain.invoke(question))"
]
},
{
@@ -172,7 +171,36 @@
"    trust_remote_code=True,  # mandatory for hf models\n",
")\n",
"\n",
"llm(\"What is the future of AI?\")"
"llm.invoke(\"What is the future of AI?\")"
]
},
{
"cell_type": "markdown",
"id": "d6ca8fd911d25faa",
"metadata": {
"collapsed": false
},
"source": [
"## Quantization\n",
"\n",
"vLLM supports `awq` quantization. To enable it, pass `quantization` to `vllm_kwargs`."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2cada3174c46a0ea",
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"llm_q = VLLM(\n",
"    model=\"TheBloke/Llama-2-7b-Chat-AWQ\",\n",
"    trust_remote_code=True,\n",
"    max_new_tokens=512,\n",
"    vllm_kwargs={\"quantization\": \"awq\"},\n",
")"
]
},
{
@@ -216,7 +244,7 @@
"    model_name=\"tiiuae/falcon-7b\",\n",
"    model_kwargs={\"stop\": [\".\"]},\n",
")\n",
"print(llm(\"Rome is\"))"
"print(llm.invoke(\"Rome is\"))"
]
}
],
@@ -207,15 +207,11 @@ from langchain_community.vectorstores import MatchingEngine

> [Google BigQuery](https://cloud.google.com/bigquery),
> BigQuery is a serverless and cost-effective enterprise data warehouse in Google Cloud.
>
> Google BigQuery Vector Search
> [Google BigQuery Vector Search](https://cloud.google.com/bigquery/docs/vector-search-intro)
> BigQuery vector search lets you use GoogleSQL to do semantic search, using vector indexes for fast but approximate results, or using brute force for exact results.

> It can calculate Euclidean or Cosine distance. With LangChain, we default to using Euclidean distance.

> This is a private preview (experimental) feature. Please submit this
> [enrollment form](https://docs.google.com/forms/d/18yndSb4dTf2H0orqA9N7NAchQEDQekwWiD5jYfEkGWk/viewform?edit_requested=true)
> if you want to enroll BigQuery Vector Search Experimental.

We need to install several Python packages.

```bash
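The install block above is cut off in this hunk. For orientation, a hedged sketch of what using the vector store might look like; the `BigQueryVectorSearch` constructor arguments shown here are assumptions, and the project, dataset, and table names are placeholders:

```python
from langchain_community.vectorstores import BigQueryVectorSearch
from langchain_openai import OpenAIEmbeddings

# Sketch: assumes a GCP project with the BigQuery API enabled and credentials configured.
store = BigQueryVectorSearch(
    project_id="my-project",       # placeholder
    dataset_name="my_dataset",     # placeholder
    table_name="doc_and_vectors",  # placeholder
    location="US",
    embedding=OpenAIEmbeddings(),
)
store.add_texts(["BigQuery vector search lets you use GoogleSQL for semantic search."])
docs = store.similarity_search("How does BigQuery do semantic search?", k=2)
```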
@@ -58,31 +58,24 @@ See a [usage example](/docs/integrations/llms/huggingface_textgen_inference).
from langchain_community.llms import HuggingFaceTextGenInference
```

## Chat models

### Models from Hugging Face

## Document Loaders
We can use the `Hugging Face` LLM classes or directly use the `ChatHuggingFace` class.

### Hugging Face dataset

>[Hugging Face Hub](https://huggingface.co/docs/hub/index) is home to over 75,000
> [datasets](https://huggingface.co/docs/hub/index#datasets) in more than 100 languages
> that can be used for a broad range of tasks across NLP, Computer Vision, and Audio.
> They are used for a diverse range of tasks such as translation, automatic speech
> recognition, and image classification.

We need to install the `datasets` Python package.
We need to install several Python packages.

```bash
pip install datasets
pip install huggingface_hub
pip install transformers
```

See a [usage example](/docs/integrations/document_loaders/hugging_face_dataset).
See a [usage example](/docs/integrations/chat/huggingface).

```python
from langchain_community.document_loaders.hugging_face_dataset import HuggingFaceDatasetLoader
from langchain_community.chat_models.huggingface import ChatHuggingFace
```
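A minimal sketch of the wrapping pattern described above; the hub `repo_id` is an example choice, not something the page prescribes:

```python
from langchain_community.chat_models.huggingface import ChatHuggingFace
from langchain_community.llms import HuggingFaceHub

# Sketch: wrap a hub-hosted LLM in the chat-model interface
# (requires HUGGINGFACEHUB_API_TOKEN in the environment).
llm = HuggingFaceHub(repo_id="HuggingFaceH4/zephyr-7b-beta")  # example model
chat = ChatHuggingFace(llm=llm)
print(chat.invoke("What is the capital of France?").content)
```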
## Embedding Models

### Hugging Face Hub
@@ -126,6 +119,48 @@ See a [usage example](/docs/integrations/text_embedding/bge_huggingface).
from langchain_community.embeddings import HuggingFaceBgeEmbeddings
```

### Hugging Face Text Embeddings Inference (TEI)

>[Hugging Face Text Embeddings Inference (TEI)](https://huggingface.co/docs/text-embeddings-inference/index) is a toolkit for deploying and serving open-source
> text embeddings and sequence classification models. `TEI` enables high-performance extraction for the most popular models,
> including `FlagEmbedding`, `Ember`, `GTE` and `E5`.

We need to install the `huggingface-hub` Python package.

```bash
pip install huggingface-hub
```

See a [usage example](/docs/integrations/text_embedding/text_embeddings_inference).

```python
from langchain_community.embeddings import HuggingFaceHubEmbeddings
```
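A sketch of pointing the embeddings class at a TEI server; the localhost URL assumes you have started TEI separately (e.g. via its Docker image), and `model` accepting a server URL is the pattern used in the linked usage example:

```python
from langchain_community.embeddings import HuggingFaceHubEmbeddings

# Sketch: `model` points at a self-hosted TEI endpoint (assumed to be running locally).
embeddings = HuggingFaceHubEmbeddings(model="http://localhost:8080")
vector = embeddings.embed_query("What is deep learning?")
```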
## Document Loaders

### Hugging Face dataset

>[Hugging Face Hub](https://huggingface.co/docs/hub/index) is home to over 75,000
> [datasets](https://huggingface.co/docs/hub/index#datasets) in more than 100 languages
> that can be used for a broad range of tasks across NLP, Computer Vision, and Audio.
> They are used for a diverse range of tasks such as translation, automatic speech
> recognition, and image classification.

We need to install the `datasets` Python package.

```bash
pip install datasets
```

See a [usage example](/docs/integrations/document_loaders/hugging_face_dataset).

```python
from langchain_community.document_loaders.hugging_face_dataset import HuggingFaceDatasetLoader
```

## Tools

docs/docs/integrations/platforms/index.mdx (new file, 22 lines)
@@ -0,0 +1,22 @@
# Providers

LangChain integrates with many providers.

## Partner Packages

- [OpenAI](/docs/integrations/platforms/openai)
- [Anthropic](/docs/integrations/platforms/anthropic)
- [Google](/docs/integrations/platforms/google)
- [MistralAI](/docs/integrations/providers/mistralai)
- [NVIDIA AI](/docs/integrations/providers/nvidia)
- [Together AI](/docs/integrations/providers/together)
- [Robocorp](/docs/integrations/providers/robocorp)
- [Exa Search](/docs/integrations/providers/exa_search)
- [Nomic](/docs/integrations/providers/nomic)

## Featured Community Providers

- [AWS](/docs/integrations/platforms/aws)
- [Hugging Face](/docs/integrations/platforms/huggingface)
- [Microsoft](/docs/integrations/platforms/microsoft)
docs/docs/integrations/providers/baichuan.mdx (new file, 16 lines)
@@ -0,0 +1,16 @@
# Baichuan

>[Baichuan Inc.](https://www.baichuan-ai.com/) is a Chinese startup in the era of AGI, dedicated to addressing fundamental human needs: Efficiency, Health, and Happiness.

## Visit Us
Visit us at https://www.baichuan-ai.com/.
Register and get an API key if you are trying out our APIs.

## Baichuan LLM Endpoint
An example is available at [example](/docs/integrations/llms/baichuan).

## Baichuan Chat Model
An example is available at [example](/docs/integrations/chat/baichuan).

## Baichuan Text Embedding Model
An example is available at [example](/docs/integrations/text_embedding/baichuan).
@@ -1,45 +1,52 @@
# DeepInfra

This page covers how to use the DeepInfra ecosystem within LangChain.
>[DeepInfra](https://deepinfra.com/docs) allows us to run the
> [latest machine learning models](https://deepinfra.com/models) with ease.
> DeepInfra takes care of all the heavy lifting related to running, scaling and monitoring
> the models. You can focus on your application and integrate the models with simple REST API calls.

>DeepInfra provides [examples](https://deepinfra.com/docs/advanced/langchain) of integration with LangChain.

This page covers how to use the `DeepInfra` ecosystem within `LangChain`.
It is broken into two parts: installation and setup, and then references to specific DeepInfra wrappers.

## Installation and Setup

- Get your DeepInfra API key from this link [here](https://deepinfra.com/).
- Get a DeepInfra API key and set it as an environment variable (`DEEPINFRA_API_TOKEN`).

## Available Models

DeepInfra provides a range of Open Source LLMs ready for deployment.
You can list supported models for

You can see supported models for
[text-generation](https://deepinfra.com/models?type=text-generation) and
[embeddings](https://deepinfra.com/models?type=embeddings).
google/flan\* models can be viewed [here](https://deepinfra.com/models?type=text2text-generation).

You can view a [list of request and response parameters](https://deepinfra.com/meta-llama/Llama-2-70b-chat-hf/api).

Chat models [follow the OpenAI API](https://deepinfra.com/meta-llama/Llama-2-70b-chat-hf/api?example=openai-http).

## Wrappers

### LLM
## LLM

There exists a DeepInfra LLM wrapper, which you can access with
See a [usage example](/docs/integrations/llms/deepinfra).

```python
from langchain_community.llms import DeepInfra
```

### Embeddings
## Embeddings

There is also a DeepInfra Embeddings wrapper, which you can access with
See a [usage example](/docs/integrations/text_embedding/deepinfra).

```python
from langchain_community.embeddings import DeepInfraEmbeddings
```

### Chat Models
## Chat Models

There is a chat-oriented wrapper as well, accessible with
See a [usage example](/docs/integrations/chat/deepinfra).

```python
from langchain_community.chat_models import ChatDeepInfra
docs/docs/integrations/providers/exa_search.ipynb (new file, 77 lines)
@@ -0,0 +1,77 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Exa Search\n",
"\n",
"Exa's search integration exists in its own [partner package](https://pypi.org/project/langchain-exa/). You can install it with:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-exa"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In order to use the package, you will also need to set the `EXA_API_KEY` environment variable to your Exa API key.\n",
"\n",
"## Retriever\n",
"\n",
"You can use the [`ExaSearchRetriever`](/docs/integrations/tools/exa_search#using-exasearchretriever) in a standard retrieval pipeline. You can import it as follows:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"id": "y8ku6X96sebl"
},
"outputs": [],
"source": [
"from langchain_exa import ExaSearchRetriever"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Tools\n",
"\n",
"You can use Exa as an agent tool as described in the [Exa tool calling docs](/docs/integrations/tools/exa_search#using-the-exa-sdk-as-langchain-agent-tools).\n"
]
}
],
"metadata": {
"colab": {
"provenance": []
},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.11"
}
},
"nbformat": 4,
"nbformat_minor": 1
}
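To round out the notebook above, a hedged sketch of the retriever in use; `k` is an assumed parameter controlling the number of results, and `EXA_API_KEY` must be set:

```python
from langchain_exa import ExaSearchRetriever

# Sketch: fetch a handful of Exa search results as LangChain documents.
retriever = ExaSearchRetriever(k=3)  # assumption: k controls the result count
docs = retriever.get_relevant_documents("the latest climate research")
print(docs[0].page_content[:200])
```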
docs/docs/integrations/providers/kdbai.mdx (new file, 24 lines)
@@ -0,0 +1,24 @@
# KDB.AI

>[KDB.AI](https://kdb.ai) is a powerful knowledge-based vector database and search engine that allows you to build scalable, reliable AI applications, using real-time data, by providing advanced search, recommendation and personalization.

## Installation and Setup

Install the Python SDK:

```bash
pip install kdbai-client
```

## Vector store

There exists a wrapper around KDB.AI indexes, allowing you to use it as a vectorstore,
whether for semantic search or example selection.

```python
from langchain_community.vectorstores import KDBAI
```

For a more detailed walkthrough of the KDB.AI vectorstore, see [this notebook](/docs/integrations/vectorstores/kdbai).
@@ -60,21 +60,27 @@ konko.Model.list()

## Calling a model

Find a model on the [Konko Introduction page](https://docs.konko.ai/docs#available-models)

For example, for this [LLama 2 model](https://docs.konko.ai/docs/meta-llama-2-13b-chat). The model id would be: `"meta-llama/Llama-2-13b-chat-hf"`
Find a model on the [Konko Introduction page](https://docs.konko.ai/docs/list-of-models)

Another way to find the list of models running on the Konko instance is through this [endpoint](https://docs.konko.ai/reference/listmodels).

From here, we can initialize our model:
## Examples of Endpoint Usage

```python
chat_instance = ChatKonko(max_tokens=10, model = 'meta-llama/Llama-2-13b-chat-hf')
```

And run it:
- **ChatCompletion with Mistral-7B:**
```python
chat_instance = ChatKonko(max_tokens=10, model = 'mistralai/mistral-7b-instruct-v0.1')
msg = HumanMessage(content="Hi")
chat_response = chat_instance([msg])

```

```python
msg = HumanMessage(content="Hi")
chat_response = chat_instance([msg])
```
- **Completion with mistralai/Mistral-7B-v0.1:**
```python
from langchain.llms import Konko
llm = Konko(max_tokens=800, model='mistralai/Mistral-7B-v0.1')
prompt = "Generate a Product Description for Apple iPhone 15"
response = llm(prompt)
```

For further assistance, contact [support@konko.ai](mailto:support@konko.ai) or join our [Discord](https://discord.gg/TXV2s3z7RZ).
docs/docs/integrations/providers/mistralai.ipynb (new file, 78 lines)
@@ -0,0 +1,78 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# MistralAI\n",
"\n",
"Mistral AI is a platform that offers hosting for their powerful open source models.\n",
"\n",
"You can access them via their [API](https://docs.mistral.ai/api/).\n",
"\n",
"A valid [API key](https://console.mistral.ai/users/api-keys/) is needed to communicate with the API.\n",
"\n",
"You will also need the `langchain-mistralai` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-core langchain-mistralai"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"id": "y8ku6X96sebl"
},
"outputs": [],
"source": [
"from langchain_mistralai import ChatMistralAI, MistralAIEmbeddings"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"See the docs for their\n",
"\n",
"- [Chat Model](/docs/integrations/chat/mistralai)\n",
"- [Embeddings Model](/docs/integrations/text_embedding/mistralai)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": []
}
],
"metadata": {
"colab": {
"provenance": []
},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.11"
}
},
"nbformat": 4,
"nbformat_minor": 1
}
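As a quick-start companion to the notebook above, a short sketch; the model names are assumptions drawn from Mistral's hosted lineup, and `MISTRAL_API_KEY` must be set:

```python
from langchain_mistralai import ChatMistralAI, MistralAIEmbeddings

# Sketch: model names are examples, not prescribed by the notebook.
chat = ChatMistralAI(model="mistral-small")
print(chat.invoke("Say hello in one short sentence.").content)

embeddings = MistralAIEmbeddings(model="mistral-embed")
vector = embeddings.embed_query("LangChain integrations")
```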
docs/docs/integrations/providers/nomic.ipynb (new file, 69 lines)
@@ -0,0 +1,69 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Nomic\n",
"\n",
"Nomic currently offers two products:\n",
"\n",
"- Atlas: their Visual Data Engine\n",
"- GPT4All: their Open Source Edge Language Model Ecosystem\n",
"\n",
"The Nomic integration exists in its own [partner package](https://pypi.org/project/langchain-nomic/). You can install it with:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-nomic"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Currently, you can import their hosted [embedding model](/docs/integrations/text_embedding/nomic) as follows:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"id": "y8ku6X96sebl"
},
"outputs": [],
"source": [
"from langchain_nomic import NomicEmbeddings"
]
}
],
"metadata": {
"colab": {
"provenance": []
},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.11"
}
},
"nbformat": 4,
"nbformat_minor": 1
}
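A minimal sketch of the hosted embedding model in use; the model name is Nomic's v1 text-embedding model, and `NOMIC_API_KEY` is assumed to be set:

```python
from langchain_nomic import NomicEmbeddings

# Sketch: embed a query with Nomic's hosted model.
embeddings = NomicEmbeddings(model="nomic-embed-text-v1")
vector = embeddings.embed_query("Atlas maps your data")
```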
@@ -21,7 +21,7 @@ To use, you should set up the environment variables `ANYSCALE_API_BASE` and

## LLM

```python
from langchain.llms import Ollama
from langchain_community.llms import Ollama
```

See the notebook example [here](/docs/integrations/llms/ollama).
@@ -31,7 +31,7 @@ See the notebook example [here](/docs/integrations/llms/ollama).

### Chat Ollama

```python
from langchain.chat_models import ChatOllama
from langchain_community.chat_models import ChatOllama
```

See the notebook example [here](/docs/integrations/chat/ollama).
@@ -47,7 +47,7 @@ See the notebook example [here](/docs/integrations/chat/ollama_functions).

## Embedding models

```python
from langchain.embeddings import OllamaEmbeddings
from langchain_community.embeddings import OllamaEmbeddings
```

See the notebook example [here](/docs/integrations/text_embedding/ollama).
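For context on the updated import paths above, a minimal sketch; it assumes a local Ollama server is running and the model has been pulled (`ollama pull llama2`):

```python
from langchain_community.llms import Ollama

# Sketch: call a locally served model through the community integration.
llm = Ollama(model="llama2")
print(llm.invoke("Why is the sky blue?"))
```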
docs/docs/integrations/providers/ontotext_graphdb.mdx (new file, 21 lines)
@@ -0,0 +1,21 @@
# Ontotext GraphDB

>[Ontotext GraphDB](https://graphdb.ontotext.com/) is a graph database and knowledge discovery tool compliant with RDF and SPARQL.

## Dependencies

Install the [rdflib](https://github.com/RDFLib/rdflib) package with:
```bash
pip install rdflib==7.0.0
```

## Graph QA Chain

Connect your GraphDB database with a chat model to get insights on your data.

See the notebook example [here](/docs/use_cases/graph/graph_ontotext_graphdb_qa).

```python
from langchain_community.graphs import OntotextGraphDBGraph
from langchain.chains import OntotextGraphDBQAChain
```
docs/docs/integrations/providers/together.ipynb (new file, 78 lines)
@@ -0,0 +1,78 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Together AI\n",
"\n",
"> The Together API makes it easy to fine-tune or run leading open-source models with a couple lines of code. We have integrated the world’s leading open-source models, including Llama-2, RedPajama, Falcon, Alpaca, Stable Diffusion XL, and more. Read more: https://together.ai\n",
"\n",
"To use, you'll need an API key which you can find here:\n",
"https://api.together.xyz/settings/api-keys. This can be passed in as init param\n",
"``together_api_key`` or set as environment variable ``TOGETHER_API_KEY``.\n",
"\n",
"Together API reference: https://docs.together.ai/reference/inference\n",
"\n",
"You will also need to install the `langchain-together` integration package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-together"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"id": "y8ku6X96sebl"
},
"outputs": [],
"source": [
"from langchain_together import (\n",
"    Together,  # LLM\n",
"    TogetherEmbeddings,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"See the docs for their\n",
"\n",
"- [LLM](/docs/integrations/llms/together)\n",
"- [Embeddings Model](/docs/integrations/text_embedding/together)"
]
}
],
"metadata": {
"colab": {
"provenance": []
},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.11"
}
},
"nbformat": 4,
"nbformat_minor": 1
}
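(The import cell originally read `from __module_name__ import …`, a leftover template placeholder; `langchain_together` is the package the page installs, so that is used above.) A short usage sketch; the model name is one example from Together's catalog, and `TOGETHER_API_KEY` is assumed to be set:

```python
from langchain_together import Together

# Sketch: complete a prompt with a Together-hosted open-source model.
llm = Together(model="togethercomputer/RedPajama-INCITE-7B-Base")
print(llm.invoke("The capital of France is"))
```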
@@ -5,9 +5,7 @@
"id": "134a0785",
"metadata": {},
"source": [
"# Chat Over Documents with Vectara\n",
"\n",
"This notebook is based on the [chat_vector_db](https://github.com/hwchase17/langchain/blob/master/docs/modules/chains/index_examples/chat_vector_db.html) notebook, but using Vectara as the vector database."
"# Chat Over Documents with Vectara"
]
},
{
@@ -186,9 +184,7 @@
"cell_type": "code",
"execution_count": 8,
"id": "e8ce4fe9",
"metadata": {
"scrolled": false
},
"metadata": {},
"outputs": [],
"source": [
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
@@ -547,7 +543,6 @@
"execution_count": 26,
"id": "e2badd21",
"metadata": {
"scrolled": false,
"tags": []
},
"outputs": [],
@@ -755,7 +750,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
"version": "3.11.6"
}
},
"nbformat": 4,
@@ -24,7 +24,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"id": "b37bd138-4f3c-4d2c-bc4b-be705ce27a09",
"metadata": {
"tags": []
@@ -40,7 +40,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 13,
"id": "c47b0b26-6d51-4beb-aedb-ad09740a9a2b",
"metadata": {},
"outputs": [],
@@ -55,19 +55,12 @@
},
{
"cell_type": "code",
"execution_count": null,
"id": "2268c17f-5cc3-457b-928b-0d470154c3a8",
"metadata": {},
"outputs": [],
"source": [
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"OpenAI API Key:\")"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "28e8dc12",
"metadata": {},
"execution_count": 14,
"id": "6fa3d916",
"metadata": {
"jp-MarkdownHeadingCollapsed": true,
"tags": []
},
"outputs": [],
"source": [
"# Helper function for printing docs\n",
@@ -95,8 +88,8 @@
},
{
"cell_type": "code",
"execution_count": 22,
"id": "9fbcc58f",
"execution_count": 15,
"id": "b7648612",
"metadata": {},
"outputs": [
{
@@ -111,28 +104,20 @@
"----------------------------------------------------------------------------------------------------\n",
"Document 2:\n",
"\n",
"We cannot let this happen. \n",
"\n",
"Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n",
"\n",
"Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 3:\n",
"\n",
"As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \n",
"\n",
"While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 3:\n",
"\n",
"A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n",
"\n",
"And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 4:\n",
"\n",
"He met the Ukrainian people. \n",
"\n",
"From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. \n",
"\n",
"Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland. \n",
"\n",
"In this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 5:\n",
"\n",
"I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves. \n",
"\n",
"I’ve worked on these issues a long time. \n",
@@ -141,64 +126,86 @@
"\n",
"So let’s not abandon our streets. Or choose between safety and equal justice.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 5:\n",
"\n",
"He met the Ukrainian people. \n",
"\n",
"From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. \n",
"\n",
"Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland. \n",
"\n",
"In this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 6:\n",
"\n",
"So let’s not abandon our streets. Or choose between safety and equal justice. \n",
"\n",
"Let’s come together to protect our communities, restore trust, and hold law enforcement accountable. \n",
"\n",
"That’s why the Justice Department required body cameras, banned chokeholds, and restricted no-knock warrants for its officers.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 7:\n",
"\n",
"But that trickle-down theory led to weaker economic growth, lower wages, bigger deficits, and the widest gap between those at the top and everyone else in nearly a century. \n",
"\n",
"Vice President Harris and I ran for office with a new economic vision for America. \n",
"\n",
"Invest in America. Educate Americans. Grow the workforce. Build the economy from the bottom up \n",
"and the middle out, not from the top down. \n",
"\n",
"Because we know that when the middle class grows, the poor have a ladder up and the wealthy do very well. \n",
"\n",
"America used to have the best roads, bridges, and airports on Earth. \n",
"\n",
"Now our infrastructure is ranked 13th in the world.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 7:\n",
"\n",
"And tonight, I’m announcing that the Justice Department will name a chief prosecutor for pandemic fraud. \n",
"\n",
"By the end of this year, the deficit will be down to less than half what it was before I took office. \n",
"\n",
"The only president ever to cut the deficit by more than one trillion dollars in a single year. \n",
"\n",
"Lowering your costs also means demanding more competition. \n",
"\n",
"I’m a capitalist, but capitalism without competition isn’t capitalism. \n",
"\n",
"It’s exploitation—and it drives up prices.\n",
"and the middle out, not from the top down.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 8:\n",
"\n",
"For the past 40 years we were told that if we gave tax breaks to those at the very top, the benefits would trickle down to everyone else. \n",
"A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n",
"\n",
"But that trickle-down theory led to weaker economic growth, lower wages, bigger deficits, and the widest gap between those at the top and everyone else in nearly a century. \n",
"\n",
"Vice President Harris and I ran for office with a new economic vision for America.\n",
"And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 9:\n",
"\n",
"All told, we created 369,000 new manufacturing jobs in America just last year. \n",
"The widow of Sergeant First Class Heath Robinson. \n",
"\n",
"Powered by people I’ve met like JoJo Burgess, from generations of union steelworkers from Pittsburgh, who’s here with us tonight. \n",
"He was born a soldier. Army National Guard. Combat medic in Kosovo and Iraq. \n",
"\n",
"Stationed near Baghdad, just yards from burn pits the size of football fields. \n",
"\n",
"Heath’s widow Danielle is here with us tonight. They loved going to Ohio State football games. He loved building Legos with their daughter. \n",
"\n",
"But cancer from prolonged exposure to burn pits ravaged Heath’s lungs and body. \n",
"\n",
"Danielle says Heath was a fighter to the very end.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 10:\n",
"\n",
"As I’ve told Xi Jinping, it is never a good bet to bet against the American people. \n",
"\n",
"We’ll create good jobs for millions of Americans, modernizing roads, airports, ports, and waterways all across America. \n",
"\n",
"And we’ll do it all to withstand the devastating effects of the climate crisis and promote environmental justice.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 11:\n",
"\n",
"As Ohio Senator Sherrod Brown says, “It’s time to bury the label “Rust Belt.” \n",
"\n",
"It’s time. \n",
"\n",
"But with all the bright spots in our economy, record job growth and higher wages, too many families are struggling to keep up with the bills.\n",
"But with all the bright spots in our economy, record job growth and higher wages, too many families are struggling to keep up with the bills. \n",
"\n",
"Inflation is robbing them of the gains they might otherwise feel. \n",
"\n",
"I get it. That’s why my top priority is getting prices under control.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 10:\n",
"Document 12:\n",
"\n",
"I’m also calling on Congress: pass a law to make sure veterans devastated by toxic exposures in Iraq and Afghanistan finally get the benefits and comprehensive health care they deserve. \n",
"This was a bipartisan effort, and I want to thank the members of both parties who worked to make it happen. \n",
"\n",
"And fourth, let’s end cancer as we know it. \n",
"We’re done talking about infrastructure weeks. \n",
"\n",
"This is personal to me and Jill, to Kamala, and to so many of you. \n",
"We’re going to have an infrastructure decade. \n",
"\n",
"Cancer is the #2 cause of death in America–second only to heart disease.\n",
"It is going to transform America and put us on a path to win the economic competition of the 21st Century that we face with the rest of the world—particularly with China. \n",
"\n",
"As I’ve told Xi Jinping, it is never a good bet to bet against the American people.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 11:\n",
"Document 13:\n",
"\n",
"He will never extinguish their love of freedom. He will never weaken the resolve of the free world. \n",
"\n",
@@ -210,100 +217,8 @@
"\n",
"I understand.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 12:\n",
"\n",
"Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n",
"\n",
"Last year COVID-19 kept us apart. This year we are finally together again. \n",
"\n",
"Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n",
"\n",
"With a duty to one another to the American people to the Constitution. \n",
"\n",
"And with an unwavering resolve that freedom will always triumph over tyranny.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 13:\n",
"\n",
"I know. \n",
"\n",
"One of those soldiers was my son Major Beau Biden. \n",
"\n",
"We don’t know for sure if a burn pit was the cause of his brain cancer, or the diseases of so many of our troops. \n",
"\n",
"But I’m committed to finding out everything we can. \n",
"\n",
"Committed to military families like Danielle Robinson from Ohio. \n",
"\n",
"The widow of Sergeant First Class Heath Robinson. \n",
"\n",
"He was born a soldier. Army National Guard. Combat medic in Kosovo and Iraq.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 14:\n",
"\n",
"And soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. \n",
"\n",
"So tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. \n",
"\n",
"First, beat the opioid epidemic. \n",
"\n",
"There is so much we can do. Increase funding for prevention, treatment, harm reduction, and recovery.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 15:\n",
"\n",
"Third, support our veterans. \n",
"\n",
"Veterans are the best of us. \n",
"\n",
"I’ve always believed that we have a sacred obligation to equip all those we send to war and care for them and their families when they come home. \n",
"\n",
"My administration is providing assistance with job training and housing, and now helping lower-income veterans get VA care debt-free. \n",
"\n",
"Our troops in Iraq and Afghanistan faced many dangers.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 16:\n",
"\n",
"When we invest in our workers, when we build the economy from the bottom up and the middle out together, we can do something we haven’t done in a long time: build a better America. \n",
"\n",
"For more than two years, COVID-19 has impacted every decision in our lives and the life of the nation. \n",
"\n",
"And I know you’re tired, frustrated, and exhausted. \n",
"\n",
"But I also know this.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 17:\n",
"\n",
"Now is the hour. \n",
"\n",
"Our moment of responsibility. \n",
"\n",
"Our test of resolve and conscience, of history itself. \n",
"\n",
"It is in this moment that our character is formed. Our purpose is found. Our future is forged. \n",
"\n",
"Well I know this nation. \n",
"\n",
"We will meet the test. \n",
"\n",
"To protect freedom and liberty, to expand fairness and opportunity. \n",
"\n",
"We will save democracy. \n",
"\n",
"As hard as these times have been, I am more optimistic about America today than I have been my whole life.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 18:\n",
"\n",
"He didn’t know how to stop fighting, and neither did she. \n",
"\n",
"Through her pain she found purpose to demand we do better. \n",
"\n",
"Tonight, Danielle—we are. \n",
"\n",
"The VA is pioneering new ways of linking toxic exposures to diseases, already helping more veterans get benefits. \n",
"\n",
"And tonight, I’m announcing we’re expanding eligibility to veterans suffering from nine respiratory cancers.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 19:\n",
"\n",
"I understand. \n",
"\n",
"I remember when my Dad had to leave our home in Scranton, Pennsylvania to find work. I grew up in a family where if the price of food went up, you felt it. \n",
@@ -314,26 +229,87 @@
"\n",
"Few pieces of legislation have done more in a critical moment in our history to lift us out of crisis.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 15:\n",
"\n",
"My administration is providing assistance with job training and housing, and now helping lower-income veterans get VA care debt-free. \n",
"\n",
"Our troops in Iraq and Afghanistan faced many dangers. \n",
"\n",
"One was stationed at bases and breathing in toxic smoke from “burn pits” that incinerated wastes of war—medical and hazard material, jet fuel, and more. \n",
"\n",
"When they came home, many of the world’s fittest and best trained warriors were never the same. \n",
"\n",
"Headaches. Numbness. Dizziness.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 16:\n",
"\n",
"Danielle says Heath was a fighter to the very end. \n",
"\n",
"He didn’t know how to stop fighting, and neither did she. \n",
"\n",
"Through her pain she found purpose to demand we do better. \n",
"\n",
"Tonight, Danielle—we are. \n",
"\n",
"The VA is pioneering new ways of linking toxic exposures to diseases, already helping more veterans get benefits. \n",
"\n",
"And tonight, I’m announcing we’re expanding eligibility to veterans suffering from nine respiratory cancers.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 17:\n",
"\n",
"Cancer is the #2 cause of death in America–second only to heart disease. \n",
"\n",
"Last month, I announced our plan to supercharge \n",
"the Cancer Moonshot that President Obama asked me to lead six years ago. \n",
"\n",
"Our goal is to cut the cancer death rate by at least 50% over the next 25 years, turn more cancers from death sentences into treatable diseases. \n",
"\n",
"More support for patients and families. \n",
"\n",
"To get there, I call on Congress to fund ARPA-H, the Advanced Research Projects Agency for Health.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 18:\n",
"\n",
"My plan to fight inflation will lower your costs and lower the deficit. \n",
"\n",
"17 Nobel laureates in economics say my plan will ease long-term inflationary pressures. Top business leaders and most Americans support my plan. And here’s the plan: \n",
"\n",
"First – cut the cost of prescription drugs. Just look at insulin. One in ten Americans has diabetes. In Virginia, I met a 13-year-old boy named Joshua Davis.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 19:\n",
"\n",
"Let’s pass the Paycheck Fairness Act and paid leave. \n",
"\n",
"Raise the minimum wage to $15 an hour and extend the Child Tax Credit, so no one has to raise a family in poverty. \n",
"\n",
"Let’s increase Pell Grants and increase our historic support of HBCUs, and invest in what Jill—our First Lady who teaches full-time—calls America’s best-kept secret: community colleges. \n",
"\n",
"And let’s pass the PRO Act when a majority of workers want to form a union—they shouldn’t be stopped.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 20:\n",
"\n",
"So let’s not abandon our streets. Or choose between safety and equal justice. \n",
"Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n",
"\n",
"Let’s come together to protect our communities, restore trust, and hold law enforcement accountable. \n",
"Last year COVID-19 kept us apart. This year we are finally together again. \n",
"\n",
"That’s why the Justice Department required body cameras, banned chokeholds, and restricted no-knock warrants for its officers.\n"
"Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n",
"\n",
"With a duty to one another to the American people to the Constitution. \n",
"\n",
"And with an unwavering resolve that freedom will always triumph over tyranny.\n"
]
}
],
"source": [
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"from langchain_community.document_loaders import TextLoader\n",
"from langchain_community.embeddings import CohereEmbeddings\n",
"from langchain_community.vectorstores import FAISS\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"documents = TextLoader(\"../../modules/state_of_the_union.txt\").load()\n",
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=100)\n",
"texts = text_splitter.split_documents(documents)\n",
"retriever = FAISS.from_documents(texts, OpenAIEmbeddings()).as_retriever(\n",
"retriever = FAISS.from_documents(texts, CohereEmbeddings()).as_retriever(\n",
"    search_kwargs={\"k\": 20}\n",
")\n",
"\n",
@@ -353,8 +329,8 @@
},
{
"cell_type": "code",
"execution_count": 31,
"id": "9a658023",
"execution_count": 16,
"id": "b83dfedb",
"metadata": {},
"outputs": [
{
@@ -388,9 +364,9 @@
"source": [
"from langchain.retrievers import ContextualCompressionRetriever\n",
"from langchain.retrievers.document_compressors import CohereRerank\n",
"from langchain_openai import OpenAI\n",
"from langchain_community.llms import Cohere\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"llm = Cohere(temperature=0)\n",
"compressor = CohereRerank()\n",
"compression_retriever = ContextualCompressionRetriever(\n",
"    base_compressor=compressor, base_retriever=retriever\n",
@@ -412,7 +388,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 32,
|
||||
"execution_count": 17,
|
||||
"id": "367dafe0",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
@@ -422,19 +398,19 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 33,
|
||||
"execution_count": 18,
|
||||
"id": "ae697ca4",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"chain = RetrievalQA.from_chain_type(\n",
|
||||
" llm=OpenAI(temperature=0), retriever=compression_retriever\n",
|
||||
" llm=Cohere(temperature=0), retriever=compression_retriever\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 34,
|
||||
"execution_count": 19,
|
||||
"id": "46ee62fc",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
@@ -442,10 +418,10 @@
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'query': 'What did the president say about Ketanji Brown Jackson',\n",
|
||||
" 'result': \" The president said that Ketanji Brown Jackson is one of the nation's top legal minds and that she is a consensus builder who has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.\"}"
|
||||
" 'result': \" The president speaks highly of Ketanji Brown Jackson, stating that she is one of the nation's top legal minds, and will continue the legacy of excellence of Justice Breyer. The president also mentions that he worked with her family and that she comes from a family of public school educators and police officers. Since her nomination, she has received support from various groups, including the Fraternal Order of Police and judges from both major political parties. \\n\\nWould you like me to extract another sentence from the provided text? \"}"
|
||||
]
|
||||
},
|
||||
"execution_count": 34,
|
||||
"execution_count": 19,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
@@ -453,14 +429,6 @@
|
||||
"source": [
|
||||
"chain({\"query\": query})"
|
||||
]
|
||||
},
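For reference, the rerank pipeline in this notebook condenses to a few lines. The sketch below is a minimal recap of what the diff above assembles, assuming `COHERE_API_KEY` is set and `retriever` is the k=20 FAISS retriever built earlier; it is an editorial illustration, not part of the diff.

```python
# Minimal sketch of the Cohere rerank pipeline shown above (assumes
# COHERE_API_KEY is set and `retriever` is the k=20 FAISS retriever from earlier).
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import CohereRerank

compressor = CohereRerank()
compression_retriever = ContextualCompressionRetriever(
    base_compressor=compressor, base_retriever=retriever
)

# The reranker re-orders the 20 candidates and keeps only the most relevant ones.
docs = compression_retriever.get_relevant_documents(
    "What did the president say about Ketanji Brown Jackson"
)
```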
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "700a8133",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
|
||||
102
docs/docs/integrations/text_embedding/baichuan.ipynb
Normal file
@@ -0,0 +1,102 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Baichuan Text Embeddings\n",
|
||||
"\n",
|
||||
"As of today (Jan 25th, 2024) BaichuanTextEmbeddings ranks #1 in C-MTEB (Chinese Multi-Task Embedding Benchmark) leaderboard.\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Leaderboard (Under Overall -> Chinese section): https://huggingface.co/spaces/mteb/leaderboard"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Official Website: https://platform.baichuan-ai.com/docs/text-Embedding\n",
|
||||
"\n",
|
||||
"An API key is required to use this embedding model. You can get one by registering at https://platform.baichuan-ai.com/docs/text-Embedding."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"BaichuanTextEmbeddings support 512 token window and preduces vectors with 1024 dimensions. "
|
||||
]
|
||||
},
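The dimension claim is easy to check directly. A minimal sketch, assuming a valid `BAICHUAN_API_KEY` is already set in the environment:

```python
# Hedged sketch: verify the embedding dimension (assumes BAICHUAN_API_KEY is set).
from langchain_community.embeddings import BaichuanTextEmbeddings

embeddings = BaichuanTextEmbeddings()
vector = embeddings.embed_query("今天天气不错")
print(len(vector))  # expected: 1024
```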
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Please NOTE that BaichuanTextEmbeddings only supports Chinese text embedding. Multi-language support is coming soon."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_community.embeddings import BaichuanTextEmbeddings\n",
|
||||
"\n",
|
||||
"embeddings = BaichuanTextEmbeddings(baichuan_api_key=\"sk-*\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Alternatively, you can set API key this way:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import os\n",
|
||||
"\n",
|
||||
"os.environ[\"BAICHUAN_API_KEY\"] = \"YOUR_API_KEY\""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"text_1 = \"今天天气不错\"\n",
|
||||
"text_2 = \"今天阳光很好\"\n",
|
||||
"\n",
|
||||
"query_result = embeddings.embed_query(text_1)\n",
|
||||
"query_result"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"doc_result = embeddings.embed_documents([text_1, text_2])\n",
|
||||
"doc_result"
|
||||
]
|
||||
}
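Since the notebook embeds two closely related sentences but never compares them, a natural follow-up is their cosine similarity. This is a hedged sketch using only the standard library; `doc_result` comes from the cell above.

```python
# Hedged sketch: cosine similarity between the two example sentences,
# using only the standard library. `doc_result` comes from the cell above.
import math


def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)


print(cosine(doc_result[0], doc_result[1]))  # both sentences describe nice weather
```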
|
||||
],
|
||||
"metadata": {
|
||||
"language_info": {
|
||||
"name": "python"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
}
|
||||
132
docs/docs/integrations/text_embedding/nomic.ipynb
Normal file
@@ -0,0 +1,132 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "raw",
|
||||
"id": "afaf8039",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"---\n",
|
||||
"sidebar_label: Nomic\n",
|
||||
"---"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "e49f1e0d",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# NomicEmbeddings\n",
|
||||
"\n",
|
||||
"This notebook covers how to get started with Nomic embedding models.\n",
|
||||
"\n",
|
||||
"## Installation"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "4c3bef91",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# install package\n",
|
||||
"!pip install -U langchain-nomic"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "2b4f3e15",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Environment Setup\n",
|
||||
"\n",
|
||||
"Make sure to set the following environment variables:\n",
|
||||
"\n",
|
||||
"- `NOMIC_API_KEY`\n",
|
||||
"\n",
|
||||
"## Usage"
|
||||
]
|
||||
},
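One hedged way to satisfy this without hard-coding the key is to prompt for it at runtime, mirroring the `getpass` pattern used elsewhere in these docs:

```python
# Hedged sketch: set the key interactively rather than hard-coding it.
import getpass
import os

if "NOMIC_API_KEY" not in os.environ:
    os.environ["NOMIC_API_KEY"] = getpass.getpass("Enter your Nomic API key: ")
```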
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "62e0dbc3",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_nomic.embeddings import NomicEmbeddings\n",
|
||||
"\n",
|
||||
"embeddings = NomicEmbeddings(model=\"nomic-embed-text-v1\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "12fcfb4b",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"embeddings.embed_query(\"My query to look up\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "1f2e6104",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"embeddings.embed_documents(\n",
|
||||
" [\"This is a content of the document\", \"This is another document\"]\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "46739f68",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# async embed query\n",
|
||||
"await embeddings.aembed_query(\"My query to look up\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "e48632ea",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# async embed documents\n",
|
||||
"await embeddings.aembed_documents(\n",
|
||||
" [\"This is a content of the document\", \"This is another document\"]\n",
|
||||
")"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.10.5"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
||||
@@ -10,9 +10,42 @@
|
||||
"Let's load the OpenAI Embedding class."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "40ff98ff-58e9-4716-8788-227a5c3f473d",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Setup\n",
|
||||
"\n",
|
||||
"First we install langchain-openai and set the required env vars"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"execution_count": null,
|
||||
"id": "c66c4613-6c67-40ca-b3b1-c026750d1742",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%pip install -qU langchain-openai"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "62e3710e-55a0-44fb-ba51-2f1d520dfc38",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import getpass\n",
|
||||
"import os\n",
|
||||
"\n",
|
||||
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"id": "0be1af71",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
@@ -22,17 +55,17 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 29,
|
||||
"execution_count": 5,
|
||||
"id": "2c66e5da",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"embeddings = OpenAIEmbeddings()"
|
||||
"embeddings = OpenAIEmbeddings(model=\"text-embedding-3-large\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 30,
|
||||
"execution_count": 6,
|
||||
"id": "01370375",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
@@ -40,33 +73,50 @@
|
||||
"text = \"This is a test document.\""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "f012c222-3fa9-470a-935c-758b2048d9af",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Usage\n",
|
||||
"### Embed query"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 31,
|
||||
"execution_count": 7,
|
||||
"id": "bfb6142c",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stderr",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Warning: model not found. Using cl100k_base encoding.\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"query_result = embeddings.embed_query(text)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 32,
|
||||
"execution_count": 8,
|
||||
"id": "91bc875d-829b-4c3d-8e6f-fc2dda30a3bd",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"[-0.003186025367556387,\n",
|
||||
" 0.011071979803637493,\n",
|
||||
" -0.004020420763285827,\n",
|
||||
" -0.011658221276953042,\n",
|
||||
" -0.0010534035786864363]"
|
||||
"[-0.014380056377383358,\n",
|
||||
" -0.027191711627651764,\n",
|
||||
" -0.020042716111860304,\n",
|
||||
" 0.057301379620345545,\n",
|
||||
" -0.022267658631828974]"
|
||||
]
|
||||
},
|
||||
"execution_count": 32,
|
||||
"execution_count": 8,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
@@ -75,33 +125,49 @@
|
||||
"query_result[:5]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "6b733391-1e23-438b-a6bc-0d77eed9426e",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Embed documents"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 33,
|
||||
"execution_count": 9,
|
||||
"id": "0356c3b7",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stderr",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Warning: model not found. Using cl100k_base encoding.\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"doc_result = embeddings.embed_documents([text])"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 34,
|
||||
"execution_count": 10,
|
||||
"id": "a4b0d49e-0c73-44b6-aed5-5b426564e085",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"[-0.003186025367556387,\n",
|
||||
" 0.011071979803637493,\n",
|
||||
" -0.004020420763285827,\n",
|
||||
" -0.011658221276953042,\n",
|
||||
" -0.0010534035786864363]"
|
||||
"[-0.014380056377383358,\n",
|
||||
" -0.027191711627651764,\n",
|
||||
" -0.020042716111860304,\n",
|
||||
" 0.057301379620345545,\n",
|
||||
" -0.022267658631828974]"
|
||||
]
|
||||
},
|
||||
"execution_count": 34,
|
||||
"execution_count": 10,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
@@ -112,131 +178,87 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "bb61bbeb",
|
||||
"id": "e7dc464a-6fa2-4cff-ab2e-49a0566d819b",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Let's load the OpenAI Embedding class with first generation models (e.g. text-search-ada-doc-001/text-search-ada-query-001). Note: These are not recommended models - see [here](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"id": "c0b072cc",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_openai import OpenAIEmbeddings"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 23,
|
||||
"id": "a56b70f5",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"embeddings = OpenAIEmbeddings(model=\"text-embedding-ada-002\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 24,
|
||||
"id": "14aefb64",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"text = \"This is a test document.\""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 25,
|
||||
"id": "3c39ed33",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"query_result = embeddings.embed_query(text)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 26,
|
||||
"id": "2ee7ce9f-d506-4810-8897-e44334412714",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"[0.004452846988523035,\n",
|
||||
" 0.034550655976098514,\n",
|
||||
" -0.015029939040690051,\n",
|
||||
" 0.03827273883655212,\n",
|
||||
" 0.005785414075152477]"
|
||||
]
|
||||
},
|
||||
"execution_count": 26,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"query_result[:5]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 27,
|
||||
"id": "e3221db6",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"doc_result = embeddings.embed_documents([text])"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 28,
|
||||
"id": "a0865409-3a6d-468f-939f-abde17c7cac3",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"[0.004452846988523035,\n",
|
||||
" 0.034550655976098514,\n",
|
||||
" -0.015029939040690051,\n",
|
||||
" 0.03827273883655212,\n",
|
||||
" 0.005785414075152477]"
|
||||
]
|
||||
},
|
||||
"execution_count": 28,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"doc_result[0][:5]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "aaad49f8",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import os\n",
|
||||
"## Specify dimensions\n",
|
||||
"\n",
|
||||
"# if you are behind an explicit proxy, you can use the OPENAI_PROXY environment variable to pass through\n",
|
||||
"os.environ[\"OPENAI_PROXY\"] = \"http://proxy.yourcompany.com:8080\""
|
||||
"With the `text-embedding-3` class of models, you can specify the size of the embeddings you want returned. For example by default `text-embedding-3-large` returned embeddings of dimension 3072:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 11,
|
||||
"id": "f7be1e7b-54c6-4893-b8ad-b872e6705735",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"3072"
|
||||
]
|
||||
},
|
||||
"execution_count": 11,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"len(doc_result[0])"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "33287142-0835-4958-962f-385ae4447431",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"But by passing in `dimensions=1024` we can reduce the size of our embeddings to 1024:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 15,
|
||||
"id": "854ee772-2de9-4a83-84e0-908033d98e4e",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"embeddings_1024 = OpenAIEmbeddings(model=\"text-embedding-3-large\", dimensions=1024)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 16,
|
||||
"id": "3b464396-8d94-478b-8329-849b56e1ae23",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stderr",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Warning: model not found. Using cl100k_base encoding.\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"1024"
|
||||
]
|
||||
},
|
||||
"execution_count": 16,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"len(embeddings_1024.embed_documents([text])[0])"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"display_name": "poetry-venv",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
"name": "poetry-venv"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
@@ -248,7 +270,7 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.10.12"
|
||||
"version": "3.9.1"
|
||||
},
|
||||
"vscode": {
|
||||
"interpreter": {
|
||||
|
||||
@@ -52,7 +52,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"embedder = SpacyEmbeddings()"
|
||||
"embedder = SpacyEmbeddings(model_name=\"en_core_web_sm\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
@@ -7,7 +7,9 @@
|
||||
"source": [
|
||||
"# Text Embeddings Inference\n",
|
||||
"\n",
|
||||
"Text Embeddings Inference (TEI) is a toolkit for deploying and serving open source text embeddings and sequence classification models. TEI enables high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and E5.\n",
|
||||
">[Hugging Face Text Embeddings Inference (TEI)](https://huggingface.co/docs/text-generation-inference/index) is a toolkit for deploying and serving open-source\n",
|
||||
"> text embeddings and sequence classification models. `TEI` enables high-performance extraction for the most popular models,\n",
|
||||
">including `FlagEmbedding`, `Ember`, `GTE` and `E5`.\n",
|
||||
"\n",
|
||||
"To use it within langchain, first install `huggingface-hub`."
|
||||
]
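As a rough sketch of where the rest of the notebook heads: once a TEI server is serving an embedding model locally, you point the Hugging Face Hub embeddings class at its URL. The endpoint below is an assumption for illustration, not part of the diff.

```python
# Hedged sketch: assumes a TEI server (e.g. the official
# ghcr.io/huggingface/text-embeddings-inference Docker image) is already
# serving an embedding model on localhost:8080.
from langchain_community.embeddings import HuggingFaceHubEmbeddings

embeddings = HuggingFaceHubEmbeddings(model="http://localhost:8080")
print(embeddings.embed_query("What is deep learning?")[:3])
```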
|
||||
@@ -21,7 +23,7 @@
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%pip install --upgrade --quiet huggingface-hub -q"
|
||||
"%pip install --upgrade huggingface-hub"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -146,9 +148,9 @@
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "conda_python3",
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "conda_python3"
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
@@ -160,7 +162,7 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.10.13"
|
||||
"version": "3.10.12"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
|
||||
136
docs/docs/integrations/toolkits/connery.ipynb
Normal file
@@ -0,0 +1,136 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Connery Toolkit\n",
|
||||
"\n",
|
||||
"Using this toolkit, you can integrate Connery Actions into your LangChain agent.\n",
|
||||
"\n",
|
||||
"If you want to use only one particular Connery Action in your agent,\n",
|
||||
"check out the [Connery Action Tool](/docs/integrations/tools/connery) documentation.\n",
|
||||
"\n",
|
||||
"## What is Connery?\n",
|
||||
"\n",
|
||||
"Connery is an open-source plugin infrastructure for AI.\n",
|
||||
"\n",
|
||||
"With Connery, you can easily create a custom plugin with a set of actions and seamlessly integrate them into your LangChain agent.\n",
|
||||
"Connery will take care of critical aspects such as runtime, authorization, secret management, access management, audit logs, and other vital features.\n",
|
||||
"\n",
|
||||
"Furthermore, Connery, supported by our community, provides a diverse collection of ready-to-use open-source plugins for added convenience.\n",
|
||||
"\n",
|
||||
"Learn more about Connery:\n",
|
||||
"\n",
|
||||
"- GitHub: https://github.com/connery-io/connery\n",
|
||||
"- Documentation: https://docs.connery.io\n",
|
||||
"\n",
|
||||
"## Prerequisites\n",
|
||||
"\n",
|
||||
"To use Connery Actions in your LangChain agent, you need to do some preparation:\n",
|
||||
"\n",
|
||||
"1. Set up the Connery runner using the [Quickstart](https://docs.connery.io/docs/runner/quick-start/) guide.\n",
|
||||
"2. Install all the plugins with the actions you want to use in your agent.\n",
|
||||
"3. Set environment variables `CONNERY_RUNNER_URL` and `CONNERY_RUNNER_API_KEY` so the toolkit can communicate with the Connery Runner.\n",
|
||||
"\n",
|
||||
"## Example of using Connery Toolkit\n",
|
||||
"\n",
|
||||
"In the example below, we create an agent that uses two Connery Actions to summarize a public webpage and send the summary by email:\n",
|
||||
"\n",
|
||||
"1. **Summarize public webpage** action from the [Summarization](https://github.com/connery-io/summarization-plugin) plugin.\n",
|
||||
"2. **Send email** action from the [Gmail](https://github.com/connery-io/gmail) plugin.\n",
|
||||
"\n",
|
||||
"You can see a LangSmith trace of this example [here](https://smith.langchain.com/public/4af5385a-afe9-46f6-8a53-57fe2d63c5bc/r)."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"\n",
|
||||
"\n",
|
||||
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
|
||||
"\u001b[32;1m\u001b[1;3m\n",
|
||||
"Invoking: `CA72DFB0AB4DF6C830B43E14B0782F70` with `{'publicWebpageUrl': 'http://www.paulgraham.com/vb.html'}`\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"\u001b[0m\u001b[33;1m\u001b[1;3m{'summary': 'The author reflects on the concept of life being short and how having children made them realize the true brevity of life. They discuss how time can be converted into discrete quantities and how limited certain experiences are. The author emphasizes the importance of prioritizing and eliminating unnecessary things in life, as well as actively pursuing meaningful experiences. They also discuss the negative impact of getting caught up in online arguments and the need to be aware of how time is being spent. The author suggests pruning unnecessary activities, not waiting to do things that matter, and savoring the time one has.'}\u001b[0m\u001b[32;1m\u001b[1;3m\n",
|
||||
"Invoking: `CABC80BB79C15067CA983495324AE709` with `{'recipient': 'test@example.com', 'subject': 'Summary of the webpage', 'body': 'Here is a short summary of the webpage http://www.paulgraham.com/vb.html:\\n\\nThe author reflects on the concept of life being short and how having children made them realize the true brevity of life. They discuss how time can be converted into discrete quantities and how limited certain experiences are. The author emphasizes the importance of prioritizing and eliminating unnecessary things in life, as well as actively pursuing meaningful experiences. They also discuss the negative impact of getting caught up in online arguments and the need to be aware of how time is being spent. The author suggests pruning unnecessary activities, not waiting to do things that matter, and savoring the time one has.\\n\\nYou can find the full webpage [here](http://www.paulgraham.com/vb.html).'}`\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"\u001b[0m\u001b[33;1m\u001b[1;3m{'messageId': '<2f04b00e-122d-c7de-c91e-e78e0c3276d6@gmail.com>'}\u001b[0m\u001b[32;1m\u001b[1;3mI have sent the email with the summary of the webpage to test@example.com. Please check your inbox.\u001b[0m\n",
|
||||
"\n",
|
||||
"\u001b[1m> Finished chain.\u001b[0m\n",
|
||||
"I have sent the email with the summary of the webpage to test@example.com. Please check your inbox.\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"import os\n",
|
||||
"\n",
|
||||
"from langchain.agents import AgentType, initialize_agent\n",
|
||||
"from langchain.chat_models import ChatOpenAI\n",
|
||||
"from langchain_community.agent_toolkits.connery import ConneryToolkit\n",
|
||||
"from langchain_community.tools.connery import ConneryService\n",
|
||||
"\n",
|
||||
"# Specify your Connery Runner credentials.\n",
|
||||
"os.environ[\"CONNERY_RUNNER_URL\"] = \"\"\n",
|
||||
"os.environ[\"CONNERY_RUNNER_API_KEY\"] = \"\"\n",
|
||||
"\n",
|
||||
"# Specify OpenAI API key.\n",
|
||||
"os.environ[\"OPENAI_API_KEY\"] = \"\"\n",
|
||||
"\n",
|
||||
"# Specify your email address to receive the email with the summary from example below.\n",
|
||||
"recepient_email = \"test@example.com\"\n",
|
||||
"\n",
|
||||
"# Create a Connery Toolkit with all the available actions from the Connery Runner.\n",
|
||||
"connery_service = ConneryService()\n",
|
||||
"connery_toolkit = ConneryToolkit.create_instance(connery_service)\n",
|
||||
"\n",
|
||||
"# Use OpenAI Functions agent to execute the prompt using actions from the Connery Toolkit.\n",
|
||||
"llm = ChatOpenAI(temperature=0)\n",
|
||||
"agent = initialize_agent(\n",
|
||||
" connery_toolkit.get_tools(), llm, AgentType.OPENAI_FUNCTIONS, verbose=True\n",
|
||||
")\n",
|
||||
"result = agent.run(\n",
|
||||
" f\"\"\"Make a short summary of the webpage http://www.paulgraham.com/vb.html in three sentences\n",
|
||||
"and send it to {recepient_email}. Include the link to the webpage into the body of the email.\"\"\"\n",
|
||||
")\n",
|
||||
"print(result)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"NOTE: Connery Action is a structured tool, so you can only use it in the agents supporting structured tools."
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.10.13"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
}
|
||||
@@ -13,30 +13,65 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"execution_count": 8,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"\n",
|
||||
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.2.1\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m23.3.2\u001b[0m\n",
|
||||
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n",
|
||||
"Note: you may need to restart the kernel to use updated packages.\n",
|
||||
"\n",
|
||||
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.2.1\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m23.3.2\u001b[0m\n",
|
||||
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n",
|
||||
"Note: you may need to restart the kernel to use updated packages.\n",
|
||||
"\n",
|
||||
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.2.1\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m23.3.2\u001b[0m\n",
|
||||
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n",
|
||||
"Note: you may need to restart the kernel to use updated packages.\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"%pip install --upgrade --quiet slack_sdk > /dev/null\n",
|
||||
"%pip install --upgrade --quiet beautifulsoup4 > /dev/null # This is optional but is useful for parsing HTML messages"
|
||||
"%pip install --upgrade --quiet beautifulsoup4 > /dev/null # This is optional but is useful for parsing HTML messages\n",
|
||||
"%pip install --upgrade --quiet python-dotenv > /dev/null # This is for loading environmental variables from a .env file"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Assign Environmental Variables\n",
|
||||
"## Set Environmental Variables\n",
|
||||
"\n",
|
||||
"The toolkit will read the SLACK_USER_TOKEN environmental variable to authenticate the user so you need to set them here. You will also need to set your OPENAI_API_KEY to use the agent later."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"execution_count": 9,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"True"
|
||||
]
|
||||
},
|
||||
"execution_count": 9,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# Set environmental variables here"
|
||||
"# Set environmental variables here\n",
|
||||
"# In this example, you set environmental variables by loading a .env file.\n",
|
||||
"import dotenv\n",
|
||||
"\n",
|
||||
"dotenv.load_dotenv()"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -50,9 +85,23 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"execution_count": 10,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"[SlackGetChannel(client=<slack_sdk.web.client.WebClient object at 0x11eba6a00>),\n",
|
||||
" SlackGetMessage(client=<slack_sdk.web.client.WebClient object at 0x11eba69d0>),\n",
|
||||
" SlackScheduleMessage(client=<slack_sdk.web.client.WebClient object at 0x11eba65b0>),\n",
|
||||
" SlackSendMessage(client=<slack_sdk.web.client.WebClient object at 0x11eba6790>)]"
|
||||
]
|
||||
},
|
||||
"execution_count": 10,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from langchain_community.agent_toolkits import SlackToolkit\n",
|
||||
"\n",
|
||||
@@ -65,31 +114,122 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Use within an Agent"
|
||||
"## Use within an ReAct Agent"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"execution_count": 11,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.agents import AgentType, initialize_agent\n",
|
||||
"from langchain_openai import OpenAI"
|
||||
"from langchain import hub\n",
|
||||
"from langchain.agents import AgentExecutor, create_react_agent\n",
|
||||
"from langchain_openai import ChatOpenAI"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"execution_count": 12,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"llm = OpenAI(temperature=0)\n",
|
||||
"agent = initialize_agent(\n",
|
||||
"llm = ChatOpenAI(temperature=0, model=\"gpt-4\")\n",
|
||||
"prompt = hub.pull(\"hwchase17/react\")\n",
|
||||
"agent = create_react_agent(\n",
|
||||
" tools=toolkit.get_tools(),\n",
|
||||
" llm=llm,\n",
|
||||
" verbose=False,\n",
|
||||
" agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,\n",
|
||||
" prompt=prompt,\n",
|
||||
")\n",
|
||||
"agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"agent_executor.invoke(\n",
|
||||
" {\"input\": \"Send a greeting to my coworkers in the #general channel.\"}\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 13,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"\n",
|
||||
"\n",
|
||||
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
|
||||
"\u001b[32;1m\u001b[1;3mI need to get the list of channels in the workspace.\n",
|
||||
"Action: get_channelid_name_dict\n",
|
||||
"Action Input: {}\u001b[0m\u001b[36;1m\u001b[1;3m[{\"id\": \"C052SCUP4UD\", \"name\": \"general\", \"created\": 1681297313, \"num_members\": 1}, {\"id\": \"C052VBBU4M8\", \"name\": \"test-bots\", \"created\": 1681297343, \"num_members\": 2}, {\"id\": \"C053805TNUR\", \"name\": \"random\", \"created\": 1681297313, \"num_members\": 2}]\u001b[0m\u001b[32;1m\u001b[1;3mI now have the list of channels and their names.\n",
|
||||
"Final Answer: There are 3 channels in the workspace. Their names are \"general\", \"test-bots\", and \"random\".\u001b[0m\n",
|
||||
"\n",
|
||||
"\u001b[1m> Finished chain.\u001b[0m\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'input': 'How many channels are in the workspace? Please list out their names.',\n",
|
||||
" 'output': 'There are 3 channels in the workspace. Their names are \"general\", \"test-bots\", and \"random\".'}"
|
||||
]
|
||||
},
|
||||
"execution_count": 13,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"agent_executor.invoke(\n",
|
||||
" {\"input\": \"How many channels are in the workspace? Please list out their names.\"}\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 14,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"\n",
|
||||
"\n",
|
||||
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
|
||||
"\u001b[32;1m\u001b[1;3mFirst, I need to identify the channel ID for the #introductions channel.\n",
|
||||
"Action: get_channelid_name_dict\n",
|
||||
"Action Input: None\u001b[0m\u001b[36;1m\u001b[1;3m[{\"id\": \"C052SCUP4UD\", \"name\": \"general\", \"created\": 1681297313, \"num_members\": 1}, {\"id\": \"C052VBBU4M8\", \"name\": \"test-bots\", \"created\": 1681297343, \"num_members\": 2}, {\"id\": \"C053805TNUR\", \"name\": \"random\", \"created\": 1681297313, \"num_members\": 2}]\u001b[0m\u001b[32;1m\u001b[1;3mThe #introductions channel is not listed in the observed channels. I need to inform the user that the #introductions channel does not exist or is not accessible.\n",
|
||||
"Final Answer: The #introductions channel does not exist or is not accessible.\u001b[0m\n",
|
||||
"\n",
|
||||
"\u001b[1m> Finished chain.\u001b[0m\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'input': 'Tell me the number of messages sent in the #introductions channel from the past month.',\n",
|
||||
" 'output': 'The #introductions channel does not exist or is not accessible.'}"
|
||||
]
|
||||
},
|
||||
"execution_count": 14,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"agent_executor.invoke(\n",
|
||||
" {\n",
|
||||
" \"input\": \"Tell me the number of messages sent in the #introductions channel from the past month.\"\n",
|
||||
" }\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
@@ -98,29 +238,7 @@
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"agent.run(\"Send a greeting to my coworkers in the #general channel.\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"agent.run(\"How many channels are in the workspace? Please list out their names.\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"agent.run(\n",
|
||||
" \"Tell me the number of messages sent in the #introductions channel from the past month.\"\n",
|
||||
")"
|
||||
]
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
@@ -139,7 +257,7 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.11.6"
|
||||
"version": "3.9.6"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
|
||||
File diff suppressed because it is too large
165
docs/docs/integrations/tools/connery.ipynb
Normal file
@@ -0,0 +1,165 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Connery Action Tool\n",
|
||||
"\n",
|
||||
"Using this tool, you can integrate individual Connery Action into your LangChain agent.\n",
|
||||
"\n",
|
||||
"If you want to use more than one Connery Action in your agent,\n",
|
||||
"check out the [Connery Toolkit](/docs/integrations/toolkits/connery) documentation.\n",
|
||||
"\n",
|
||||
"## What is Connery?\n",
|
||||
"\n",
|
||||
"Connery is an open-source plugin infrastructure for AI.\n",
|
||||
"\n",
|
||||
"With Connery, you can easily create a custom plugin with a set of actions and seamlessly integrate them into your LangChain agent.\n",
|
||||
"Connery will take care of critical aspects such as runtime, authorization, secret management, access management, audit logs, and other vital features.\n",
|
||||
"\n",
|
||||
"Furthermore, Connery, supported by our community, provides a diverse collection of ready-to-use open-source plugins for added convenience.\n",
|
||||
"\n",
|
||||
"Learn more about Connery:\n",
|
||||
"\n",
|
||||
"- GitHub: https://github.com/connery-io/connery\n",
|
||||
"- Documentation: https://docs.connery.io\n",
|
||||
"\n",
|
||||
"## Prerequisites\n",
|
||||
"\n",
|
||||
"To use Connery Actions in your LangChain agent, you need to do some preparation:\n",
|
||||
"\n",
|
||||
"1. Set up the Connery runner using the [Quickstart](https://docs.connery.io/docs/runner/quick-start/) guide.\n",
|
||||
"2. Install all the plugins with the actions you want to use in your agent.\n",
|
||||
"3. Set environment variables `CONNERY_RUNNER_URL` and `CONNERY_RUNNER_API_KEY` so the toolkit can communicate with the Connery Runner.\n",
|
||||
"\n",
|
||||
"## Example of using Connery Action Tool\n",
|
||||
"\n",
|
||||
"In the example below, we fetch action by its ID from the Connery Runner and then call it with the specified parameters.\n",
|
||||
"\n",
|
||||
"Here, we use the ID of the **Send email** action from the [Gmail](https://github.com/connery-io/gmail) plugin."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 11,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import os\n",
|
||||
"\n",
|
||||
"from langchain.agents import AgentType, initialize_agent\n",
|
||||
"from langchain.chat_models import ChatOpenAI\n",
|
||||
"from langchain_community.tools.connery import ConneryService\n",
|
||||
"\n",
|
||||
"# Specify your Connery Runner credentials.\n",
|
||||
"os.environ[\"CONNERY_RUNNER_URL\"] = \"\"\n",
|
||||
"os.environ[\"CONNERY_RUNNER_API_KEY\"] = \"\"\n",
|
||||
"\n",
|
||||
"# Specify OpenAI API key.\n",
|
||||
"os.environ[\"OPENAI_API_KEY\"] = \"\"\n",
|
||||
"\n",
|
||||
"# Specify your email address to receive the emails from examples below.\n",
|
||||
"recepient_email = \"test@example.com\"\n",
|
||||
"\n",
|
||||
"# Get the SendEmail action from the Connery Runner by ID.\n",
|
||||
"connery_service = ConneryService()\n",
|
||||
"send_email_action = connery_service.get_action(\"CABC80BB79C15067CA983495324AE709\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Run the action manually."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"manual_run_result = send_email_action.run(\n",
|
||||
" {\n",
|
||||
" \"recipient\": recepient_email,\n",
|
||||
" \"subject\": \"Test email\",\n",
|
||||
" \"body\": \"This is a test email sent from Connery.\",\n",
|
||||
" }\n",
|
||||
")\n",
|
||||
"print(manual_run_result)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Run the action using the OpenAI Functions agent.\n",
|
||||
"\n",
|
||||
"You can see a LangSmith trace of this example [here](https://smith.langchain.com/public/a37d216f-c121-46da-a428-0e09dc19b1dc/r)."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 15,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"\n",
|
||||
"\n",
|
||||
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
|
||||
"\u001b[32;1m\u001b[1;3m\n",
|
||||
"Invoking: `CABC80BB79C15067CA983495324AE709` with `{'recipient': 'test@example.com', 'subject': 'Late for Meeting', 'body': 'Dear Team,\\n\\nI wanted to inform you that I will be late for the meeting today. I apologize for any inconvenience caused. Please proceed with the meeting without me and I will join as soon as I can.\\n\\nBest regards,\\n[Your Name]'}`\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"\u001b[0m\u001b[36;1m\u001b[1;3m{'messageId': '<d34a694d-50e0-3988-25da-e86b4c51d7a7@gmail.com>'}\u001b[0m\u001b[32;1m\u001b[1;3mI have sent an email to test@example.com informing them that you will be late for the meeting.\u001b[0m\n",
|
||||
"\n",
|
||||
"\u001b[1m> Finished chain.\u001b[0m\n",
|
||||
"I have sent an email to test@example.com informing them that you will be late for the meeting.\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"llm = ChatOpenAI(temperature=0)\n",
|
||||
"agent = initialize_agent(\n",
|
||||
" [send_email_action], llm, AgentType.OPENAI_FUNCTIONS, verbose=True\n",
|
||||
")\n",
|
||||
"agent_run_result = agent.run(\n",
|
||||
" f\"Send an email to the {recepient_email} and say that I will be late for the meeting.\"\n",
|
||||
")\n",
|
||||
"print(agent_run_result)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"NOTE: Connery Action is a structured tool, so you can only use it in the agents supporting structured tools."
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.10.13"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
}
|
||||
@@ -6,7 +6,7 @@
|
||||
"id": "4x4kQ0VcodAC"
|
||||
},
|
||||
"source": [
|
||||
"# Metaphor Search"
|
||||
"# Exa Search"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -15,13 +15,13 @@
|
||||
"id": "V1x8wEUhodAH"
|
||||
},
|
||||
"source": [
|
||||
"Metaphor is a search engine fully designed for use by LLMs. Search for documents on the internet using **natural language queries**, then retrieve **cleaned HTML content** from desired documents.\n",
|
||||
"Exa (formerly Metaphor Search) is a search engine fully designed for use by LLMs. Search for documents on the internet using **natural language queries**, then retrieve **cleaned HTML content** from desired documents.\n",
|
||||
"\n",
|
||||
"Unlike keyword-based search (Google), Metaphor's neural search capabilities allow it to semantically understand queries and return relevant documents. For example, we could search `\"fascinating article about cats\"` and compare the search results from [Google](https://www.google.com/search?q=fascinating+article+about+cats) and [Metaphor](https://metaphor.systems/search?q=fascinating%20article%20about%20cats&autopromptString=Here%20is%20a%20fascinating%20article%20about%20cats%3A). Google gives us SEO-optimized listicles based on the keyword \"fascinating\". Metaphor just works.\n",
|
||||
"Unlike keyword-based search (Google), Exa's neural search capabilities allow it to semantically understand queries and return relevant documents. For example, we could search `\"fascinating article about cats\"` and compare the search results from [Google](https://www.google.com/search?q=fascinating+article+about+cats) and [Exa](https://search.exa.ai/search?q=fascinating%20article%20about%20cats&autopromptString=Here%20is%20a%20fascinating%20article%20about%20cats%3A). Google gives us SEO-optimized listicles based on the keyword \"fascinating\". Exa just works.\n",
|
||||
"\n",
|
||||
"This notebook goes over how to use Metaphor Search with LangChain.\n",
|
||||
"This notebook goes over how to use Exa Search with LangChain.\n",
|
||||
"\n",
|
||||
"First, get a Metaphor API key and add it as an environment variable. Get 1000 free searches/month by [signing up here](https://platform.metaphor.systems/)."
|
||||
"First, get an Exa API key and add it as an environment variable. Get 1000 free searches/month by [signing up here](https://dashboard.exa.ai/)."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -34,7 +34,88 @@
|
||||
"source": [
|
||||
"import os\n",
|
||||
"\n",
|
||||
"os.environ[\"METAPHOR_API_KEY\"] = \"...\""
|
||||
"os.environ[\"EXA_API_KEY\"] = \"...\""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"And install the integration package"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%pip install --upgrade --quiet langchain-exa\n",
|
||||
"\n",
|
||||
"# and some deps for this notebook\n",
|
||||
"%pip install --upgrade --quiet langchain langchain-openai"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Using ExaSearchRetriever\n",
|
||||
"\n",
|
||||
"ExaSearchRetriever is a retriever that uses Exa Search to retrieve relevant documents."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 10,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"[Result(title='Find Us:', url='https://travelila.com/best-time-to-visit-japan/', id='UFLQGtanQffaDErhngnzgA', score=0.1865834891796112, published_date='2021-01-05', author=None, text='If you are planning to spend your next vacation in Japan, then hold your excitement a bit. It would help if you planned which places you will visit in Japan and the country’s best things. It’s entirel', highlights=None, highlight_scores=None), Result(title='When Is The Best Time of Year To Visit Japan?', url='https://boutiquejapan.com/when-is-the-best-time-of-year-to-visit-japan/', id='70b0IMuaQpshjpBpnwsfUg', score=0.17796635627746582, published_date='2022-09-26', author='Andres Zuleta', text='The good news for travelers is that there is no single best time of year to travel to Japan — yet this makes it hard to decide when to visit, as each season has its own special highlights.When plannin', highlights=None, highlight_scores=None), Result(title='Here is the Best Time to Visit Japan - Cooking Sun', url='https://www.cooking-sun.com/best-time-to-visit-japan/', id='2mh-xvoqGPT-ZRvX9GezNQ', score=0.17497511208057404, published_date='2018-12-17', author='Cooking Sun', text='Japan is a diverse and beautiful country that’s brimming with culture. For some travelers, visiting Japan is a dream come true, since it grazes bucket lists across the globe. One of the best parts abo', highlights=None, highlight_scores=None), Result(title='When to Visit Japan? Bests Times and 2023 Travel Tips', url='https://www.jrailpass.com/blog/when-visit-japan-times', id='KqCnY8fF-nc76n1wNpIo1Q', score=0.17359933257102966, published_date='2020-02-18', author='JRailPass', text='When is the best time to visit Japan? This is a question without a simple answer. Japan is a year-round destination, with interesting activities, attractions, and festivities throughout the year.Your ', highlights=None, highlight_scores=None), Result(title='Complete Guide To Visiting Japan In February 2023: Weather, What To See & Do | LIVE JAPAN travel guide', url='https://livejapan.com/en/article-a0002948/', id='i3nmekOdM8_VBxPfcJmxng', score=0.17215865850448608, published_date='2019-11-13', author='Lucio Maurizi', text='\\n \\n \\n HOME\\n Complete Guide To Visiting Japan In February 2023: Weather, What To See & Do\\n \\n \\n \\n \\n \\n \\n Date published: 13 November 2019 \\n Last updated: 26 January 2021 \\n \\n \\n So you’re planning your tra', highlights=None, highlight_scores=None)]\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"AIMessage(content='Based on the given context, there is no specific best time mentioned to visit Japan. Each season has its own special highlights, and Japan is a year-round destination with interesting activities, attractions, and festivities throughout the year. Therefore, the best time to visit Japan depends on personal preferences and the specific activities or events one wants to experience.')"
|
||||
]
|
||||
},
|
||||
"execution_count": 10,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from langchain_core.prompts import PromptTemplate\n",
|
||||
"from langchain_core.runnables import RunnableParallel, RunnablePassthrough\n",
|
||||
"from langchain_exa import ExaSearchRetriever, TextContentsOptions\n",
|
||||
"from langchain_openai import ChatOpenAI\n",
|
||||
"\n",
|
||||
"# retrieve 5 documents, with content truncated at 1000 characters\n",
|
||||
"retriever = ExaSearchRetriever(\n",
|
||||
" k=5, text_contents_options=TextContentsOptions(max_length=200)\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"prompt = PromptTemplate.from_template(\n",
|
||||
" \"\"\"Answer the following query based on the following context:\n",
|
||||
"query: {query}\n",
|
||||
"<context>\n",
|
||||
"{context}\n",
|
||||
"</context\"\"\"\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"llm = ChatOpenAI()\n",
|
||||
"\n",
|
||||
"chain = (\n",
|
||||
" RunnableParallel({\"context\": retriever, \"query\": RunnablePassthrough()})\n",
|
||||
" | prompt\n",
|
||||
" | llm\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"chain.invoke(\"When is the best time to visit japan?\")"
|
||||
]
|
||||
},
  {
@@ -43,14 +124,14 @@
    "id": "ip5_D9MkodAK"
   },
   "source": [
-    "## Using the Metaphor SDK as LangChain Agent Tools\n",
+    "## Using the Exa SDK as LangChain Agent Tools\n",
    "\n",
-    "The [Metaphor SDK](https://docs.metaphor.systems/) creates a client that can use the Metaphor API to perform three functions:\n",
+    "The [Exa SDK](https://docs.exa.ai/) creates a client that can use the Exa API to perform three functions:\n",
    "- `search`: Given a natural language search query, retrieve a list of search results.\n",
    "- `find_similar`: Given a URL, retrieve a list of search results corresponding to webpages which are similar to the document at the provided URL.\n",
    "- `get_content`: Given a list of document ids fetched from `search` or `find_similar`, get cleaned HTML content for each document.\n",
    "\n",
-    "We can use the `@tool` decorator and docstrings to create LangChain Tool wrappers that tell an LLM agent how to use Metaphor."
+    "We can use the `@tool` decorator and docstrings to create LangChain Tool wrappers that tell an LLM agent how to use Exa."
   ]
  },
  {
@@ -61,7 +142,7 @@
   },
   "outputs": [],
   "source": [
-    "%pip install --upgrade --quiet metaphor-python"
+    "%pip install --upgrade --quiet langchain-exa"
   ]
  },
  {
@@ -72,16 +153,16 @@
   },
   "outputs": [],
   "source": [
+    "from exa_py import Exa\n",
    "from langchain.agents import tool\n",
-    "from metaphor_python import Metaphor\n",
    "\n",
-    "metaphor = Metaphor(api_key=os.environ[\"METAPHOR_API_KEY\"])\n",
+    "exa = Exa(api_key=os.environ[\"EXA_API_KEY\"])\n",
    "\n",
    "\n",
    "@tool\n",
    "def search(query: str):\n",
    "    \"\"\"Search for a webpage based on the query.\"\"\"\n",
-    "    return metaphor.search(f\"{query}\", use_autoprompt=True, num_results=5)\n",
+    "    return exa.search(f\"{query}\", use_autoprompt=True, num_results=5)\n",
    "\n",
    "\n",
    "@tool\n",
@@ -89,7 +170,7 @@
    "    \"\"\"Search for webpages similar to a given URL.\n",
    "    The url passed in should be a URL returned from `search`.\n",
    "    \"\"\"\n",
-    "    return metaphor.find_similar(url, num_results=5)\n",
+    "    return exa.find_similar(url, num_results=5)\n",
    "\n",
    "\n",
    "@tool\n",
@@ -97,7 +178,7 @@
    "    \"\"\"Get the contents of a webpage.\n",
    "    The ids passed in should be a list of ids returned from `search`.\n",
    "    \"\"\"\n",
-    "    return metaphor.get_contents(ids)\n",
+    "    return exa.get_contents(ids)\n",
    "\n",
    "\n",
    "tools = [search, get_contents, find_similar]"
@@ -109,9 +190,9 @@
    "id": "sVe2ca9OodAO"
   },
   "source": [
-    "### Providing Metaphor Tools to an Agent\n",
+    "### Providing Exa Tools to an Agent\n",
    "\n",
-    "We can provide the Metaphor tools we just created to a LangChain `OpenAIFunctionsAgent`. When asked to `Summarize for me a fascinating article about cats`, the agent uses the `search` tool to perform a Metaphor search with an appropriate search query, uses the `get_contents` tool to perform Metaphor content retrieval, and then returns a summary of the retrieved content."
+    "We can provide the Exa tools we just created to a LangChain `OpenAIFunctionsAgent`. When asked to `Summarize for me a fascinating article about cats`, the agent uses the `search` tool to perform an Exa search with an appropriate search query, uses the `get_contents` tool to perform Exa content retrieval, and then returns a summary of the retrieved content."
   ]
  },
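The agent-construction cell itself is not shown in this hunk. For orientation, wiring the tools into an `OpenAIFunctionsAgent` would look roughly like the sketch below; the system message and model choice are assumptions, not the notebook's exact cell.

```python
# Hedged sketch: wiring the Exa tools into an OpenAIFunctionsAgent.
# The system message and model settings below are assumptions.
from langchain.agents import AgentExecutor, OpenAIFunctionsAgent
from langchain_core.messages import SystemMessage
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(temperature=0)
system_message = SystemMessage(
    content="You are a web researcher who uses search engines to look up information."
)
prompt = OpenAIFunctionsAgent.create_prompt(system_message=system_message)
agent = OpenAIFunctionsAgent(llm=llm, tools=tools, prompt=prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

agent_executor.invoke({"input": "Summarize for me a fascinating article about cats."})
```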
  {
@@ -237,9 +318,11 @@
    "id": "e3FHjxT-RoIH"
   },
   "source": [
-    "## Advanced Metaphor Features\n",
+    "## Advanced Exa Features\n",
    "\n",
-    "Metaphor supports powerful filters by domain and date. We can provide a more powerful `search` tool to the agent that lets it decide to apply filters if they are useful for the objective. See all of Metaphor's search features [here](https://github.com/metaphorsystems/metaphor-python/)."
+    "Exa supports powerful filters by domain and date. We can provide a more powerful `search` tool to the agent that lets it decide to apply filters if they are useful for the objective. See all of Exa's search features [here](https://github.com/metaphorsystems/metaphor-python/).\n",
+    "\n",
+    "[//]: # \"TODO(erick): switch metaphor github link to exa github link when sdk published\""
   ]
  },
  {
@@ -250,10 +333,10 @@
   },
   "outputs": [],
   "source": [
+    "from exa_py import Exa\n",
    "from langchain.agents import tool\n",
-    "from metaphor_python import Metaphor\n",
    "\n",
-    "metaphor = Metaphor(api_key=os.environ[\"METAPHOR_API_KEY\"])\n",
+    "exa = Exa(api_key=os.environ[\"EXA_API_KEY\"])\n",
    "\n",
    "\n",
    "@tool\n",
@@ -262,7 +345,7 @@
    "    Set the optional include_domains (list[str]) parameter to restrict the search to a list of domains.\n",
    "    Set the optional start_published_date (str) parameter to restrict the search to documents published after the date (YYYY-MM-DD).\n",
    "    \"\"\"\n",
-    "    return metaphor.search(\n",
+    "    return exa.search_and_contents(\n",
    "        f\"{query}\",\n",
    "        use_autoprompt=True,\n",
    "        num_results=5,\n",
@@ -276,7 +359,7 @@
    "    \"\"\"Search for webpages similar to a given URL.\n",
    "    The url passed in should be a URL returned from `search`.\n",
    "    \"\"\"\n",
-    "    return metaphor.find_similar(url, num_results=5)\n",
+    "    return exa.find_similar_and_contents(url, num_results=5)\n",
    "\n",
    "\n",
    "@tool\n",
@@ -284,7 +367,7 @@
    "    \"\"\"Get the contents of a webpage.\n",
    "    The ids passed in should be a list of ids returned from `search`.\n",
    "    \"\"\"\n",
-    "    return metaphor.get_contents(ids)\n",
+    "    return exa.get_contents(ids)\n",
    "\n",
    "\n",
    "tools = [search, get_contents, find_similar]"
@@ -449,7 +532,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-    "version": "3.10.0"
+    "version": "3.11.4"
   }
  },
  "nbformat": 4,

docs/docs/integrations/tools/ionic_shopping.ipynb (new file, 181 lines)
@@ -0,0 +1,181 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "b3G2HfJwwAwc"
   },
   "source": [
    "# Ionic Shopping Tool\n",
    "\n",
    "[Ionic](https://www.ioniccommerce.com/) is a plug-and-play e-commerce marketplace for AI assistants. By including the [Ionic Tool](https://github.com/ioniccommerce/ionic_langchain) in your agent, you effortlessly give your users the ability to shop and transact directly within your agent, and you get a cut of the transaction.\n",
    "\n",
    "This is a basic Jupyter notebook demonstrating how to integrate the Ionic Tool into an agent. For more information on setting up your agent with Ionic, see the Ionic [documentation](https://docs.ioniccommerce.com/introduction).\n",
    "\n",
    "**Note: The ionic-langchain package is maintained by the Ionic Commerce team, not the LangChain maintainers.**\n",
    "\n",
    "---\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "EIO5SfIb5FiB"
   },
   "source": [
    "## Setup"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "wsPt35XcSuWM"
   },
   "outputs": [],
   "source": [
    "%pip install --upgrade --quiet langchain langchain_openai langchainhub"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "OME5aldfS5FJ"
   },
   "outputs": [],
   "source": [
    "%pip install --upgrade --quiet ionic-langchain"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "g1UbcClL5IJR"
   },
   "source": [
    "## Setup Agent"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "5vOjSwyQLguq",
    "outputId": "e5cda856-1298-4b51-aa93-6e9f22be7279"
   },
   "outputs": [],
   "source": [
    "from ionic_langchain.tool import Ionic, IonicTool\n",
    "from langchain import hub\n",
    "from langchain.agents import AgentExecutor, Tool, create_react_agent\n",
    "from langchain_openai import OpenAI\n",
    "\n",
    "# Based on the ReAct agent type:\n",
    "# https://python.langchain.com/docs/modules/agents/agent_types/react\n",
    "# Please reach out to support@ionicapi.com for help with additional agent types.\n",
    "\n",
    "open_ai_key = \"YOUR KEY HERE\"\n",
    "model = \"gpt-3.5-turbo-instruct\"\n",
    "temperature = 0.6\n",
    "\n",
    "llm = OpenAI(openai_api_key=open_ai_key, model_name=model, temperature=temperature)\n",
    "\n",
    "\n",
    "ionic_tool = IonicTool().tool()\n",
    "\n",
    "\n",
    "# The tool comes with its own prompt,\n",
    "# but you may also update it directly via the description attribute:\n",
    "\n",
    "ionic_tool.description = str(\n",
    "    \"\"\"\n",
    "Ionic is an e-commerce shopping tool. Assistant uses the Ionic Commerce Shopping Tool to find, discover, and compare products from thousands of online retailers. Assistant should use the tool when the user is looking for a product recommendation or trying to find a specific product.\n",
    "\n",
    "The user may specify the number of results, minimum price, and maximum price for which they want to see results.\n",
    "Ionic Tool input is a comma-separated string of values:\n",
    "  - query string (required, must not include commas)\n",
    "  - number of results (default to 4, no more than 10)\n",
    "  - minimum price in cents ($5 becomes 500)\n",
    "  - maximum price in cents\n",
    "For example, if looking for coffee beans between 5 and 10 dollars, the tool input would be `coffee beans, 5, 500, 1000`.\n",
    "\n",
    "Return them as a markdown-formatted list with each recommendation from tool results, being sure to include the full PDP URL. For example:\n",
    "\n",
    "1. Product 1: [Price] -- link\n",
    "2. Product 2: [Price] -- link\n",
    "3. Product 3: [Price] -- link\n",
    "4. Product 4: [Price] -- link\n",
    "\"\"\"\n",
    ")\n",
    "\n",
    "tools = [ionic_tool]\n",
    "\n",
    "# default prompt for create_react_agent\n",
    "prompt = hub.pull(\"hwchase17/react\")\n",
    "\n",
    "agent = create_react_agent(\n",
    "    llm,\n",
    "    tools,\n",
    "    prompt=prompt,\n",
    ")\n",
    "\n",
    "agent_executor = AgentExecutor(\n",
    "    agent=agent, tools=tools, handle_parsing_errors=True, verbose=True, max_iterations=5\n",
    ")"
   ]
  },
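The tool-input contract in the description above is just a comma-separated string. As a sanity check of that contract, it decomposes as in the sketch below; the parser is purely illustrative (Ionic's real parsing lives inside ionic-langchain), it only shows the shape of the input.

```python
# Illustrative parser for the tool-input contract described above.
# Not Ionic's implementation; it only demonstrates the input format.
def parse_ionic_input(raw: str):
    parts = [p.strip() for p in raw.split(",")]
    query = parts[0]  # required, must not contain commas
    num_results = int(parts[1]) if len(parts) > 1 else 4
    min_price_cents = int(parts[2]) if len(parts) > 2 else None
    max_price_cents = int(parts[3]) if len(parts) > 3 else None
    return query, num_results, min_price_cents, max_price_cents

assert parse_ionic_input("coffee beans, 5, 500, 1000") == ("coffee beans", 5, 500, 1000)
```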
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Eb78bHgb5O6u"
   },
   "source": [
    "## Run"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 197
    },
    "id": "FxELjaR9URF-",
    "outputId": "f4bf30ec-64b8-4970-dea1-f0720c60681e"
   },
   "outputs": [],
   "source": [
    "input = (\n",
    "    \"I'm looking for a new 4k monitor, can you find me some options for less than $1000?\"\n",
    ")\n",
    "agent_executor.invoke({\"input\": input})"
   ]
  }
 ],
 "metadata": {
  "colab": {
   "provenance": []
  },
  "kernelspec": {
   "display_name": "Python 3",
   "name": "python3"
  },
  "language_info": {
   "name": "python",
   "version": "3.11.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 0
}
@@ -19,7 +19,7 @@
   "metadata": {},
   "outputs": [],
   "source": [
-    "from langchain.tools import PubmedQueryRun"
+    "from langchain_community.tools.pubmed.tool import PubmedQueryRun"
   ]
  },
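For context on the import change above, a minimal use of the tool looks like the sketch below. The query string is illustrative, and the tool expects the `xmltodict` package to be installed.

```python
# Minimal usage sketch for the tool imported above.
from langchain_community.tools.pubmed.tool import PubmedQueryRun

tool = PubmedQueryRun()
# Returns a text summary of matching PubMed articles.
print(tool.invoke("What causes long COVID?"))
```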
  {

docs/docs/integrations/tools/wikidata.ipynb (new file, 73 lines)
@@ -0,0 +1,73 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "c4b39799",
   "metadata": {},
   "source": [
    "# Wikidata\n",
    "\n",
    ">[Wikidata](https://wikidata.org/) is a free and open knowledge base that can be read and edited by both humans and machines. Wikidata is one of the world's largest open knowledge bases.\n",
    "\n",
    "First, you need to install the `wikibase-rest-api-client` and `mediawikiapi` Python packages."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3d9195d4",
   "metadata": {
    "vscode": {
     "languageId": "shellscript"
    }
   },
   "outputs": [],
   "source": [
    "%pip install --upgrade --quiet wikibase-rest-api-client mediawikiapi"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "955988a1-ebc2-4c9a-9298-c493fe842de1",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_community.tools.wikidata.tool import WikidataAPIWrapper, WikidataQueryRun\n",
    "\n",
    "wikidata = WikidataQueryRun(api_wrapper=WikidataAPIWrapper())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "9926a8a7-3e4e-4a97-ba43-7e5a274b9561",
   "metadata": {},
   "outputs": [],
   "source": [
    "print(wikidata.run(\"Alan Turing\"))"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.10"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
@@ -7,22 +7,12 @@
   },
   "source": [
    "# BigQuery Vector Search\n",
-    "> **BigQueryVectorSearch**:\n",
-    "BigQuery vector search lets you use GoogleSQL to do semantic search, using vector indexes for fast approximate results, or using brute force for exact results.\n",
+    "> [**BigQuery Vector Search**](https://cloud.google.com/bigquery/docs/vector-search-intro) lets you use GoogleSQL to do semantic search, using vector indexes for fast approximate results, or using brute force for exact results.\n",
    "\n",
    "\n",
    "This tutorial illustrates how to work with an end-to-end data and embedding management system in LangChain, and provides scalable semantic search in BigQuery."
   ]
  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "This is a **private preview (experimental)** feature. Please submit this\n",
-    "[enrollment form](https://docs.google.com/forms/d/18yndSb4dTf2H0orqA9N7NAchQEDQekwWiD5jYfEkGWk/viewform?edit_requested=true)\n",
-    "if you want to enroll BigQuery Vector Search Experimental."
-   ]
-  },
  {
   "cell_type": "markdown",
   "metadata": {
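The usage cells are outside this hunk. For orientation, constructing the store described above looks roughly like the sketch below; the project, dataset, table, and embedding-model names are placeholders, and the constructor shape is an assumption based on `langchain_community` at the time of this commit, not the notebook's exact code.

```python
# Hedged sketch; all IDs below are placeholders, not values from the notebook.
from langchain_community.vectorstores import BigQueryVectorSearch
from langchain_google_vertexai import VertexAIEmbeddings

embedding = VertexAIEmbeddings(
    model_name="textembedding-gecko@latest", project="my-gcp-project"
)
store = BigQueryVectorSearch(
    project_id="my-gcp-project",
    dataset_name="my_dataset",
    table_name="doc_and_vectors",
    location="us",
    embedding=embedding,
)
store.add_texts(["BigQuery vector search speaks GoogleSQL."])
docs = store.similarity_search("semantic search in GoogleSQL", k=2)
```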
@@ -49,14 +49,6 @@
   "id": "ae9fcf3e",
   "metadata": {},
   "outputs": [
-    {
-     "name": "stderr",
-     "output_type": "stream",
-     "text": [
-      "/Users/jeff/.pyenv/versions/3.10.10/lib/python3.10/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
-      "  from .autonotebook import tqdm as notebook_tqdm\n"
-     ]
-    },
    {
     "name": "stdout",
     "output_type": "stream",
@@ -416,7 +416,7 @@
   "metadata": {},
   "source": [
    "## Similarity Search with filtering\n",
-    "FAISS vectorstore can also support filtering, since the FAISS does not natively support filtering we have to do it manually. This is done by first fetching more results than `k` and then filtering them. You can filter the documents based on metadata. You can also set the `fetch_k` parameter when calling any search method to set how many documents you want to fetch before filtering. Here is a small example:"
+    "The FAISS vectorstore also supports filtering. Since FAISS does not natively support filtering, this is done manually by first fetching more results than `k` and then filtering them. The filter is either a callable that takes a metadata dict as input and returns a bool, or a metadata dict where each missing key is ignored and each present key's value must be in the given list of values. You can also set the `fetch_k` parameter when calling any search method to set how many documents you want to fetch before filtering. Here is a small example:"
   ]
  },
  {
@@ -480,6 +480,8 @@
   ],
   "source": [
    "results_with_scores = db.similarity_search_with_score(\"foo\", filter=dict(page=1))\n",
+    "# Or with a callable:\n",
+    "# results_with_scores = db.similarity_search_with_score(\"foo\", filter=lambda d: d[\"page\"] == 1)\n",
    "for doc, score in results_with_scores:\n",
    "    print(f\"Content: {doc.page_content}, Metadata: {doc.metadata}, Score: {score}\")"
   ]
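Since the paragraph above mentions `fetch_k` but the example does not use it, here is a hedged variant showing it; the query and values are illustrative.

```python
# Fetch 20 candidates, then filter down to the top k=2 that match the metadata.
results = db.similarity_search(
    "foo",
    k=2,
    filter=dict(page=1),
    fetch_k=20,  # number of documents fetched before the filter is applied
)
```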
Some files were not shown because too many files have changed in this diff.