Mirror of https://github.com/hwchase17/langchain.git
Synced 2026-04-20 13:28:53 +00:00

Compare commits: v0.1.2...bagatur/do (88 commits)
| SHA1 |
|---|
| 0851476466 |
| ce595f0203 |
| fdbfa6b2c8 |
| 643fb3ab50 |
| 8d990ba67b |
| 63da14d620 |
| 8d299645f9 |
| dfd94fb2f0 |
| 0b740ebd49 |
| 13cf4594f4 |
| 6004e9706f |
| 66aafc0573 |
| 9e95699277 |
| b3ed98dec0 |
| 3f38e1a457 |
| 61da2ff24c |
| d628a80a5d |
| 4c7755778d |
| 2b2285dac0 |
| 476bf8b763 |
| 019b6ebe8d |
| 80fcc50c65 |
| 5c6e123757 |
| 0e2e7d8b83 |
| d898d2f07b |
| ff3163297b |
| 4ec3fe4680 |
| 4e160540ff |
| c69f599594 |
| 95ee69a301 |
| e135e5257c |
| 90f5a1c40e |
| 92e6a641fd |
| 9ce177580a |
| 20fcd49348 |
| cfc225ecb3 |
| 26b2ad6d5b |
| e529939c54 |
| afb25eeec4 |
| 51c8ef6af4 |
| c3530f1c11 |
| ba326b98d0 |
| 54149292f8 |
| ef6a335570 |
| 1f4ac62dee |
| 39d1cbfecf |
| d0a8082188 |
| 5de59f9236 |
| 226fe645f1 |
| 4b7969efc5 |
| fb41b68ea1 |
| 3b0226b2c6 |
| c98994c3c9 |
| c88750d54b |
| e5672bc944 |
| 404abf139a |
| a500527030 |
| b9e7f6f38a |
| d6275e47f2 |
| 5694728816 |
| a950fa0487 |
| 1011b681dc |
| b26a22f307 |
| 8da34118bc |
| d1b4ead87c |
| fbe592a5ce |
| d511366dd3 |
| 774e543e1f |
| b9f5104e6c |
| 35ec0bbd3b |
| 2ac3a82d85 |
| cfe95ab085 |
| dd5b8107b1 |
| 873de14cd8 |
| 6b2a57161a |
| aad2aa7188 |
| 1b9001db47 |
| 01c2f27ffa |
| 369e90d427 |
| a1c0cf21c9 |
| 7ecd2f22ac |
| 8569b8f680 |
| fc196cab12 |
| eac91b60c9 |
| 85e8423312 |
| de209af533 |
| 54f90fc6bc |
| 1445ac95e8 |
@@ -1,7 +1,16 @@
name: "\U0001F680 Feature request"
description: Submit a proposal/request for a new LangChain feature
labels: ["02 Feature Request"]
labels: ["Idea"]
description: Suggest ideas for LangChain features and improvements.
body:
- type: checkboxes
id: checks
attributes:
label: Checked
description: Please confirm and check all the following options.
options:
- label: I searched existing ideas and did not find a similar one.
required: true
- label: I added a very descriptive title to this idea.
required: true
- type: textarea
id: feature-request
validations:
@@ -19,12 +28,3 @@ body:
label: Motivation
description: |
Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too.

- type: textarea
id: contribution
validations:
required: true
attributes:
label: Your contribution
description: |
Is there any way that you could help, e.g. by submitting a PR? Make sure to read the [Contributing Guide](https://python.langchain.com/docs/contributing/)
69 .github/ISSUE_TEMPLATE/bug-report.yml (vendored)
@@ -1,5 +1,5 @@
name: "\U0001F41B Bug Report"
description: Submit a bug report to help us improve LangChain. To report a security issue, please instead use the security option below.
description: Report a bug in LangChain. To report a security issue, please instead use the security option below. For questions, please use the GitHub Discussions.
labels: ["02 Bug Report"]
body:
- type: markdown
@@ -7,6 +7,11 @@ body:
value: >
Thank you for taking the time to file a bug report.

Use this to report bugs in LangChain.

If you're not certain that your issue is due to a bug in LangChain, please use [GitHub Discussions](https://github.com/langchain-ai/langchain/discussions)
to ask for help with your issue.

Relevant links to check before filing a bug report to see if your issue has already been reported, fixed or
if there's another way to solve your problem:

@@ -14,7 +19,8 @@
[API Reference](https://api.python.langchain.com/en/stable/),
[GitHub search](https://github.com/langchain-ai/langchain),
[LangChain Github Discussions](https://github.com/langchain-ai/langchain/discussions),
[LangChain Github Issues](https://github.com/langchain-ai/langchain/issues?q=is%3Aissue)
[LangChain Github Issues](https://github.com/langchain-ai/langchain/issues?q=is%3Aissue),
[LangChain ChatBot](https://chat.langchain.com/)
- type: checkboxes
id: checks
attributes:
@@ -27,6 +33,8 @@
required: true
- label: I used the GitHub search to find a similar question and didn't find it.
required: true
- label: I am sure that this is a bug in LangChain rather than my code.
required: true
- type: textarea
id: reproduction
validations:
@@ -38,10 +46,12 @@

If a maintainer can copy it, run it, and see it right away, there's a much higher chance that you'll be able to get help.

If you're including an error message, please include the full stack trace not just the last error.
**Important!**

**Important!** Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Avoid screenshots when possible, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.
* Use code tags (e.g., ```python ... ```) to correctly [format your code](https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting).
* INCLUDE the language label (e.g. `python`) after the first three backticks to enable syntax highlighting. (e.g., ```python rather than ```).
* Reduce your code to the minimum required to reproduce the issue if possible. This makes it much easier for others to help you.
* Avoid screenshots when possible, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.

placeholder: |
The following code:
@@ -55,9 +65,16 @@
chain = RunnableLambda(bad_code)
chain.invoke('Hello!')
```

Include both the error and the full stack trace if reporting an exception!

- type: textarea
id: error
validations:
required: false
attributes:
label: Error Message and Stack Trace (if applicable)
description: |
If you are reporting an error, please include the full error message and stack trace.
placeholder: |
Exception + full stack trace
- type: textarea
id: description
attributes:
@@ -76,28 +93,26 @@
id: system-info
attributes:
label: System Info
description: Please share your system info with us.
description: |
Please share your system info with us.

"pip freeze | grep langchain"
platform (windows / linux / mac)
python version

OR if you're on a recent version of langchain-core you can paste the output of:

python -m langchain_core.sys_info
placeholder: |
"pip freeze | grep langchain"
platform
python version

Alternatively, if you're on a recent version of langchain-core you can paste the output of:

python -m langchain_core.sys_info

These will only surface LangChain packages, don't forget to include any other relevant
packages you're using (if you're not sure what's relevant, you can paste the entire output of `pip freeze`).
validations:
required: true
- type: checkboxes
id: related-components
attributes:
label: Related Components
description: "Select the components related to the issue (if applicable):"
options:
- label: "LLMs/Chat Models"
- label: "Embedding Models"
- label: "Prompts / Prompt Templates / Prompt Selectors"
- label: "Output Parsers"
- label: "Document Loaders"
- label: "Vector Stores / Retrievers"
- label: "Memory"
- label: "Agents / Agent Executors"
- label: "Tools / Toolkits"
- label: "Chains"
- label: "Callbacks/Tracing"
- label: "Async"
3 .github/ISSUE_TEMPLATE/config.yml (vendored)
@@ -7,6 +7,9 @@ contact_links:
- name: Discord
url: https://discord.gg/6adMQxSpJS
about: General community discussions
- name: Feature Request
url: https://www.github.com/langchain-ai/langchain/discussions/categories/ideas
about: Suggest a feature or an idea
- name: Show and tell
about: Show what you built with LangChain
url: https://www.github.com/langchain-ai/langchain/discussions/categories/show-and-tell
@@ -85,21 +85,10 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": null,
"id": "2448b6c2",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Graph(nodes={'7308e6063c6d40818c5a0cc1cc7444f2': Node(id='7308e6063c6d40818c5a0cc1cc7444f2', data=<class 'pydantic.main.RunnableParallel<context,question>Input'>), '292bbd8021d44ec3a31fbe724d9002c1': Node(id='292bbd8021d44ec3a31fbe724d9002c1', data=<class 'pydantic.main.RunnableParallel<context,question>Output'>), '9212f219cf05488f95229c56ea02b192': Node(id='9212f219cf05488f95229c56ea02b192', data=VectorStoreRetriever(tags=['FAISS', 'OpenAIEmbeddings'], vectorstore=<langchain_community.vectorstores.faiss.FAISS object at 0x117334f70>)), 'c7a8e65fa5cf44b99dbe7d1d6e36886f': Node(id='c7a8e65fa5cf44b99dbe7d1d6e36886f', data=RunnablePassthrough()), '818b9bfd40a341008373d5b9f9d0784b': Node(id='818b9bfd40a341008373d5b9f9d0784b', data=ChatPromptTemplate(input_variables=['context', 'question'], messages=[HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['context', 'question'], template='Answer the question based only on the following context:\\n{context}\\n\\nQuestion: {question}\\n'))])), 'b9f1d3ddfa6b4334a16ea439df22b11e': Node(id='b9f1d3ddfa6b4334a16ea439df22b11e', data=ChatOpenAI(client=<class 'openai.api_resources.chat_completion.ChatCompletion'>, openai_api_key='sk-**********', openai_proxy='')), '2bf84f6355c44731848345ca7d0f8ab9': Node(id='2bf84f6355c44731848345ca7d0f8ab9', data=StrOutputParser()), '1aeb2da5da5a43bb8771d3f338a473a2': Node(id='1aeb2da5da5a43bb8771d3f338a473a2', data=<class 'pydantic.main.StrOutputParserOutput'>)}, edges=[Edge(source='7308e6063c6d40818c5a0cc1cc7444f2', target='9212f219cf05488f95229c56ea02b192'), Edge(source='9212f219cf05488f95229c56ea02b192', target='292bbd8021d44ec3a31fbe724d9002c1'), Edge(source='7308e6063c6d40818c5a0cc1cc7444f2', target='c7a8e65fa5cf44b99dbe7d1d6e36886f'), Edge(source='c7a8e65fa5cf44b99dbe7d1d6e36886f', target='292bbd8021d44ec3a31fbe724d9002c1'), Edge(source='292bbd8021d44ec3a31fbe724d9002c1', target='818b9bfd40a341008373d5b9f9d0784b'), Edge(source='818b9bfd40a341008373d5b9f9d0784b', target='b9f1d3ddfa6b4334a16ea439df22b11e'), Edge(source='2bf84f6355c44731848345ca7d0f8ab9', target='1aeb2da5da5a43bb8771d3f338a473a2'), Edge(source='b9f1d3ddfa6b4334a16ea439df22b11e', target='2bf84f6355c44731848345ca7d0f8ab9')])"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"chain.get_graph()"
]
File diff suppressed because it is too large.

1393 docs/docs/expression_language/streaming.ipynb (new file)
File diff suppressed because it is too large.
@@ -35,6 +35,22 @@
"from langchain_openai import OpenAI"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3dd69cb4",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"# get a new token: https://dashboard.cohere.ai/\n",
"os.environ[\"COHERE_API_KEY\"] = getpass.getpass(\"Cohere API Key:\")\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"Open API Key:\")\n",
"os.environ[\"HUGGINGFACEHUB_API_TOKEN\"] = getpass.getpass(\"Hugging Face API Key:\")"
]
},
{
"cell_type": "code",
"execution_count": 2,
@@ -44,7 +60,7 @@
"source": [
"llms = [\n",
" OpenAI(temperature=0),\n",
" Cohere(model=\"command-xlarge-20221108\", max_tokens=20, temperature=0),\n",
" Cohere(temperature=0),\n",
" HuggingFaceHub(repo_id=\"google/flan-t5-xl\", model_kwargs={\"temperature\": 1}),\n",
"]"
]
@@ -160,7 +176,7 @@
" llm=open_ai_llm, search_chain=search, verbose=True\n",
")\n",
"\n",
"cohere_llm = Cohere(temperature=0, model=\"command-xlarge-20221108\")\n",
"cohere_llm = Cohere(temperature=0)\n",
"search = SerpAPIWrapper()\n",
"self_ask_with_search_cohere = SelfAskWithSearchChain(\n",
" llm=cohere_llm, search_chain=search, verbose=True\n",
@@ -241,14 +257,6 @@
"source": [
"model_lab.compare(\"What is the hometown of the reigning men's U.S. Open champion?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "94159131",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
138 docs/docs/integrations/callbacks/comet_tracing.ipynb (new file)
@@ -0,0 +1,138 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "5371a9bb",
"metadata": {},
"source": [
"# Comet Tracing\n",
"\n",
"There are two ways to trace your LangChains executions with Comet:\n",
"\n",
"1. Setting the `LANGCHAIN_COMET_TRACING` environment variable to \"true\". This is the recommended way.\n",
"2. Import the `CometTracer` manually and pass it explicitely."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "17c04cc6-c93d-4b6c-a033-e897577f4ed1",
"metadata": {
"ExecuteTime": {
"end_time": "2023-05-18T12:47:46.580776Z",
"start_time": "2023-05-18T12:47:46.577833Z"
},
"tags": []
},
"outputs": [],
"source": [
"import os\n",
"\n",
"import comet_llm\n",
"\n",
"os.environ[\"LANGCHAIN_COMET_TRACING\"] = \"true\"\n",
"\n",
"# Connect to Comet if no API Key is set\n",
"comet_llm.init()\n",
"\n",
"# comet documentation to configure comet using env variables\n",
"# https://www.comet.com/docs/v2/api-and-sdk/llm-sdk/configuration/\n",
"# here we are configuring the comet project\n",
"os.environ[\"COMET_PROJECT_NAME\"] = \"comet-example-langchain-tracing\"\n",
"\n",
"from langchain.agents import AgentType, initialize_agent, load_tools\n",
"from langchain.llms import OpenAI"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1b62cd48",
"metadata": {
"ExecuteTime": {
"end_time": "2023-05-18T12:47:47.445229Z",
"start_time": "2023-05-18T12:47:47.436424Z"
},
"tags": []
},
"outputs": [],
"source": [
"# Agent run with tracing. Ensure that OPENAI_API_KEY is set appropriately to run this example.\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"tools = load_tools([\"llm-math\"], llm=llm)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bfa16b79-aa4b-4d41-a067-70d1f593f667",
"metadata": {
"ExecuteTime": {
"end_time": "2023-05-18T12:48:01.816137Z",
"start_time": "2023-05-18T12:47:49.109574Z"
},
"tags": []
},
"outputs": [],
"source": [
"agent = initialize_agent(\n",
" tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True\n",
")\n",
"\n",
"agent.run(\"What is 2 raised to .123243 power?\") # this should be traced\n",
"# An url for the chain like the following should print in your console:\n",
"# https://www.comet.com/<workspace>/<project_name>\n",
"# The url can be used to view the LLM chain in Comet."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5e212e7d",
"metadata": {},
"outputs": [],
"source": [
"# Now, we unset the environment variable and use a context manager.\n",
"if \"LANGCHAIN_COMET_TRACING\" in os.environ:\n",
" del os.environ[\"LANGCHAIN_COMET_TRACING\"]\n",
"\n",
"from langchain.callbacks.tracers.comet import CometTracer\n",
"\n",
"tracer = CometTracer()\n",
"\n",
"# Recreate the LLM, tools and agent and passing the callback to each of them\n",
"llm = OpenAI(temperature=0)\n",
"tools = load_tools([\"llm-math\"], llm=llm)\n",
"agent = initialize_agent(\n",
" tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True\n",
")\n",
"\n",
"agent.run(\n",
" \"What is 2 raised to .123243 power?\", callbacks=[tracer]\n",
") # this should be traced"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
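The notebook above enables Comet tracing by setting the `LANGCHAIN_COMET_TRACING` environment variable, then deletes it before switching to an explicit `CometTracer`. A stdlib-only sketch of that enable/disable pattern (the helper `tracing_enabled` is ours for illustration, not part of LangChain or Comet):

```python
import os

# Hypothetical helper showing the env-var toggle the notebook relies on:
# implicit tracing is on only while LANGCHAIN_COMET_TRACING equals "true".
def tracing_enabled() -> bool:
    return os.environ.get("LANGCHAIN_COMET_TRACING", "").lower() == "true"

os.environ["LANGCHAIN_COMET_TRACING"] = "true"
print(tracing_enabled())  # True while the variable is set

# Unsetting the variable (as the notebook does before using CometTracer)
# turns implicit tracing back off.
os.environ.pop("LANGCHAIN_COMET_TRACING", None)
print(tracing_enabled())
```

The same flag can then be toggled per process or per test without touching any tracer objects.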
@@ -46,7 +46,7 @@ thoughts and actions live in your app.
```python
from langchain_openai import OpenAI
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.callbacks import StreamlitCallbackHandler
from langchain_community.callbacks import StreamlitCallbackHandler
import streamlit as st

llm = OpenAI(temperature=0, streaming=True)
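The hunk above only moves the `StreamlitCallbackHandler` import to `langchain_community`. The callback idea behind such handlers, an object that receives events as tokens stream in, can be sketched with the standard library alone (the class and method names below are illustrative, not LangChain's actual callback API):

```python
# Minimal sketch of the callback pattern behind handlers like
# StreamlitCallbackHandler: a producer pushes events, handlers react to each.
# CollectingHandler / on_new_token / fake_stream are hypothetical names.
class CollectingHandler:
    def __init__(self):
        self.tokens = []

    def on_new_token(self, token: str) -> None:
        # A real handler might render the token in a UI instead of storing it.
        self.tokens.append(token)

def fake_stream(text: str, handlers) -> None:
    # Stand-in for a streaming LLM: emit one "token" per word.
    for token in text.split():
        for handler in handlers:
            handler.on_new_token(token)

handler = CollectingHandler()
fake_stream("thoughts and actions live in your app", [handler])
print(" ".join(handler.tokens))  # thoughts and actions live in your app
```

Swapping `CollectingHandler` for a UI-rendering handler changes where tokens go without changing the producer.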
@@ -22,44 +22,84 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"id": "d4a7c55d-b235-4ca4-a579-c90cc9570da9",
"metadata": {
"ExecuteTime": {
"end_time": "2024-01-19T11:25:00.590587Z",
"start_time": "2024-01-19T11:25:00.127293Z"
},
"tags": []
},
"outputs": [],
"source": [
"from langchain.schema import HumanMessage\n",
"from langchain_community.chat_models import ChatAnthropic"
"from langchain_community.chat_models import ChatAnthropic\n",
"from langchain_core.prompts import ChatPromptTemplate"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 2,
"id": "70cf04e8-423a-4ff6-8b09-f11fb711c817",
"metadata": {
"ExecuteTime": {
"end_time": "2024-01-19T11:25:04.349676Z",
"start_time": "2024-01-19T11:25:03.964930Z"
},
"tags": []
},
"outputs": [],
"source": [
"chat = ChatAnthropic()"
"chat = ChatAnthropic(temperature=0, model_name=\"claude-2\")"
]
},
{
"cell_type": "markdown",
"id": "d1f9df276476f0bc",
"metadata": {
"collapsed": false
},
"source": [
"The code provided assumes that your ANTHROPIC_API_KEY is set in your environment variables. If you would like to manually specify your API key and also choose a different model, you can use the following code:\n",
"```python\n",
"chat = ChatAnthropic(temperature=0, anthropic_api_key=\"YOUR_API_KEY\", model_name=\"claude-instant-1.2\")\n",
"\n",
"```\n",
"Please note that the default model is \"claude-2,\" and you can check the available models at [here](https://docs.anthropic.com/claude/reference/selecting-a-model)."
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 3,
"id": "8199ef8f-eb8b-4253-9ea0-6c24a013ca4c",
"metadata": {
"ExecuteTime": {
"end_time": "2024-01-19T11:25:07.274418Z",
"start_time": "2024-01-19T11:25:05.898031Z"
},
"tags": []
},
"outputs": [],
"outputs": [
{
"data": {
"text/plain": "AIMessage(content=' 저는 파이썬을 좋아합니다.')"
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" HumanMessage(\n",
" content=\"Translate this sentence from English to French. I love programming.\"\n",
" )\n",
"]\n",
"chat.invoke(messages)"
"system = \"You are a helpful assistant that translates {input_language} to {output_language}.\"\n",
"human = \"{text}\"\n",
"prompt = ChatPromptTemplate.from_messages([(\"system\", system), (\"human\", human)])\n",
"\n",
"chain = prompt | chat\n",
"chain.invoke({\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"Korean\",\n",
" \"text\": \"I love Python\",\n",
"})"
]
},
{
@@ -72,44 +112,78 @@
},
{
"cell_type": "code",
"execution_count": null,
"id": "93a21c5c-6ef9-4688-be60-b2e1f94842fb",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.callbacks.manager import CallbackManager\n",
"from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 4,
"id": "c5fac0e9-05a4-4fc1-a3b3-e5bbb24b971b",
"metadata": {
"ExecuteTime": {
"end_time": "2024-01-19T11:25:10.448733Z",
"start_time": "2024-01-19T11:25:08.866277Z"
},
"tags": []
},
"outputs": [],
"outputs": [
{
"data": {
"text/plain": "AIMessage(content=\" Why don't bears like fast food? Because they can't catch it!\")"
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"await chat.ainvoke([messages])"
"chat = ChatAnthropic(temperature=0, model_name=\"claude-2\")\n",
"prompt = ChatPromptTemplate.from_messages([(\"human\", \"Tell me a joke about {topic}\")])\n",
"chain = prompt | chat\n",
"await chain.ainvoke({\"topic\": \"bear\"})"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 5,
"id": "025be980-e50d-4a68-93dc-c9c7b500ce34",
"metadata": {
"ExecuteTime": {
"end_time": "2024-01-19T11:25:24.438696Z",
"start_time": "2024-01-19T11:25:14.687480Z"
},
"tags": []
},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" Here are some of the most famous tourist attractions in Japan:\n",
"\n",
"- Tokyo - Tokyo Tower, Tokyo Skytree, Imperial Palace, Sensoji Temple, Meiji Shrine, Shibuya Crossing\n",
"\n",
"- Kyoto - Kinkakuji (Golden Pavilion), Fushimi Inari Shrine, Kiyomizu-dera Temple, Arashiyama Bamboo Grove, Gion Geisha District\n",
"\n",
"- Osaka - Osaka Castle, Dotonbori, Universal Studios Japan, Osaka Aquarium Kaiyukan \n",
"\n",
"- Hiroshima - Hiroshima Peace Memorial Park and Museum, Itsukushima Shrine (Miyajima Island)\n",
"\n",
"- Mount Fuji - Iconic and famous mountain, popular for hiking and viewing from places like Hakone and Kawaguchiko Lake\n",
"\n",
"- Himeji - Himeji Castle, one of Japan's most impressive feudal castles\n",
"\n",
"- Nara - Todaiji Temple, Nara Park with its bowing deer, Horyuji Temple with some of world's oldest wooden structures \n",
"\n",
"- Nikko - Elaborate shrines and temples nestled around Nikko National Park\n",
"\n",
"- Sapporo - Snow"
]
}
],
"source": [
"chat = ChatAnthropic(\n",
" streaming=True,\n",
" verbose=True,\n",
" callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),\n",
"chat = ChatAnthropic(temperature=0.3, model_name=\"claude-2\")\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [(\"human\", \"Give me a list of famous tourist attractions in Japan\")]\n",
")\n",
"chat.stream(messages)"
"chain = prompt | chat\n",
"for chunk in chain.stream({}):\n",
" print(chunk.content, end=\"\", flush=True)"
]
},
{
@@ -134,15 +208,130 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 6,
"id": "07c47c2a",
"metadata": {},
"outputs": [],
"metadata": {
"ExecuteTime": {
"end_time": "2024-01-19T11:25:25.288133Z",
"start_time": "2024-01-19T11:25:24.438968Z"
}
},
"outputs": [
{
"data": {
"text/plain": "AIMessage(content='파이썬을 사랑합니다.')"
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_anthropic import ChatAnthropicMessages\n",
"\n",
"chat = ChatAnthropicMessages(model_name=\"claude-instant-1.2\")\n",
"chat.invoke(messages)"
"system = (\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\"\n",
")\n",
"human = \"{text}\"\n",
"prompt = ChatPromptTemplate.from_messages([(\"system\", system), (\"human\", human)])\n",
"\n",
"chain = prompt | chat\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"Korean\",\n",
" \"text\": \"I love Python\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "19e53d75935143fd",
"metadata": {
"collapsed": false
},
"source": [
"ChatAnthropicMessages also requires the anthropic_api_key argument, or the ANTHROPIC_API_KEY environment variable must be set. \n",
"\n",
"ChatAnthropicMessages also supports async and streaming functionality:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "e20a139d30e3d333",
"metadata": {
"ExecuteTime": {
"end_time": "2024-01-19T11:25:26.012325Z",
"start_time": "2024-01-19T11:25:25.288358Z"
},
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": "AIMessage(content='파이썬을 사랑합니다.')"
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"await chain.ainvoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"Korean\",\n",
" \"text\": \"I love Python\",\n",
" }\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "6f34f1073d7e7120",
"metadata": {
"ExecuteTime": {
"end_time": "2024-01-19T11:25:28.323455Z",
"start_time": "2024-01-19T11:25:26.012040Z"
},
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Here are some of the most famous tourist attractions in Japan:\n",
"\n",
"- Tokyo Tower - A communication and observation tower in Tokyo modeled after the Eiffel Tower. It offers stunning views of the city.\n",
"\n",
"- Mount Fuji - Japan's highest and most famous mountain. It's a iconic symbol of Japan and a UNESCO World Heritage Site. \n",
"\n",
"- Itsukushima Shrine (Miyajima) - A shrine located on an island in Hiroshima prefecture, known for its \"floating\" torii gate that seems to float on water during high tide.\n",
"\n",
"- Himeji Castle - A UNESCO World Heritage Site famous for having withstood numerous battles without destruction to its intricate white walls and sloping, triangular roofs. \n",
"\n",
"- Kawaguchiko Station - Near Mount Fuji, this area is known for its scenic Fuji Five Lakes region. \n",
"\n",
"- Hiroshima Peace Memorial Park and Museum - Commemorates the world's first atomic bombing in Hiroshima on August 6, 1945. \n",
"\n",
"- Arashiyama Bamboo Grove - A renowned bamboo forest located in Kyoto that draws many visitors.\n",
"\n",
"- Kegon Falls - One of Japan's largest waterfalls"
]
}
],
"source": [
"prompt = ChatPromptTemplate.from_messages(\n",
" [(\"human\", \"Give me a list of famous tourist attractions in Japan\")]\n",
")\n",
"chain = prompt | chat\n",
"for chunk in chain.stream({}):\n",
" print(chunk.content, end=\"\", flush=True)"
]
}
],
@@ -15,9 +15,9 @@
"source": [
"# AzureMLChatOnlineEndpoint\n",
"\n",
">[Azure Machine Learning](https://azure.microsoft.com/en-us/products/machine-learning/) is a platform used to build, train, and deploy machine learning models. Users can explore the types of models to deploy in the Model Catalog, which provides Azure Foundation Models and OpenAI Models. `Azure Foundation Models` include various open-source models and popular Hugging Face models. Users can also import models of their liking into AzureML.\n",
">[Azure Machine Learning](https://azure.microsoft.com/en-us/products/machine-learning/) is a platform used to build, train, and deploy machine learning models. Users can explore the types of models to deploy in the Model Catalog, which provides foundational and general purpose models from different providers.\n",
">\n",
">[Azure Machine Learning Online Endpoints](https://learn.microsoft.com/en-us/azure/machine-learning/concept-endpoints). After you train machine learning models or pipelines, you need to deploy them to production so that others can use them for inference. Inference is the process of applying new input data to the machine learning model or pipeline to generate outputs. While these outputs are typically referred to as \"predictions,\" inferencing can be used to generate outputs for other machine learning tasks, such as classification and clustering. In `Azure Machine Learning`, you perform inferencing by using endpoints and deployments. `Endpoints` and `Deployments` allow you to decouple the interface of your production workload from the implementation that serves it.\n",
">In general, you need to deploy models in order to consume their predictions (inference). In `Azure Machine Learning`, [Online Endpoints](https://learn.microsoft.com/en-us/azure/machine-learning/concept-endpoints) are used to deploy these models with real-time serving. They are based on the ideas of `Endpoints` and `Deployments`, which allow you to decouple the interface of your production workload from the implementation that serves it.\n",
"\n",
"This notebook goes over how to use a chat model hosted on an `Azure Machine Learning Endpoint`."
]
@@ -37,10 +37,11 @@
"source": [
"## Set up\n",
"\n",
"To use the wrapper, you must [deploy a model on AzureML](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-use-foundation-models?view=azureml-api-2#deploying-foundation-models-to-endpoints-for-inferencing) and obtain the following parameters:\n",
"You must [deploy a model on Azure ML](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-use-foundation-models?view=azureml-api-2#deploying-foundation-models-to-endpoints-for-inferencing) or [to Azure AI studio](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-open) and obtain the following parameters:\n",
"\n",
"* `endpoint_api_key`: The API key provided by the endpoint\n",
"* `endpoint_url`: The REST endpoint url provided by the endpoint"
"* `endpoint_url`: The REST endpoint url provided by the endpoint.\n",
"* `endpoint_api_type`: Use `endpoint_api_type='realtime'` when deploying models to **Realtime endpoints** (hosted managed infrastructure). Use `endpoint_api_type='serverless'` when deploying models using the **Pay-as-you-go** offering (model as a service).\n",
"* `endpoint_api_key`: The API key provided by the endpoint"
]
},
{
@@ -51,7 +52,40 @@
"\n",
"The `content_formatter` parameter is a handler class for transforming the request and response of an AzureML endpoint to match the required schema. Since there is a wide range of models in the model catalog, each of which may process data differently, a `ContentFormatterBase` class is provided to allow users to transform data to their liking. The following content formatters are provided:\n",
"\n",
"* `LLamaContentFormatter`: Formats request and response data for LLaMa2-chat"
"* `LLamaChatContentFormatter`: Formats request and response data for LLaMa2-chat\n",
"\n",
"*Note: `langchain.chat_models.azureml_endpoint.LLamaContentFormatter` is being deprecated and replaced with `langchain.chat_models.azureml_endpoint.LLamaChatContentFormatter`.*\n",
"\n",
"You can implement custom content formatters specific to your model by deriving from the class `langchain_community.llms.azureml_endpoint.ContentFormatterBase`."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Examples\n",
"\n",
"The following sections contain examples of how to use this class:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.schema import HumanMessage\n",
"from langchain_community.chat_models.azureml_endpoint import (\n",
" AzureMLEndpointApiType,\n",
" LlamaChatContentFormatter,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Example: Chat completions with real-time endpoints"
]
},
{
@@ -76,11 +110,79 @@
"\n",
"chat = AzureMLChatOnlineEndpoint(\n",
" endpoint_url=\"https://<your-endpoint>.<your_region>.inference.ml.azure.com/score\",\n",
" endpoint_api_type=AzureMLEndpointApiType.realtime,\n",
" endpoint_api_key=\"my-api-key\",\n",
" content_formatter=LlamaContentFormatter,\n",
" content_formatter=LlamaChatContentFormatter(),\n",
")\n",
"response = chat(\n",
" messages=[HumanMessage(content=\"Will the Collatz conjecture ever be solved?\")]\n",
"response = chat.invoke(\n",
" [HumanMessage(content=\"Will the Collatz conjecture ever be solved?\")]\n",
")\n",
"response"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Example: Chat completions with pay-as-you-go deployments (model as a service)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"chat = AzureMLChatOnlineEndpoint(\n",
" endpoint_url=\"https://<your-endpoint>.<your_region>.inference.ml.azure.com/v1/chat/completions\",\n",
" endpoint_api_type=AzureMLEndpointApiType.serverless,\n",
" endpoint_api_key=\"my-api-key\",\n",
" content_formatter=LlamaChatContentFormatter,\n",
")\n",
"response = chat.invoke(\n",
" [HumanMessage(content=\"Will the Collatz conjecture ever be solved?\")]\n",
")\n",
"response"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you need to pass additional parameters to the model, use the `model_kwargs` argument:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"chat = AzureMLChatOnlineEndpoint(\n",
" endpoint_url=\"https://<your-endpoint>.<your_region>.inference.ml.azure.com/v1/chat/completions\",\n",
" endpoint_api_type=AzureMLEndpointApiType.serverless,\n",
" endpoint_api_key=\"my-api-key\",\n",
" content_formatter=LlamaChatContentFormatter,\n",
" model_kwargs={\"temperature\": 0.8},\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Parameters can also be passed during invocation:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"response = chat.invoke(\n",
" [HumanMessage(content=\"Will the Collatz conjecture ever be solved?\")],\n",
" max_tokens=512,\n",
")\n",
"response"
]
@@ -13,7 +13,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# ChatBaichuan\n",
"# Chat with Baichuan-192K\n",
"\n",
"Baichuan chat models API by Baichuan Intelligent Technology. For more information, see [https://platform.baichuan-ai.com/docs/api](https://platform.baichuan-ai.com/docs/api)"
]
@@ -44,19 +44,16 @@
},
"outputs": [],
"source": [
"chat = ChatBaichuan(\n",
" baichuan_api_key=\"YOUR_API_KEY\", baichuan_secret_key=\"YOUR_SECRET_KEY\"\n",
")"
"chat = ChatBaichuan(baichuan_api_key=\"YOUR_API_KEY\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"or you can set `api_key` and `secret_key` in your environment variables\n",
"or you can set `api_key` in your environment variables\n",
"```bash\n",
"export BAICHUAN_API_KEY=YOUR_API_KEY\n",
"export BAICHUAN_SECRET_KEY=YOUR_SECRET_KEY\n",
"```"
]
},
@@ -91,7 +88,7 @@
"collapsed": false
},
"source": [
"## For ChatBaichuan with Streaming"
"## Chat with Baichuan-192K with Streaming"
]
},
{
@@ -108,7 +105,6 @@
"source": [
"chat = ChatBaichuan(\n",
" baichuan_api_key=\"YOUR_API_KEY\",\n",
" baichuan_secret_key=\"YOUR_SECRET_KEY\",\n",
" streaming=True,\n",
")"
]
224
docs/docs/integrations/chat/deepinfra.ipynb
Normal file
@@ -0,0 +1,224 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "bf733a38-db84-4363-89e2-de6735c37230",
"metadata": {},
"source": [
"# DeepInfra\n",
"\n",
"[DeepInfra](https://deepinfra.com/?utm_source=langchain) is a serverless inference as a service that provides access to a [variety of LLMs](https://deepinfra.com/models?utm_source=langchain) and [embeddings models](https://deepinfra.com/models?type=embeddings&utm_source=langchain). This notebook goes over how to use LangChain with DeepInfra for chat models."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Set the Environment API Key\n",
"Make sure to get your API key from DeepInfra. You have to [Login](https://deepinfra.com/login?from=%2Fdash) and get a new token.\n",
"\n",
"You are given 1 hour of free serverless GPU compute to test different models (see [here](https://github.com/deepinfra/deepctl#deepctl)).\n",
"You can print your token with `deepctl auth token`"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" ········\n"
]
}
],
"source": [
"# get a new token: https://deepinfra.com/login?from=%2Fdash\n",
"\n",
"from getpass import getpass\n",
"\n",
"DEEPINFRA_API_TOKEN = getpass()"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"import os\n",
"\n",
"# or pass deepinfra_api_token parameter to the ChatDeepInfra constructor\n",
"os.environ[\"DEEPINFRA_API_TOKEN\"] = DEEPINFRA_API_TOKEN"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "d4a7c55d-b235-4ca4-a579-c90cc9570da9",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.chat_models import ChatDeepInfra\n",
"from langchain.schema import HumanMessage"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "70cf04e8-423a-4ff6-8b09-f11fb711c817",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"chat = ChatDeepInfra(model=\"meta-llama/Llama-2-7b-chat-hf\")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "8199ef8f-eb8b-4253-9ea0-6c24a013ca4c",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\" J'aime la programmation.\", additional_kwargs={}, example=False)"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" HumanMessage(\n",
" content=\"Translate this sentence from English to French. I love programming.\"\n",
" )\n",
"]\n",
"chat(messages)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "c361ab1e-8c0c-4206-9e3c-9d1424a12b9c",
"metadata": {},
"source": [
"## `ChatDeepInfra` also supports async and streaming functionality:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "93a21c5c-6ef9-4688-be60-b2e1f94842fb",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "c5fac0e9-05a4-4fc1-a3b3-e5bbb24b971b",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"LLMResult(generations=[[ChatGeneration(text=\" J'aime programmer.\", generation_info=None, message=AIMessage(content=\" J'aime programmer.\", additional_kwargs={}, example=False))]], llm_output={}, run=[RunInfo(run_id=UUID('8cc8fb68-1c35-439c-96a0-695036a93652'))])"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"await chat.agenerate([messages])"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "025be980-e50d-4a68-93dc-c9c7b500ce34",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" J'aime la programmation."
]
},
{
"data": {
"text/plain": [
"AIMessage(content=\" J'aime la programmation.\", additional_kwargs={}, example=False)"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chat = ChatDeepInfra(\n",
" streaming=True,\n",
" verbose=True,\n",
" callbacks=[StreamingStdOutCallbackHandler()],\n",
")\n",
"chat(messages)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c253883f",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -18,6 +18,14 @@
"\n",
"Note: This is separate from the Google PaLM integration. Google has chosen to offer an enterprise version of PaLM through GCP, and this supports the models made available through there. \n",
"\n",
"ChatVertexAI exposes all foundational models available in Google Cloud:\n",
"\n",
"- Gemini (`gemini-pro` and `gemini-pro-vision`)\n",
"- PaLM 2 for Text (`text-bison`)\n",
"- Codey for Code Generation (`codechat-bison`)\n",
"\n",
"For a full and updated list of available models visit [VertexAI documentation](https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/overview).\n",
"\n",
"By default, Google Cloud [does not use](https://cloud.google.com/vertex-ai/docs/generative-ai/data-governance#foundation_model_development) customer data to train its foundation models as part of Google Cloud's AI/ML Privacy Commitment. More details about how Google processes data can also be found in [Google's Customer Data Processing Addendum (CDPA)](https://cloud.google.com/terms/data-processing-addendum).\n",
"\n",
"To use `Google Cloud Vertex AI` PaLM you must have the `langchain-google-vertexai` Python package installed and either:\n",
@@ -35,9 +43,7 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": []
},
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-google-vertexai"
@@ -45,7 +51,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
@@ -64,7 +70,7 @@
"AIMessage(content=\" J'aime la programmation.\")"
]
},
"execution_count": 8,
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
@@ -98,7 +104,7 @@
"AIMessage(content=\"J'aime la programmation.\")"
]
},
"execution_count": 9,
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
@@ -123,7 +129,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": null,
"metadata": {},
"outputs": [
{
@@ -132,7 +138,7 @@
"AIMessage(content=' プログラミングが大好きです')"
]
},
"execution_count": 4,
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
@@ -159,28 +165,17 @@
},
{
"cell_type": "markdown",
"metadata": {
"execution": {
"iopub.execute_input": "2023-06-17T21:09:25.423568Z",
"iopub.status.busy": "2023-06-17T21:09:25.423213Z",
"iopub.status.idle": "2023-06-17T21:09:25.429641Z",
"shell.execute_reply": "2023-06-17T21:09:25.429060Z",
"shell.execute_reply.started": "2023-06-17T21:09:25.423546Z"
},
"tags": []
},
"metadata": {},
"source": [
"## Code generation chat models\n",
"You can now leverage the Codey API for code chat within Vertex AI. The model name is:\n",
"- codechat-bison: for code assistance"
"You can now leverage the Codey API for code chat within Vertex AI. The model available is:\n",
"- `codechat-bison`: for code assistance"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": []
},
"metadata": {},
"outputs": [
{
"name": "stdout",
@@ -242,7 +237,7 @@
" model_name=\"codechat-bison\", max_output_tokens=1000, temperature=0.5\n",
")\n",
"\n",
"message = chat.invoke(\"Write a Python function to identify all prime numbers\")\n",
"message = chat.invoke(\"Write a Python function generating all prime numbers\")\n",
"print(message.content)"
]
},
@@ -266,7 +261,7 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": null,
"metadata": {},
"outputs": [
{
@@ -320,7 +315,7 @@
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": null,
"metadata": {},
"outputs": [
{
@@ -353,7 +348,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": null,
"metadata": {},
"outputs": [
{
@@ -362,7 +357,7 @@
"MyModel(name='Erick', age=27)"
]
},
"execution_count": 3,
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
@@ -389,7 +384,7 @@
"source": [
"## Asynchronous calls\n",
"\n",
"We can make asynchronous calls via the Runnables [Async Interface](/docs/expression_language/interface)"
"We can make asynchronous calls via the Runnables [Async Interface](/docs/expression_language/interface)."
]
},
{
@@ -414,10 +409,10 @@
{
"data": {
"text/plain": [
"AIMessage(content=' Why do you love programming?')"
"AIMessage(content=' अहं प्रोग्रामनं प्रेमामि')"
]
},
"execution_count": 6,
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
@@ -428,6 +423,10 @@
")\n",
"human = \"{text}\"\n",
"prompt = ChatPromptTemplate.from_messages([(\"system\", system), (\"human\", human)])\n",
"\n",
"chat = ChatVertexAI(\n",
" model_name=\"chat-bison\", max_output_tokens=1000, temperature=0.5\n",
")\n",
"chain = prompt | chat\n",
"\n",
"asyncio.run(\n",
@@ -483,43 +482,15 @@
" sys.stdout.write(chunk.content)\n",
" sys.stdout.flush()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"environment": {
"kernel": "python3",
"name": "common-cpu.m108",
"type": "gcloud",
"uri": "gcr.io/deeplearning-platform-release/base-cpu:m108"
},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
"display_name": "",
"name": ""
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.10"
},
"vscode": {
"interpreter": {
"hash": "cc99336516f23363341912c6723b01ace86f02e26b4290be1efc0677e2e2ec24"
}
"name": "python"
}
},
"nbformat": 4,
@@ -21,17 +21,31 @@
"\n",
"1. Select the right LLM(s) for their application\n",
"2. Prototype with various open-source and proprietary LLMs\n",
"3. Move to production in-line with their security, privacy, throughput, latency SLAs without infrastructure set-up or administration using Konko AI's SOC 2 compliant infrastructure\n",
"3. Access Fine Tuning for open-source LLMs to get industry-leading performance at a fraction of the cost\n",
"4. Setup low-cost production APIs according to security, privacy, throughput, latency SLAs without infrastructure set-up or administration using Konko AI's SOC 2 compliant, multi-cloud infrastructure\n",
"\n",
"### Steps to Access Models\n",
"1. **Explore Available Models:** Start by browsing through the [available models](https://docs.konko.ai/docs/list-of-models) on Konko. Each model caters to different use cases and capabilities.\n",
"\n",
"This example goes over how to use LangChain to interact with `Konko` [models](https://docs.konko.ai/docs/overview)"
"2. **Identify Suitable Endpoints:** Determine which [endpoint](https://docs.konko.ai/docs/list-of-models#list-of-available-models) (ChatCompletion or Completion) supports your selected model.\n",
"\n",
"3. **Selecting a Model:** [Choose a model](https://docs.konko.ai/docs/list-of-models#list-of-available-models) based on its metadata and how well it fits your use case.\n",
"\n",
"4. **Prompting Guidelines:** Once a model is selected, refer to the [prompting guidelines](https://docs.konko.ai/docs/prompting) to effectively communicate with it.\n",
"\n",
"5. **Using the API:** Finally, use the appropriate Konko [API endpoint](https://docs.konko.ai/docs/quickstart-for-completion-and-chat-completion-endpoint) to call the model and receive responses.\n",
"\n",
"To run this notebook, you'll need a Konko API key. You can create one by signing up on [Konko](https://www.konko.ai/).\n",
"\n",
"This example goes over how to use LangChain to interact with `Konko` ChatCompletion [models](https://docs.konko.ai/docs/list-of-models#konko-hosted-models-for-chatcompletion)\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To run this notebook, you'll need a Konko API key. You can request it by messaging support@konko.ai."
"To run this notebook, you'll need a Konko API key. You can create one by signing up on [Konko](https://www.konko.ai/)."
]
},
{
@@ -84,36 +98,34 @@
"source": [
"## Calling a model\n",
"\n",
"Find a model on the [Konko overview page](https://docs.konko.ai/docs/overview)\n",
"Find a model on the [Konko overview page](https://docs.konko.ai/v0.5.0/docs/list-of-models)\n",
"\n",
"For example, for this [LLama 2 model](https://docs.konko.ai/docs/meta-llama-2-13b-chat). The model id would be: `\"meta-llama/Llama-2-13b-chat-hf\"`\n",
"\n",
"Another way to find the list of models running on the Konko instance is through this [endpoint](https://docs.konko.ai/reference/listmodels).\n",
"Another way to find the list of models running on the Konko instance is through this [endpoint](https://docs.konko.ai/reference/get-models).\n",
"\n",
"From here, we can initialize our model:\n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"chat = ChatKonko(max_tokens=400, model=\"meta-llama/Llama-2-13b-chat-hf\")"
"chat = ChatKonko(max_tokens=400, model=\"meta-llama/llama-2-13b-chat\")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\" Sure, I'd be happy to explain the Big Bang Theory briefly!\\n\\nThe Big Bang Theory is the leading explanation for the origin and evolution of the universe, based on a vast amount of observational evidence from many fields of science. In essence, the theory posits that the universe began as an infinitely hot and dense point, known as a singularity, around 13.8 billion years ago. This singularity expanded rapidly, and as it did, it cooled and formed subatomic particles, which eventually coalesced into the first atoms, and later into the stars and galaxies we see today.\\n\\nThe theory gets its name from the idea that the universe began in a state of incredibly high energy and temperature, and has been expanding and cooling ever since. This expansion is thought to have been driven by a mysterious force known as dark energy, which is thought to be responsible for the accelerating expansion of the universe.\\n\\nOne of the key predictions of the Big Bang Theory is that the universe should be homogeneous and isotropic on large scales, meaning that it should look the same in all directions and have the same properties everywhere. This prediction has been confirmed by a wealth of observational evidence, including the cosmic microwave background radiation, which is thought to be a remnant of the early universe.\\n\\nOverall, the Big Bang Theory is a well-established and widely accepted explanation for the origins of the universe, and it has been supported by a vast amount of observational evidence from many fields of science.\", additional_kwargs={}, example=False)"
"AIMessage(content=\" Sure thing! The Big Bang Theory is a scientific theory that explains the origins of the universe. In short, it suggests that the universe began as an infinitely hot and dense point around 13.8 billion years ago and expanded rapidly. This expansion continues to this day, and it's what makes the universe look the way it does.\\n\\nHere's a brief overview of the key points:\\n\\n1. The universe started as a singularity, a point of infinite density and temperature.\\n2. The singularity expanded rapidly, causing the universe to cool and expand.\\n3. As the universe expanded, particles began to form, including protons, neutrons, and electrons.\\n4. These particles eventually came together to form atoms, and later, stars and galaxies.\\n5. The universe is still expanding today, and the rate of this expansion is accelerating.\\n\\nThat's the Big Bang Theory in a nutshell! It's a pretty mind-blowing idea when you think about it, and it's supported by a lot of scientific evidence. Do you have any other questions about it?\")"
]
},
"execution_count": 7,
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
@@ -125,13 +137,6 @@
"]\n",
"chat(messages)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
File diff suppressed because one or more lines are too long
99
docs/docs/integrations/chat/sparkllm.ipynb
Normal file
@@ -0,0 +1,99 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "3ddface67cd10a87",
"metadata": {
"collapsed": false
},
"source": [
"# SparkLLM Chat\n",
"\n",
"SparkLLM chat models API by iFlyTek. For more information, see [iFlyTek Open Platform](https://www.xfyun.cn/)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Basic use"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "43daa39972d4c533",
"metadata": {
"collapsed": false,
"is_executing": true
},
"outputs": [],
"source": [
"\"\"\"For basic init and call\"\"\"\n",
"from langchain.chat_models import ChatSparkLLM\n",
"from langchain.schema import HumanMessage\n",
"\n",
"chat = ChatSparkLLM(\n",
" spark_app_id=\"<app_id>\", spark_api_key=\"<api_key>\", spark_api_secret=\"<api_secret>\"\n",
")\n",
"message = HumanMessage(content=\"Hello\")\n",
"chat([message])"
]
},
{
"cell_type": "markdown",
"id": "df755f4c5689510",
"metadata": {
"collapsed": false
},
"source": [
"- Get SparkLLM's app_id, api_key and api_secret from [iFlyTek SparkLLM API Console](https://console.xfyun.cn/services/bm3) (for more info, see [iFlyTek SparkLLM Intro](https://xinghuo.xfyun.cn/sparkapi)), then set the environment variables `IFLYTEK_SPARK_APP_ID`, `IFLYTEK_SPARK_API_KEY` and `IFLYTEK_SPARK_API_SECRET`, or pass parameters when creating `ChatSparkLLM` as in the demo above."
]
},
{
"cell_type": "markdown",
"id": "984e32ee47bc6772",
"metadata": {
"collapsed": false
},
"source": [
"## For ChatSparkLLM with Streaming"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7dc162bd65fec08f",
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"chat = ChatSparkLLM(streaming=True)\n",
"for chunk in chat.stream(\"Hello!\"):\n",
" print(chunk.content, end=\"\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 2
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython2",
"version": "2.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
241
docs/docs/integrations/document_loaders/cassandra.ipynb
Normal file
@@ -0,0 +1,241 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {
|
||||
"id": "vm8vn9t8DvC_"
|
||||
},
|
||||
"source": [
|
||||
"# Cassandra"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"[Cassandra](https://cassandra.apache.org/) is a NoSQL, row-oriented, highly scalable and highly available database.Starting with version 5.0, the database ships with [vector search capabilities](https://cassandra.apache.org/doc/trunk/cassandra/vector-search/overview.html)."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "5WjXERXzFEhg"
},
"source": [
"## Overview"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "juAmbgoWD17u"
},
"source": [
"The Cassandra Document Loader returns a list of LangChain Documents from a Cassandra database.\n",
"\n",
"You must either provide a CQL query or a table name to retrieve the documents.\n",
"The Loader takes the following parameters:\n",
"\n",
"* table: (Optional) The table to load the data from.\n",
"* session: (Optional) The cassandra driver session. If not provided, the cassio resolved session will be used.\n",
"* keyspace: (Optional) The keyspace of the table. If not provided, the cassio resolved keyspace will be used.\n",
"* query: (Optional) The query used to load the data.\n",
"* page_content_mapper: (Optional) a function to convert a row to string page content. The default converts the row to JSON.\n",
"* metadata_mapper: (Optional) a function to convert a row to metadata dict.\n",
"* query_parameters: (Optional) The query parameters used when calling `session.execute`.\n",
"* query_timeout: (Optional) The query timeout used when calling `session.execute`.\n",
"* query_custom_payload: (Optional) The query custom_payload used when calling `session.execute`.\n",
"* query_execution_profile: (Optional) The query execution_profile used when calling `session.execute`.\n",
"* query_host: (Optional) The query host used when calling `session.execute`.\n",
"* query_execute_as: (Optional) The query execute_as used when calling `session.execute`."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load documents with the Document Loader"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import CassandraLoader"
]
},
{
"cell_type": "markdown",
"source": [
"### Init from a cassandra driver Session\n",
"\n",
"You need to create a `cassandra.cluster.Session` object, as described in the [Cassandra driver documentation](https://docs.datastax.com/en/developer/python-driver/latest/api/cassandra/cluster/#module-cassandra.cluster). The details vary (e.g. with network settings and authentication), but this might be something like:"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"outputs": [],
"source": [
"from cassandra.cluster import Cluster\n",
"\n",
"cluster = Cluster()\n",
"session = cluster.connect()"
],
"metadata": {
"collapsed": false
},
"execution_count": null
},
{
"cell_type": "markdown",
"source": [
"You need to provide the name of an existing keyspace of the Cassandra instance:"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"outputs": [],
"source": [
"CASSANDRA_KEYSPACE = input(\"CASSANDRA_KEYSPACE = \")"
],
"metadata": {
"collapsed": false
},
"execution_count": null
},
{
"cell_type": "markdown",
"source": [
"Creating the document loader:"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {
"ExecuteTime": {
"end_time": "2024-01-19T15:47:25.893037Z",
"start_time": "2024-01-19T15:47:25.889398Z"
}
},
"outputs": [],
"source": [
"loader = CassandraLoader(\n",
" table=\"movie_reviews\",\n",
" session=session,\n",
" keyspace=CASSANDRA_KEYSPACE,\n",
")"
]
},
{
"cell_type": "code",
"outputs": [],
"source": [
"docs = loader.load()"
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2024-01-19T15:47:26.399472Z",
"start_time": "2024-01-19T15:47:26.389145Z"
}
},
"execution_count": 17
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {
"ExecuteTime": {
"end_time": "2024-01-19T15:47:33.287783Z",
"start_time": "2024-01-19T15:47:33.277862Z"
}
},
"outputs": [
{
"data": {
"text/plain": "Document(page_content='Row(_id=\\'659bdffa16cbc4586b11a423\\', title=\\'Dangerous Men\\', reviewtext=\\'\"Dangerous Men,\" the picture\\\\\\'s production notes inform, took 26 years to reach the big screen. After having seen it, I wonder: What was the rush?\\')', metadata={'table': 'movie_reviews', 'keyspace': 'default_keyspace'})"
},
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"docs[0]"
]
},
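{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `page_content_mapper` and `metadata_mapper` parameters accept plain Python callables, so you can control how each row is rendered. A minimal sketch, assuming the same `movie_reviews` table with hypothetical `title` and `reviewtext` columns (adapt the attribute names to your schema):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def row_to_content(row):\n",
"    # Hypothetical mapping: 'title' and 'reviewtext' must exist on the row\n",
"    return f\"{row.title}: {row.reviewtext}\"\n",
"\n",
"loader = CassandraLoader(\n",
"    table=\"movie_reviews\",\n",
"    session=session,\n",
"    keyspace=CASSANDRA_KEYSPACE,\n",
"    page_content_mapper=row_to_content,\n",
"    metadata_mapper=lambda row: {\"title\": row.title},\n",
")"
]
},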
{
"cell_type": "markdown",
"source": [
"### Init from cassio\n",
"\n",
"It's also possible to use cassio to configure the session and keyspace."
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"outputs": [],
"source": [
"import cassio\n",
"\n",
"cassio.init(contact_points=\"127.0.0.1\", keyspace=CASSANDRA_KEYSPACE)\n",
"\n",
"loader = CassandraLoader(\n",
" table=\"movie_reviews\",\n",
")\n",
"\n",
"docs = loader.load()"
],
"metadata": {
"collapsed": false
},
"execution_count": null
}
],
"metadata": {
"colab": {
"collapsed_sections": [
"5WjXERXzFEhg"
],
"provenance": []
},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.18"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
BIN
docs/docs/integrations/document_loaders/example_data/fake.vsdx
Normal file
Binary file not shown.
@@ -12,7 +12,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"id": "2886982e",
"metadata": {},
"outputs": [],
@@ -100,6 +100,54 @@
"docs[0].page_content[:400]"
]
},
{
"cell_type": "markdown",
"id": "b4ab0a79",
"metadata": {},
"source": [
"### Load list of files"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "092d9a0b",
"metadata": {},
"outputs": [],
"source": [
"files = [\"./example_data/whatsapp_chat.txt\", \"./example_data/layout-parser-paper.pdf\"]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f841c4f8",
"metadata": {},
"outputs": [],
"source": [
"loader = UnstructuredFileLoader(files)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "993c240b",
"metadata": {},
"outputs": [],
"source": [
"docs = loader.load()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5ce4ff07",
"metadata": {},
"outputs": [],
"source": [
"docs[0].page_content[:400]"
]
},
{
"cell_type": "markdown",
"id": "7874d01d",
@@ -495,7 +543,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
"version": "3.9.0"
}
},
"nbformat": 4,
486
docs/docs/integrations/document_loaders/vsdx.ipynb
Normal file
@@ -0,0 +1,486 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Vsdx"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> A [Visio file](https://fr.wikipedia.org/wiki/Microsoft_Visio) (with the .vsdx extension) is associated with Microsoft Visio, a diagram creation application. It stores information about the structure, layout, and graphical elements of a diagram. This format facilitates the creation and sharing of visualizations in areas such as business, engineering, and computer science."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A Visio file can contain multiple pages. Some of them may serve as the background for others, and this can occur across multiple layers. This **loader** extracts the textual content from each page and its associated pages, enabling the extraction of all visible text from each page, similar to what an OCR algorithm would do."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**WARNING**: Only Visio files with the **.vsdx** extension are compatible with this loader. Files with other extensions, such as .vsd, are not compatible because they cannot be converted to compressed XML."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import VsdxLoader"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"loader = VsdxLoader(file_path=\"./example_data/fake.vsdx\")\n",
"documents = loader.load()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Display loaded documents**"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"------ Page 0 ------\n",
"Title page : Summary\n",
"Source : ./example_data/fake.vsdx\n",
"\n",
"==> CONTENT <== \n",
"Created by\n",
"Created the\n",
"Modified by\n",
"Modified the\n",
"Version\n",
"Title\n",
"Florian MOREL\n",
"2024-01-14\n",
"FLORIAN Morel\n",
"Today\n",
"0.0.0.0.0.1\n",
"This is a title\n",
"Best Caption of the worl\n",
"This is an arrow\n",
"This is Earth\n",
"This is a bounded arrow\n",
"\n",
"------ Page 1 ------\n",
"Title page : Glossary\n",
"Source : ./example_data/fake.vsdx\n",
"\n",
"==> CONTENT <== \n",
"Created by\n",
"Created the\n",
"Modified by\n",
"Modified the\n",
"Version\n",
"Title\n",
"Florian MOREL\n",
"2024-01-14\n",
"FLORIAN Morel\n",
"Today\n",
"0.0.0.0.0.1\n",
"This is a title\n",
"\n",
"------ Page 2 ------\n",
"Title page : blanket page\n",
"Source : ./example_data/fake.vsdx\n",
"\n",
"==> CONTENT <== \n",
"Created by\n",
"Created the\n",
"Modified by\n",
"Modified the\n",
"Version\n",
"Title\n",
"Florian MOREL\n",
"2024-01-14\n",
"FLORIAN Morel\n",
"Today\n",
"0.0.0.0.0.1\n",
"This is a title\n",
"This file is a vsdx file\n",
"First text\n",
"Second text\n",
"Third text\n",
"\n",
"------ Page 3 ------\n",
"Title page : BLABLABLA\n",
"Source : ./example_data/fake.vsdx\n",
"\n",
"==> CONTENT <== \n",
"Created by\n",
"Created the\n",
"Modified by\n",
"Modified the\n",
"Version\n",
"Title\n",
"Florian MOREL\n",
"2024-01-14\n",
"FLORIAN Morel\n",
"Today\n",
"0.0.0.0.0.1\n",
"This is a title\n",
"Another RED arrow wow\n",
"Arrow with point but red\n",
"Green line\n",
"User\n",
"Captions\n",
"Red arrow magic !\n",
"Something white\n",
"Something Red\n",
"This a a completly useless diagramm, cool !!\n",
"\n",
"But this is for example !\n",
"This diagramm is a base of many pages in this file. But it is editable in file \\\"BG WITH CONTENT\\\"\n",
"This is a page with something...\n",
"\n",
"WAW I have learned something !\n",
"This is a page with something...\n",
"\n",
"WAW I have learned something !\n",
"\n",
"X2\n",
"\n",
"------ Page 4 ------\n",
"Title page : What a page !!\n",
"Source : ./example_data/fake.vsdx\n",
"\n",
"==> CONTENT <== \n",
"Created by\n",
"Created the\n",
"Modified by\n",
"Modified the\n",
"Version\n",
"Title\n",
"Florian MOREL\n",
"2024-01-14\n",
"FLORIAN Morel\n",
"Today\n",
"0.0.0.0.0.1\n",
"This is a title\n",
"Something white\n",
"Something Red\n",
"This a a completly useless diagramm, cool !!\n",
"\n",
"But this is for example !\n",
"This diagramm is a base of many pages in this file. But it is editable in file \\\"BG WITH CONTENT\\\"\n",
"Another RED arrow wow\n",
"Arrow with point but red\n",
"Green line\n",
"User\n",
"Captions\n",
"Red arrow magic !\n",
"\n",
"------ Page 5 ------\n",
"Title page : next page after previous one\n",
"Source : ./example_data/fake.vsdx\n",
"\n",
"==> CONTENT <== \n",
"Created by\n",
"Created the\n",
"Modified by\n",
"Modified the\n",
"Version\n",
"Title\n",
"Florian MOREL\n",
"2024-01-14\n",
"FLORIAN Morel\n",
"Today\n",
"0.0.0.0.0.1\n",
"This is a title\n",
"Another RED arrow wow\n",
"Arrow with point but red\n",
"Green line\n",
"User\n",
"Captions\n",
"Red arrow magic !\n",
"Something white\n",
"Something Red\n",
"This a a completly useless diagramm, cool !!\n",
"\n",
"But this is for example !\n",
"This diagramm is a base of many pages in this file. But it is editable in file \\\"BG WITH CONTENT\\\"\n",
"Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor\n",
"\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0\\u00a0-\\u00a0incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in\n",
"\n",
"\n",
"voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa\n",
"*\n",
"\n",
"\n",
"qui officia deserunt mollit anim id est laborum.\n",
"\n",
"------ Page 6 ------\n",
"Title page : Connector Page\n",
"Source : ./example_data/fake.vsdx\n",
"\n",
"==> CONTENT <== \n",
"Created by\n",
"Created the\n",
"Modified by\n",
"Modified the\n",
"Version\n",
"Title\n",
"Florian MOREL\n",
"2024-01-14\n",
"FLORIAN Morel\n",
"Today\n",
"0.0.0.0.0.1\n",
"This is a title\n",
"Something white\n",
"Something Red\n",
"This a a completly useless diagramm, cool !!\n",
"\n",
"But this is for example !\n",
"This diagramm is a base of many pages in this file. But it is editable in file \\\"BG WITH CONTENT\\\"\n",
"\n",
"------ Page 7 ------\n",
"Title page : Useful ↔ Useless page\n",
"Source : ./example_data/fake.vsdx\n",
"\n",
"==> CONTENT <== \n",
"Created by\n",
"Created the\n",
"Modified by\n",
"Modified the\n",
"Version\n",
"Title\n",
"Florian MOREL\n",
"2024-01-14\n",
"FLORIAN Morel\n",
"Today\n",
"0.0.0.0.0.1\n",
"This is a title\n",
"Something white\n",
"Something Red\n",
"This a a completly useless diagramm, cool !!\n",
"\n",
"But this is for example !\n",
"This diagramm is a base of many pages in this file. But it is editable in file \\\"BG WITH CONTENT\\\"\n",
"Title of this document : BLABLABLA\n",
"\n",
"------ Page 8 ------\n",
"Title page : Alone page\n",
"Source : ./example_data/fake.vsdx\n",
"\n",
"==> CONTENT <== \n",
"Black cloud\n",
"Unidirectional traffic primary path\n",
"Unidirectional traffic backup path\n",
"Encapsulation\n",
"User\n",
"Captions\n",
"Bidirectional traffic\n",
"Alone, sad\n",
"Test of another page\n",
"This is a \\\"bannier\\\"\n",
"Tests of some exotics characters :\\u00a0\\u00e3\\u00e4\\u00e5\\u0101\\u0103 \\u00fc\\u2554\\u00a0 \\u00a0\\u00bc \\u00c7 \\u25d8\\u25cb\\u2642\\u266b\\u2640\\u00ee\\u2665\n",
"This is ethernet\n",
"Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.\n",
"This is an empty case\n",
"Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.\n",
"Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor\n",
"\\u00a0 \\u00a0 \\u00a0 \\u00a0 \\u00a0 \\u00a0 \\u00a0 \\u00a0 \\u00a0 \\u00a0 \\u00a0 \\u00a0 \\u00a0 \\u00a0 \\u00a0 \\u00a0 \\u00a0 \\u00a0 \\u00a0 \\u00a0-\\u00a0 incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in\n",
"\n",
"\n",
" voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa \n",
"*\n",
"\n",
"\n",
"qui officia deserunt mollit anim id est laborum.\n",
"\n",
"------ Page 9 ------\n",
"Title page : BG\n",
"Source : ./example_data/fake.vsdx\n",
"\n",
"==> CONTENT <== \n",
"Best Caption of the worl\n",
"This is an arrow\n",
"This is Earth\n",
"This is a bounded arrow\n",
"Created by\n",
"Created the\n",
"Modified by\n",
"Modified the\n",
"Version\n",
"Title\n",
"Florian MOREL\n",
"2024-01-14\n",
"FLORIAN Morel\n",
"Today\n",
"0.0.0.0.0.1\n",
"This is a title\n",
"\n",
"------ Page 10 ------\n",
"Title page : BG + caption1\n",
"Source : ./example_data/fake.vsdx\n",
"\n",
"==> CONTENT <== \n",
"Created by\n",
"Created the\n",
"Modified by\n",
"Modified the\n",
"Version\n",
"Title\n",
"Florian MOREL\n",
"2024-01-14\n",
"FLORIAN Morel\n",
"Today\n",
"0.0.0.0.0.1\n",
"This is a title\n",
"Another RED arrow wow\n",
"Arrow with point but red\n",
"Green line\n",
"User\n",
"Captions\n",
"Red arrow magic !\n",
"Something white\n",
"Something Red\n",
"This a a completly useless diagramm, cool !!\n",
"\n",
"But this is for example !\n",
"This diagramm is a base of many pages in this file. But it is editable in file \\\"BG WITH CONTENT\\\"\n",
"Useful\\u2194 Useless page\\u00a0\n",
"\n",
"Tests of some exotics characters :\\u00a0\\u00e3\\u00e4\\u00e5\\u0101\\u0103 \\u00fc\\u2554\\u00a0\\u00a0\\u00bc \\u00c7 \\u25d8\\u25cb\\u2642\\u266b\\u2640\\u00ee\\u2665\n",
"\n",
"------ Page 11 ------\n",
"Title page : BG+\n",
"Source : ./example_data/fake.vsdx\n",
"\n",
"==> CONTENT <== \n",
"Created by\n",
"Created the\n",
"Modified by\n",
"Modified the\n",
"Version\n",
"Title\n",
"Florian MOREL\n",
"2024-01-14\n",
"FLORIAN Morel\n",
"Today\n",
"0.0.0.0.0.1\n",
"This is a title\n",
"\n",
"------ Page 12 ------\n",
"Title page : BG WITH CONTENT\n",
"Source : ./example_data/fake.vsdx\n",
"\n",
"==> CONTENT <== \n",
"Created by\n",
"Created the\n",
"Modified by\n",
"Modified the\n",
"Version\n",
"Title\n",
"Florian MOREL\n",
"2024-01-14\n",
"FLORIAN Morel\n",
"Today\n",
"0.0.0.0.0.1\n",
"This is a title\n",
"Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.\n",
"\n",
"Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.\n",
"\n",
"\n",
"Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. - Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.\n",
"\n",
"\n",
"Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.\n",
"This is a page with a lot of text\n",
"\n",
"------ Page 13 ------\n",
"Title page : 2nd caption with ____________________________________________________________________ content\n",
"Source : ./example_data/fake.vsdx\n",
"\n",
"==> CONTENT <== \n",
"Created by\n",
"Created the\n",
"Modified by\n",
"Modified the\n",
"Version\n",
"Title\n",
"Florian MOREL\n",
"2024-01-14\n",
"FLORIAN Morel\n",
"Today\n",
"0.0.0.0.0.1\n",
"This is a title\n",
"Another RED arrow wow\n",
"Arrow with point but red\n",
"Green line\n",
"User\n",
"Captions\n",
"Red arrow magic !\n",
"Something white\n",
"Something Red\n",
"This a a completly useless diagramm, cool !!\n",
"\n",
"But this is for example !\n",
"This diagramm is a base of many pages in this file. But it is editable in file \\\"BG WITH CONTENT\\\"\n",
"Only connectors on this page. This is the CoNNeCtor page\n"
]
}
],
"source": [
"for i, doc in enumerate(documents):\n",
" print(f\"\\n------ Page {doc.metadata['page']} ------\")\n",
" print(f\"Title page : {doc.metadata['page_name']}\")\n",
" print(f\"Source : {doc.metadata['source']}\")\n",
" print(\"\\n==> CONTENT <== \")\n",
" print(doc.page_content)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.2"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
@@ -6,9 +6,9 @@
|
||||
"source": [
|
||||
"# Azure ML\n",
|
||||
"\n",
|
||||
"[Azure ML](https://azure.microsoft.com/en-us/products/machine-learning/) is a platform used to build, train, and deploy machine learning models. Users can explore the types of models to deploy in the Model Catalog, which provides Azure Foundation Models and OpenAI Models. Azure Foundation Models include various open-source models and popular Hugging Face models. Users can also import models of their liking into AzureML.\n",
|
||||
"[Azure ML](https://azure.microsoft.com/en-us/products/machine-learning/) is a platform used to build, train, and deploy machine learning models. Users can explore the types of models to deploy in the Model Catalog, which provides foundational and general purpose models from different providers.\n",
|
||||
"\n",
|
||||
"This notebook goes over how to use an LLM hosted on an `AzureML online endpoint`"
|
||||
"This notebook goes over how to use an LLM hosted on an `Azure ML Online Endpoint`."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -26,11 +26,12 @@
|
||||
"source": [
|
||||
"## Set up\n",
|
||||
"\n",
|
||||
"To use the wrapper, you must [deploy a model on AzureML](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-use-foundation-models?view=azureml-api-2#deploying-foundation-models-to-endpoints-for-inferencing) and obtain the following parameters:\n",
|
||||
"You must [deploy a model on Azure ML](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-use-foundation-models?view=azureml-api-2#deploying-foundation-models-to-endpoints-for-inferencing) or [to Azure AI studio](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-open) and obtain the following parameters:\n",
|
||||
"\n",
|
||||
"* `endpoint_api_key`: Required - The API key provided by the endpoint\n",
|
||||
"* `endpoint_url`: Required - The REST endpoint url provided by the endpoint\n",
|
||||
"* `deployment_name`: Not required - The deployment name of the model using the endpoint"
|
||||
"* `endpoint_url`: The REST endpoint url provided by the endpoint.\n",
|
||||
"* `endpoint_api_type`: Use `endpoint_type='realtime'` when deploying models to **Realtime endpoints** (hosted managed infrastructure). Use `endpoint_type='serverless'` when deploying models using the **Pay-as-you-go** offering (model as a service).\n",
|
||||
"* `endpoint_api_key`: The API key provided by the endpoint.\n",
|
||||
"* `deployment_name`: (Optional) The deployment name of the model using the endpoint."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -46,31 +47,107 @@
|
||||
"* `HFContentFormatter`: Formats request and response data for text-generation Hugging Face models\n",
|
||||
"* `LLamaContentFormatter`: Formats request and response data for LLaMa2\n",
|
||||
"\n",
|
||||
"*Note: `OSSContentFormatter` is being deprecated and replaced with `GPT2ContentFormatter`. The logic is the same but `GPT2ContentFormatter` is a more suitable name. You can still continue to use `OSSContentFormatter` as the changes are backwards compatible.*\n",
|
||||
"\n",
|
||||
"Below is an example using a summarization model from Hugging Face."
|
||||
"*Note: `OSSContentFormatter` is being deprecated and replaced with `GPT2ContentFormatter`. The logic is the same but `GPT2ContentFormatter` is a more suitable name. You can still continue to use `OSSContentFormatter` as the changes are backwards compatible.*"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Custom Content Formatter"
|
||||
"## Examples"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Example: LlaMa 2 completions with real-time endpoints"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"HaSeul won her first music show trophy with \"So What\" on Mnet's M Countdown. Loona released their second EP titled [#] (read as hash] on February 5, 2020. HaSeul did not take part in the promotion of the album because of mental health issues. On October 19, 2020, they released their third EP called [12:00]. It was their first album to enter the Billboard 200, debuting at number 112. On June 2, 2021, the group released their fourth EP called Yummy-Yummy. On August 27, it was announced that they are making their Japanese debut on September 15 under Universal Music Japan sublabel EMI Records.\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.schema import HumanMessage\n",
|
||||
"from langchain_community.llms.azureml_endpoint import (\n",
|
||||
" AzureMLEndpointApiType,\n",
|
||||
" LlamaContentFormatter,\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"llm = AzureMLOnlineEndpoint(\n",
|
||||
" endpoint_url=\"https://<your-endpoint>.<your_region>.inference.ml.azure.com/score\",\n",
|
||||
" endpoint_api_type=AzureMLEndpointApiType.realtime,\n",
|
||||
" endpoint_api_key=\"my-api-key\",\n",
|
||||
" content_formatter=LlamaContentFormatter(),\n",
|
||||
" model_kwargs={\"temperature\": 0.8, \"max_new_tokens\": 400},\n",
|
||||
")\n",
|
||||
"response = llm.invoke(\"Write me a song about sparkling water:\")\n",
|
||||
"response"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Model parameters can also be indicated during invocation:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"response = llm.invoke(\"Write me a song about sparkling water:\", temperature=0.5)\n",
|
||||
"response"
|
||||
]
|
||||
},
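The override shown above — a per-call `temperature` taking precedence over the `temperature` fixed in `model_kwargs` at construction time — amounts to a dict merge. A minimal sketch in plain Python (`merge_call_kwargs` is a hypothetical helper, not the library's actual implementation):

```python
# Hypothetical sketch: per-invocation kwargs override the model_kwargs
# that were fixed when the LLM object was constructed.
def merge_call_kwargs(model_kwargs: dict, call_kwargs: dict) -> dict:
    merged = dict(model_kwargs)  # start from the constructor defaults
    merged.update(call_kwargs)   # per-call values win on conflict
    return merged


print(merge_call_kwargs({"temperature": 0.8, "max_new_tokens": 400}, {"temperature": 0.5}))
```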
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Example: Chat completions with pay-as-you-go deployments (model as a service)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.schema import HumanMessage\n",
|
||||
"from langchain_community.llms.azureml_endpoint import (\n",
|
||||
" AzureMLEndpointApiType,\n",
|
||||
" LlamaContentFormatter,\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"llm = AzureMLOnlineEndpoint(\n",
|
||||
" endpoint_url=\"https://<your-endpoint>.<your_region>.inference.ml.azure.com/v1/completions\",\n",
|
||||
" endpoint_api_type=AzureMLEndpointApiType.serverless,\n",
|
||||
" endpoint_api_key=\"my-api-key\",\n",
|
||||
" content_formatter=LlamaContentFormatter(),\n",
|
||||
" model_kwargs={\"temperature\": 0.8, \"max_new_tokens\": 400},\n",
|
||||
")\n",
|
||||
"response = llm.invoke(\"Write me a song about sparkling water:\")\n",
|
||||
"response"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Example: Custom content formatter\n",
|
||||
"\n",
|
||||
"Below is an example using a summarization model from Hugging Face."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import json\n",
|
||||
"import os\n",
|
||||
@@ -104,6 +181,7 @@
|
||||
"content_formatter = CustomFormatter()\n",
|
||||
"\n",
|
||||
"llm = AzureMLOnlineEndpoint(\n",
|
||||
" endpoint_api_type=\"realtime\",\n",
|
||||
" endpoint_api_key=os.getenv(\"BART_ENDPOINT_API_KEY\"),\n",
|
||||
" endpoint_url=os.getenv(\"BART_ENDPOINT_URL\"),\n",
|
||||
" model_kwargs={\"temperature\": 0.8, \"max_new_tokens\": 400},\n",
|
||||
@@ -132,7 +210,7 @@
|
||||
"that Loona will release the double A-side single, \"Hula Hoop / Star Seed\" on September 15, with a physical CD release on October \n",
|
||||
"20.[53] In December, Chuu filed an injunction to suspend her exclusive contract with Blockberry Creative.[54][55]\n",
|
||||
"\"\"\"\n",
|
||||
"summarized_text = llm(large_text)\n",
|
||||
"summarized_text = llm.invoke(large_text)\n",
|
||||
"print(summarized_text)"
|
||||
]
|
||||
},
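The contract a custom content formatter fulfils — serialize the prompt into the JSON body the endpoint expects, then pull the generated text back out of the JSON response — can be sketched without the AzureML SDK. `SummarizationFormatter` and its payload shapes below are illustrative assumptions, not the notebook's exact `CustomFormatter`:

```python
import json


class SummarizationFormatter:
    """Hypothetical content formatter for a Hugging Face summarization endpoint."""

    content_type = "application/json"
    accepts = "application/json"

    def format_request_payload(self, prompt: str, model_kwargs: dict) -> bytes:
        # Assumed request shape: {"inputs": [...], "parameters": {...}}.
        payload = {"inputs": [prompt], "parameters": model_kwargs}
        return json.dumps(payload).encode("utf-8")

    def format_response_payload(self, output: bytes) -> str:
        # Assumed response shape: a list of {"summary_text": ...} objects.
        return json.loads(output)[0]["summary_text"]


formatter = SummarizationFormatter()
body = formatter.format_request_payload("Some long text", {"max_new_tokens": 400})
print(formatter.format_response_payload(b'[{"summary_text": "A short summary."}]'))
```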
|
||||
@@ -140,22 +218,14 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Dolly with LLMChain"
|
||||
"### Example: Dolly with LLMChain"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Many people are willing to talk about themselves; it's others who seem to be stuck up. Try to understand others where they're coming from. Like minded people can build a tribe together.\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.chains import LLMChain\n",
|
||||
"from langchain.prompts import PromptTemplate\n",
|
||||
@@ -177,31 +247,22 @@
|
||||
")\n",
|
||||
"\n",
|
||||
"chain = LLMChain(llm=llm, prompt=prompt)\n",
|
||||
"print(chain.run({\"word_count\": 100, \"topic\": \"how to make friends\"}))"
|
||||
"print(chain.invoke({\"word_count\": 100, \"topic\": \"how to make friends\"}))"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Serializing an LLM\n",
|
||||
"## Serializing an LLM\n",
|
||||
"You can also save and load LLM configurations"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"\u001b[1mAzureMLOnlineEndpoint\u001b[0m\n",
|
||||
"Params: {'deployment_name': 'databricks-dolly-v2-12b-4', 'model_kwargs': {'temperature': 0.2, 'max_tokens': 150, 'top_p': 0.8, 'frequency_penalty': 0.32, 'presence_penalty': 0.072}}\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_community.llms.loading import load_llm\n",
|
||||
"\n",
|
||||
@@ -224,9 +285,9 @@
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"display_name": "langchain",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
"name": "langchain"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
@@ -238,7 +299,7 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.10.12"
|
||||
"version": "3.11.5"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
|
||||
100
docs/docs/integrations/llms/konko.ipynb
Normal file
@@ -0,0 +1,100 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "136d9ba6-c42a-435b-9e19-77ebcc7a3145",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Konko\n",
|
||||
"\n",
|
||||
">[Konko](https://www.konko.ai/) API is a fully managed Web API designed to help application developers:\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"1. Select the right LLM(s) for their application\n",
|
||||
"2. Prototype with various open-source and proprietary LLMs\n",
|
||||
"3. Access Fine Tuning for open-source LLMs to get industry-leading performance at a fraction of the cost\n",
|
||||
"4. Setup low-cost production APIs according to security, privacy, throughput, latency SLAs without infrastructure set-up or administration using Konko AI's SOC 2 compliant, multi-cloud infrastructure\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "0d896d07-82b4-4f38-8c37-f0bc8b0e4fe1",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Steps to Access Models\n",
|
||||
"1. **Explore Available Models:** Start by browsing through the [available models](https://docs.konko.ai/docs/list-of-models) on Konko. Each model caters to different use cases and capabilities.\n",
|
||||
"\n",
|
||||
"2. **Identify Suitable Endpoints:** Determine which [endpoint](https://docs.konko.ai/docs/list-of-models#list-of-available-models) (ChatCompletion or Completion) supports your selected model.\n",
|
||||
"\n",
|
||||
"3. **Selecting a Model:** [Choose a model](https://docs.konko.ai/docs/list-of-models#list-of-available-models) based on its metadata and how well it fits your use case.\n",
|
||||
"\n",
|
||||
"4. **Prompting Guidelines:** Once a model is selected, refer to the [prompting guidelines](https://docs.konko.ai/docs/prompting) to effectively communicate with it.\n",
|
||||
"\n",
|
||||
"5. **Using the API:** Finally, use the appropriate Konko [API endpoint](https://docs.konko.ai/docs/quickstart-for-completion-and-chat-completion-endpoint) to call the model and receive responses.\n",
|
||||
"\n",
|
||||
"This example goes over how to use LangChain to interact with `Konko` completion [models](https://docs.konko.ai/docs/list-of-models#konko-hosted-models-for-completion)\n",
|
||||
"\n",
|
||||
"To run this notebook, you'll need a Konko API key. You can create one by signing up on [Konko](https://www.konko.ai/)."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"id": "dd70bccb-7a65-42d0-a3f2-8116f3549da7",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"\n",
|
||||
"\n",
|
||||
"Answer:\n",
|
||||
"The Big Bang Theory is a theory that explains the origin of the universe. According to the theory, the universe began with a single point of infinite density and temperature. This point is called the singularity. The singularity exploded and expanded rapidly. The expansion of the universe is still continuing.\n",
|
||||
"The Big Bang Theory is a theory that explains the origin of the universe. According to the theory, the universe began with a single point of infinite density and temperature. This point is called the singularity. The singularity exploded and expanded rapidly. The expansion of the universe is still continuing.\n",
|
||||
"\n",
|
||||
"Question\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from langchain.llms import Konko\n",
|
||||
"\n",
|
||||
"llm = Konko(model=\"mistralai/mistral-7b-v0.1\", temperature=0.1, max_tokens=128)\n",
|
||||
"\n",
|
||||
"input_ = \"\"\"You are a helpful assistant. Explain Big Bang Theory briefly.\"\"\"\n",
|
||||
"print(llm.invoke(input_))"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "78148bf7-2211-40b4-93a7-e90139ab1169",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.11.3"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
||||
File diff suppressed because one or more lines are too long
@@ -139,7 +139,9 @@
|
||||
"\n",
|
||||
"chain_with_history = RunnableWithMessageHistory(\n",
|
||||
" chain,\n",
|
||||
" RedisChatMessageHistory,\n",
|
||||
" lambda session_id: RedisChatMessageHistory(\n",
|
||||
" session_id, url=\"redis://localhost:6379\"\n",
|
||||
" ),\n",
|
||||
" input_messages_key=\"question\",\n",
|
||||
" history_messages_key=\"history\",\n",
|
||||
")\n",
|
||||
|
||||
266
docs/docs/integrations/memory/tidb_chat_message_history.ipynb
Normal file
@@ -0,0 +1,266 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# TiDB\n",
|
||||
"\n",
|
||||
"> [TiDB](https://github.com/pingcap/tidb) is an open-source, cloud-native, distributed, MySQL-Compatible database for elastic scale and real-time analytics.\n",
|
||||
"\n",
|
||||
"This notebook introduces how to use TiDB to store chat message history. "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Setup\n",
|
||||
"\n",
|
||||
"Firstly, we will install the following dependencies:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%pip install --upgrade --quiet langchain langchain_openai"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Configuring your OpenAI Key"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import getpass\n",
|
||||
"import os\n",
|
||||
"\n",
|
||||
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"Input your OpenAI API key:\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Finally, we will configure the connection to a TiDB. In this notebook, we will follow the standard connection method provided by TiDB Cloud to establish a secure and efficient database connection."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# copy from tidb cloud console\n",
|
||||
"tidb_connection_string_template = \"mysql+pymysql://<USER>:<PASSWORD>@<HOST>:4000/<DB>?ssl_ca=/etc/ssl/cert.pem&ssl_verify_cert=true&ssl_verify_identity=true\"\n",
|
||||
"tidb_password = getpass.getpass(\"Input your TiDB password:\")\n",
|
||||
"tidb_connection_string = tidb_connection_string_template.replace(\n",
|
||||
" \"<PASSWORD>\", tidb_password\n",
|
||||
")"
|
||||
]
|
||||
},
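Substituting the password with a plain `str.replace` breaks if the password contains URL-reserved characters such as `@` or `/`. A small sketch that URL-encodes credentials before filling in the template (`build_tidb_dsn` is a hypothetical helper, not part of this notebook):

```python
from urllib.parse import quote_plus


def build_tidb_dsn(template: str, user: str, password: str) -> str:
    # URL-encode the credentials so reserved characters survive in the DSN.
    return template.replace("<USER>", quote_plus(user)).replace(
        "<PASSWORD>", quote_plus(password)
    )


# <HOST> and <DB> come pre-filled from the TiDB Cloud console template.
template = (
    "mysql+pymysql://<USER>:<PASSWORD>@<HOST>:4000/<DB>"
    "?ssl_ca=/etc/ssl/cert.pem&ssl_verify_cert=true&ssl_verify_identity=true"
)
dsn = build_tidb_dsn(template, "root", "p@ss/word")
print(dsn)
```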
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Generating historical data\n",
|
||||
"\n",
|
||||
"Creating a set of historical data, which will serve as the foundation for our upcoming demonstrations."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from datetime import datetime\n",
|
||||
"\n",
|
||||
"from langchain_community.chat_message_histories import TiDBChatMessageHistory\n",
|
||||
"\n",
|
||||
"history = TiDBChatMessageHistory(\n",
|
||||
" connection_string=tidb_connection_string,\n",
|
||||
" session_id=\"code_gen\",\n",
|
||||
" earliest_time=datetime.utcnow(), # Optional to set earliest_time to load messages after this time point.\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"history.add_user_message(\"How's our feature going?\")\n",
|
||||
"history.add_ai_message(\n",
|
||||
" \"It's going well. We are working on testing now. It will be released in Feb.\"\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"[HumanMessage(content=\"How's our feature going?\"),\n",
|
||||
" AIMessage(content=\"It's going well. We are working on testing now. It will be released in Feb.\")]"
|
||||
]
|
||||
},
|
||||
"execution_count": 4,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"history.messages"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Chatting with historical data\n",
|
||||
"\n",
|
||||
"Let’s build upon the historical data generated earlier to create a dynamic chat interaction. \n",
|
||||
"\n",
|
||||
"Firstly, Creating a Chat Chain with LangChain:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
|
||||
"from langchain_openai import ChatOpenAI\n",
|
||||
"\n",
|
||||
"prompt = ChatPromptTemplate.from_messages(\n",
|
||||
" [\n",
|
||||
" (\n",
|
||||
" \"system\",\n",
|
||||
" \"You're an assistant who's good at coding. You're helping a startup build\",\n",
|
||||
" ),\n",
|
||||
" MessagesPlaceholder(variable_name=\"history\"),\n",
|
||||
" (\"human\", \"{question}\"),\n",
|
||||
" ]\n",
|
||||
")\n",
|
||||
"chain = prompt | ChatOpenAI()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Building a Runnable on History:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_core.runnables.history import RunnableWithMessageHistory\n",
|
||||
"\n",
|
||||
"chain_with_history = RunnableWithMessageHistory(\n",
|
||||
" chain,\n",
|
||||
" lambda session_id: TiDBChatMessageHistory(\n",
|
||||
" session_id=session_id, connection_string=tidb_connection_string\n",
|
||||
" ),\n",
|
||||
" input_messages_key=\"question\",\n",
|
||||
" history_messages_key=\"history\",\n",
|
||||
")"
|
||||
]
|
||||
},
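The second argument to `RunnableWithMessageHistory` is a factory mapping `session_id` to a history object, so repeated calls with the same id keep appending to the same conversation. That pattern can be sketched in plain Python with a dict-backed store (`InMemoryHistory` is a hypothetical stand-in for `TiDBChatMessageHistory`):

```python
class InMemoryHistory:
    """Hypothetical stand-in for a persistent chat message history."""

    def __init__(self):
        self.messages = []

    def add_user_message(self, text):
        self.messages.append(("human", text))

    def add_ai_message(self, text):
        self.messages.append(("ai", text))


_store = {}


def get_session_history(session_id: str) -> InMemoryHistory:
    # The same session_id always maps back to the same history object.
    return _store.setdefault(session_id, InMemoryHistory())


history = get_session_history("code_gen")
history.add_user_message("How's our feature going?")
history.add_ai_message("It's going well.")
print(len(get_session_history("code_gen").messages))
```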
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Initiating the Chat:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"AIMessage(content='There are 31 days in January, so there are 30 days until our feature is released in February.')"
|
||||
]
|
||||
},
|
||||
"execution_count": 7,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"response = chain_with_history.invoke(\n",
|
||||
" {\"question\": \"Today is Jan 1st. How many days until our feature is released?\"},\n",
|
||||
" config={\"configurable\": {\"session_id\": \"code_gen\"}},\n",
|
||||
")\n",
|
||||
"response"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Checking the history data"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"[HumanMessage(content=\"How's our feature going?\"),\n",
|
||||
" AIMessage(content=\"It's going well. We are working on testing now. It will be released in Feb.\"),\n",
|
||||
" HumanMessage(content='Today is Jan 1st. How many days until our feature is released?'),\n",
|
||||
" AIMessage(content='There are 31 days in January, so there are 30 days until our feature is released in February.')]"
|
||||
]
|
||||
},
|
||||
"execution_count": 8,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"history.reload_cache()\n",
|
||||
"history.messages"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "langchain",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.10.13"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
}
|
||||
@@ -17,6 +17,8 @@ google/flan\* models can be viewed [here](https://deepinfra.com/models?type=text
|
||||
|
||||
You can view a [list of request and response parameters](https://deepinfra.com/meta-llama/Llama-2-70b-chat-hf/api).
|
||||
|
||||
Chat models [follow openai api](https://deepinfra.com/meta-llama/Llama-2-70b-chat-hf/api?example=openai-http)
|
||||
|
||||
## Wrappers
|
||||
|
||||
### LLM
|
||||
@@ -34,3 +36,11 @@ There is also an DeepInfra Embeddings wrapper, you can access with
|
||||
```python
|
||||
from langchain_community.embeddings import DeepInfraEmbeddings
|
||||
```
|
||||
|
||||
### Chat Models
|
||||
|
||||
There is a chat-oriented wrapper as well, accessible with
|
||||
|
||||
```python
|
||||
from langchain_community.chat_models import ChatDeepInfra
|
||||
```
|
||||
|
||||
24
docs/docs/integrations/providers/kdbai.mdx
Normal file
@@ -0,0 +1,24 @@
|
||||
# KDB.AI

>[KDB.AI](https://kdb.ai) is a powerful knowledge-based vector database and search engine that allows you to build scalable, reliable AI applications, using real-time data, by providing advanced search, recommendation and personalization.

## Installation and Setup

Install the Python SDK:

```bash
pip install kdbai-client
```

## Vector store

There exists a wrapper around KDB.AI indexes, allowing you to use it as a vectorstore, whether for semantic search or example selection.

```python
from langchain_community.vectorstores import KDBAI
```

For a more detailed walkthrough of the KDB.AI vectorstore, see [this notebook](/docs/integrations/vectorstores/kdbai)
|
||||
@@ -60,21 +60,27 @@ konko.Model.list()
|
||||
|
||||
## Calling a model
|
||||
|
||||
Find a model on the [Konko Introduction page](https://docs.konko.ai/docs#available-models)
|
||||
|
||||
For example, the model id for this [Llama 2 model](https://docs.konko.ai/docs/meta-llama-2-13b-chat) would be `"meta-llama/Llama-2-13b-chat-hf"`.
|
||||
Find a model on the [Konko Introduction page](https://docs.konko.ai/docs/list-of-models)
|
||||
|
||||
Another way to find the list of models running on the Konko instance is through this [endpoint](https://docs.konko.ai/reference/listmodels).
|
||||
|
||||
From here, we can initialize our model:
|
||||
## Examples of Endpoint Usage
|
||||
|
||||
```python
|
||||
chat_instance = ChatKonko(max_tokens=10, model = 'meta-llama/Llama-2-13b-chat-hf')
|
||||
```
|
||||
|
||||
And run it:
|
||||
- **ChatCompletion with Mistral-7B:**
|
||||
```python
|
||||
chat_instance = ChatKonko(max_tokens=10, model = 'mistralai/mistral-7b-instruct-v0.1')
|
||||
msg = HumanMessage(content="Hi")
|
||||
chat_response = chat_instance([msg])
|
||||
|
||||
```
|
||||
|
||||
```python
|
||||
msg = HumanMessage(content="Hi")
|
||||
chat_response = chat_instance([msg])
|
||||
```
|
||||
- **Completion with mistralai/Mistral-7B-v0.1:**
|
||||
```python
|
||||
from langchain.llms import Konko
|
||||
llm = Konko(max_tokens=800, model='mistralai/Mistral-7B-v0.1')
|
||||
prompt = "Generate a Product Description for Apple iPhone 15"
|
||||
response = llm(prompt)
|
||||
```
|
||||
|
||||
For further assistance, contact [support@konko.ai](mailto:support@konko.ai) or join our [Discord](https://discord.gg/TXV2s3z7RZ).
|
||||
34
docs/docs/integrations/providers/tigergraph.mdx
Normal file
@@ -0,0 +1,34 @@
|
||||
# TigerGraph

This page covers how to use the TigerGraph ecosystem within LangChain.

What is TigerGraph?

**TigerGraph in a nutshell:**

- TigerGraph is a natively distributed and high-performance graph database.
- The storage of data in a graph format of vertices and edges leads to rich relationships, ideal for grounding LLM responses.
- Get started quickly with TigerGraph by visiting [their website](https://tigergraph.com/).

## Installation and Setup

- Install the Python SDK with `pip install pyTigerGraph`

## Wrappers

### TigerGraph Store
To utilize the TigerGraph InquiryAI functionality, you can import `TigerGraph` from `langchain_community.graphs`.

```python
import pyTigerGraph as tg
conn = tg.TigerGraphConnection(host="DATABASE_HOST_HERE", graphname="GRAPH_NAME_HERE", username="USERNAME_HERE", password="PASSWORD_HERE")

# ==== CONFIGURE INQUIRYAI HOST ====
conn.ai.configureInquiryAIHost("INQUIRYAI_HOST_HERE")

from langchain_community.graphs import TigerGraph
graph = TigerGraph(conn)
result = graph.query("How many servers are there?")
print(result)
```
|
||||
|
||||
@@ -24,7 +24,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"execution_count": null,
|
||||
"id": "b37bd138-4f3c-4d2c-bc4b-be705ce27a09",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
@@ -40,7 +40,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"execution_count": 13,
|
||||
"id": "c47b0b26-6d51-4beb-aedb-ad09740a9a2b",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
@@ -55,19 +55,12 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "2268c17f-5cc3-457b-928b-0d470154c3a8",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"OpenAI API Key:\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"id": "28e8dc12",
|
||||
"metadata": {},
|
||||
"execution_count": 14,
|
||||
"id": "6fa3d916",
|
||||
"metadata": {
|
||||
"jp-MarkdownHeadingCollapsed": true,
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Helper function for printing docs\n",
|
||||
@@ -95,8 +88,8 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 22,
|
||||
"id": "9fbcc58f",
|
||||
"execution_count": 15,
|
||||
"id": "b7648612",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
@@ -111,28 +104,20 @@
|
||||
"----------------------------------------------------------------------------------------------------\n",
|
||||
"Document 2:\n",
|
||||
"\n",
|
||||
"We cannot let this happen. \n",
|
||||
"\n",
|
||||
"Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n",
|
||||
"\n",
|
||||
"Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.\n",
|
||||
"----------------------------------------------------------------------------------------------------\n",
|
||||
"Document 3:\n",
|
||||
"\n",
|
||||
"As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \n",
|
||||
"\n",
|
||||
"While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice.\n",
|
||||
"----------------------------------------------------------------------------------------------------\n",
|
||||
"Document 3:\n",
|
||||
"\n",
|
||||
"A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n",
|
||||
"\n",
|
||||
"And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.\n",
|
||||
"----------------------------------------------------------------------------------------------------\n",
|
||||
"Document 4:\n",
|
||||
"\n",
|
||||
"He met the Ukrainian people. \n",
|
||||
"\n",
|
||||
"From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. \n",
|
||||
"\n",
|
||||
"Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland. \n",
|
||||
"\n",
|
||||
"In this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight.\n",
|
||||
"----------------------------------------------------------------------------------------------------\n",
|
||||
"Document 5:\n",
|
||||
"\n",
|
||||
"I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves. \n",
|
||||
"\n",
|
||||
"I’ve worked on these issues a long time. \n",
|
||||
@@ -141,64 +126,86 @@
|
||||
"\n",
|
||||
"So let’s not abandon our streets. Or choose between safety and equal justice.\n",
|
||||
"----------------------------------------------------------------------------------------------------\n",
|
||||
"Document 5:\n",
|
||||
"\n",
|
||||
"He met the Ukrainian people. \n",
|
||||
"\n",
|
||||
"From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. \n",
|
||||
"\n",
|
||||
"Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland. \n",
|
||||
"\n",
|
||||
"In this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight.\n",
|
||||
"----------------------------------------------------------------------------------------------------\n",
|
||||
"Document 6:\n",
|
||||
"\n",
|
||||
"So let’s not abandon our streets. Or choose between safety and equal justice. \n",
|
||||
"\n",
|
||||
"Let’s come together to protect our communities, restore trust, and hold law enforcement accountable. \n",
|
||||
"\n",
|
||||
"That’s why the Justice Department required body cameras, banned chokeholds, and restricted no-knock warrants for its officers.\n",
|
||||
"----------------------------------------------------------------------------------------------------\n",
|
||||
"Document 7:\n",
|
||||
"\n",
|
||||
"But that trickle-down theory led to weaker economic growth, lower wages, bigger deficits, and the widest gap between those at the top and everyone else in nearly a century. \n",
|
||||
"\n",
|
||||
"Vice President Harris and I ran for office with a new economic vision for America. \n",
|
||||
"\n",
|
||||
"Invest in America. Educate Americans. Grow the workforce. Build the economy from the bottom up \n",
|
||||
"and the middle out, not from the top down. \n",
|
||||
"\n",
|
||||
"Because we know that when the middle class grows, the poor have a ladder up and the wealthy do very well. \n",
|
||||
"\n",
|
||||
"America used to have the best roads, bridges, and airports on Earth. \n",
|
||||
"\n",
|
||||
"Now our infrastructure is ranked 13th in the world.\n",
|
||||
"----------------------------------------------------------------------------------------------------\n",
|
||||
"Document 7:\n",
|
||||
"\n",
|
||||
"And tonight, I’m announcing that the Justice Department will name a chief prosecutor for pandemic fraud. \n",
|
||||
"\n",
|
||||
"By the end of this year, the deficit will be down to less than half what it was before I took office. \n",
|
||||
"\n",
|
||||
"The only president ever to cut the deficit by more than one trillion dollars in a single year. \n",
|
||||
"\n",
|
||||
"Lowering your costs also means demanding more competition. \n",
|
||||
"\n",
|
||||
"I’m a capitalist, but capitalism without competition isn’t capitalism. \n",
|
||||
"\n",
|
||||
"It’s exploitation—and it drives up prices.\n",
|
||||
"and the middle out, not from the top down.\n",
|
||||
"----------------------------------------------------------------------------------------------------\n",
|
||||
"Document 8:\n",
|
||||
"\n",
|
||||
"For the past 40 years we were told that if we gave tax breaks to those at the very top, the benefits would trickle down to everyone else. \n",
|
||||
"A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n",
|
||||
"\n",
|
||||
"But that trickle-down theory led to weaker economic growth, lower wages, bigger deficits, and the widest gap between those at the top and everyone else in nearly a century. \n",
|
||||
"\n",
|
||||
"Vice President Harris and I ran for office with a new economic vision for America.\n",
|
||||
"And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.\n",
|
||||
"----------------------------------------------------------------------------------------------------\n",
|
||||
"Document 9:\n",
|
||||
"\n",
|
||||
"All told, we created 369,000 new manufacturing jobs in America just last year. \n",
|
||||
"The widow of Sergeant First Class Heath Robinson. \n",
|
||||
"\n",
|
||||
"Powered by people I’ve met like JoJo Burgess, from generations of union steelworkers from Pittsburgh, who’s here with us tonight. \n",
|
||||
"He was born a soldier. Army National Guard. Combat medic in Kosovo and Iraq. \n",
|
||||
"\n",
|
||||
"Stationed near Baghdad, just yards from burn pits the size of football fields. \n",
|
||||
"\n",
|
||||
"Heath’s widow Danielle is here with us tonight. They loved going to Ohio State football games. He loved building Legos with their daughter. \n",
|
||||
"\n",
|
||||
"But cancer from prolonged exposure to burn pits ravaged Heath’s lungs and body. \n",
|
||||
"\n",
|
||||
"Danielle says Heath was a fighter to the very end.\n",
|
||||
"----------------------------------------------------------------------------------------------------\n",
|
||||
"Document 10:\n",
|
||||
"\n",
|
||||
"As I’ve told Xi Jinping, it is never a good bet to bet against the American people. \n",
|
||||
"\n",
|
||||
"We’ll create good jobs for millions of Americans, modernizing roads, airports, ports, and waterways all across America. \n",
|
||||
"\n",
|
||||
"And we’ll do it all to withstand the devastating effects of the climate crisis and promote environmental justice.\n",
|
||||
"----------------------------------------------------------------------------------------------------\n",
|
||||
"Document 11:\n",
|
||||
"\n",
|
||||
"As Ohio Senator Sherrod Brown says, “It’s time to bury the label “Rust Belt.” \n",
|
||||
"\n",
|
||||
"It’s time. \n",
|
||||
"\n",
|
||||
"But with all the bright spots in our economy, record job growth and higher wages, too many families are struggling to keep up with the bills.\n",
|
||||
"But with all the bright spots in our economy, record job growth and higher wages, too many families are struggling to keep up with the bills. \n",
|
||||
"\n",
|
||||
"Inflation is robbing them of the gains they might otherwise feel. \n",
|
||||
"\n",
|
||||
"I get it. That’s why my top priority is getting prices under control.\n",
|
||||
"----------------------------------------------------------------------------------------------------\n",
|
||||
"Document 10:\n",
|
||||
"Document 12:\n",
|
||||
"\n",
|
||||
"I’m also calling on Congress: pass a law to make sure veterans devastated by toxic exposures in Iraq and Afghanistan finally get the benefits and comprehensive health care they deserve. \n",
|
||||
"This was a bipartisan effort, and I want to thank the members of both parties who worked to make it happen. \n",
|
||||
"\n",
|
||||
"And fourth, let’s end cancer as we know it. \n",
|
||||
"We’re done talking about infrastructure weeks. \n",
|
||||
"\n",
|
||||
"This is personal to me and Jill, to Kamala, and to so many of you. \n",
|
||||
"We’re going to have an infrastructure decade. \n",
|
||||
"\n",
|
||||
"Cancer is the #2 cause of death in America–second only to heart disease.\n",
|
||||
"It is going to transform America and put us on a path to win the economic competition of the 21st Century that we face with the rest of the world—particularly with China. \n",
|
||||
"\n",
|
||||
"As I’ve told Xi Jinping, it is never a good bet to bet against the American people.\n",
|
||||
"----------------------------------------------------------------------------------------------------\n",
|
||||
"Document 11:\n",
|
||||
"Document 13:\n",
|
||||
"\n",
|
||||
"He will never extinguish their love of freedom. He will never weaken the resolve of the free world. \n",
|
||||
"\n",
|
||||
@@ -210,100 +217,8 @@
|
||||
"\n",
|
||||
"I understand.\n",
|
||||
"----------------------------------------------------------------------------------------------------\n",
|
||||
"Document 12:\n",
|
||||
"\n",
|
||||
"Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n",
|
||||
"\n",
|
||||
"Last year COVID-19 kept us apart. This year we are finally together again. \n",
|
||||
"\n",
|
||||
"Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n",
|
||||
"\n",
|
||||
"With a duty to one another to the American people to the Constitution. \n",
|
||||
"\n",
|
||||
"And with an unwavering resolve that freedom will always triumph over tyranny.\n",
|
||||
"----------------------------------------------------------------------------------------------------\n",
|
||||
"Document 13:\n",
|
||||
"\n",
|
||||
"I know. \n",
|
||||
"\n",
|
||||
"One of those soldiers was my son Major Beau Biden. \n",
|
||||
"\n",
|
||||
"We don’t know for sure if a burn pit was the cause of his brain cancer, or the diseases of so many of our troops. \n",
|
||||
"\n",
|
||||
"But I’m committed to finding out everything we can. \n",
|
||||
"\n",
|
||||
"Committed to military families like Danielle Robinson from Ohio. \n",
|
||||
"\n",
|
||||
"The widow of Sergeant First Class Heath Robinson. \n",
|
||||
"\n",
|
||||
"He was born a soldier. Army National Guard. Combat medic in Kosovo and Iraq.\n",
|
||||
"----------------------------------------------------------------------------------------------------\n",
|
||||
"Document 14:\n",
|
||||
"\n",
|
||||
"And soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. \n",
|
||||
"\n",
|
||||
"So tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. \n",
|
||||
"\n",
|
||||
"First, beat the opioid epidemic. \n",
|
||||
"\n",
|
||||
"There is so much we can do. Increase funding for prevention, treatment, harm reduction, and recovery.\n",
|
||||
"----------------------------------------------------------------------------------------------------\n",
|
||||
"Document 15:\n",
|
||||
"\n",
|
||||
"Third, support our veterans. \n",
|
||||
"\n",
|
||||
"Veterans are the best of us. \n",
|
||||
"\n",
|
||||
"I’ve always believed that we have a sacred obligation to equip all those we send to war and care for them and their families when they come home. \n",
|
||||
"\n",
|
||||
"My administration is providing assistance with job training and housing, and now helping lower-income veterans get VA care debt-free. \n",
|
||||
"\n",
|
||||
"Our troops in Iraq and Afghanistan faced many dangers.\n",
|
||||
"----------------------------------------------------------------------------------------------------\n",
|
||||
"Document 16:\n",
|
||||
"\n",
|
||||
"When we invest in our workers, when we build the economy from the bottom up and the middle out together, we can do something we haven’t done in a long time: build a better America. \n",
|
||||
"\n",
|
||||
"For more than two years, COVID-19 has impacted every decision in our lives and the life of the nation. \n",
|
||||
"\n",
|
||||
"And I know you’re tired, frustrated, and exhausted. \n",
|
||||
"\n",
|
||||
"But I also know this.\n",
|
||||
"----------------------------------------------------------------------------------------------------\n",
|
||||
"Document 17:\n",
|
||||
"\n",
|
||||
"Now is the hour. \n",
|
||||
"\n",
|
||||
"Our moment of responsibility. \n",
|
||||
"\n",
|
||||
"Our test of resolve and conscience, of history itself. \n",
|
||||
"\n",
|
||||
"It is in this moment that our character is formed. Our purpose is found. Our future is forged. \n",
|
||||
"\n",
|
||||
"Well I know this nation. \n",
|
||||
"\n",
|
||||
"We will meet the test. \n",
|
||||
"\n",
|
||||
"To protect freedom and liberty, to expand fairness and opportunity. \n",
|
||||
"\n",
|
||||
"We will save democracy. \n",
|
||||
"\n",
|
||||
"As hard as these times have been, I am more optimistic about America today than I have been my whole life.\n",
|
||||
"----------------------------------------------------------------------------------------------------\n",
|
||||
"Document 18:\n",
|
||||
"\n",
|
||||
"He didn’t know how to stop fighting, and neither did she. \n",
|
||||
"\n",
|
||||
"Through her pain she found purpose to demand we do better. \n",
|
||||
"\n",
|
||||
"Tonight, Danielle—we are. \n",
|
||||
"\n",
|
||||
"The VA is pioneering new ways of linking toxic exposures to diseases, already helping more veterans get benefits. \n",
|
||||
"\n",
|
||||
"And tonight, I’m announcing we’re expanding eligibility to veterans suffering from nine respiratory cancers.\n",
|
||||
"----------------------------------------------------------------------------------------------------\n",
|
||||
"Document 19:\n",
|
||||
"\n",
|
||||
"I understand. \n",
|
||||
"\n",
|
||||
"I remember when my Dad had to leave our home in Scranton, Pennsylvania to find work. I grew up in a family where if the price of food went up, you felt it. \n",
|
||||
@@ -314,26 +229,87 @@
|
||||
"\n",
|
||||
"Few pieces of legislation have done more in a critical moment in our history to lift us out of crisis.\n",
|
||||
"----------------------------------------------------------------------------------------------------\n",
|
||||
"Document 15:\n",
|
||||
"\n",
|
||||
"My administration is providing assistance with job training and housing, and now helping lower-income veterans get VA care debt-free. \n",
|
||||
"\n",
|
||||
"Our troops in Iraq and Afghanistan faced many dangers. \n",
|
||||
"\n",
|
||||
"One was stationed at bases and breathing in toxic smoke from “burn pits” that incinerated wastes of war—medical and hazard material, jet fuel, and more. \n",
|
||||
"\n",
|
||||
"When they came home, many of the world’s fittest and best trained warriors were never the same. \n",
|
||||
"\n",
|
||||
"Headaches. Numbness. Dizziness.\n",
|
||||
"----------------------------------------------------------------------------------------------------\n",
|
||||
"Document 16:\n",
|
||||
"\n",
|
||||
"Danielle says Heath was a fighter to the very end. \n",
|
||||
"\n",
|
||||
"He didn’t know how to stop fighting, and neither did she. \n",
|
||||
"\n",
|
||||
"Through her pain she found purpose to demand we do better. \n",
|
||||
"\n",
|
||||
"Tonight, Danielle—we are. \n",
|
||||
"\n",
|
||||
"The VA is pioneering new ways of linking toxic exposures to diseases, already helping more veterans get benefits. \n",
|
||||
"\n",
|
||||
"And tonight, I’m announcing we’re expanding eligibility to veterans suffering from nine respiratory cancers.\n",
|
||||
"----------------------------------------------------------------------------------------------------\n",
|
||||
"Document 17:\n",
|
||||
"\n",
|
||||
"Cancer is the #2 cause of death in America–second only to heart disease. \n",
|
||||
"\n",
|
||||
"Last month, I announced our plan to supercharge \n",
|
||||
"the Cancer Moonshot that President Obama asked me to lead six years ago. \n",
|
||||
"\n",
|
||||
"Our goal is to cut the cancer death rate by at least 50% over the next 25 years, turn more cancers from death sentences into treatable diseases. \n",
|
||||
"\n",
|
||||
"More support for patients and families. \n",
|
||||
"\n",
|
||||
"To get there, I call on Congress to fund ARPA-H, the Advanced Research Projects Agency for Health.\n",
|
||||
"----------------------------------------------------------------------------------------------------\n",
|
||||
"Document 18:\n",
|
||||
"\n",
|
||||
"My plan to fight inflation will lower your costs and lower the deficit. \n",
|
||||
"\n",
|
||||
"17 Nobel laureates in economics say my plan will ease long-term inflationary pressures. Top business leaders and most Americans support my plan. And here’s the plan: \n",
|
||||
"\n",
|
||||
"First – cut the cost of prescription drugs. Just look at insulin. One in ten Americans has diabetes. In Virginia, I met a 13-year-old boy named Joshua Davis.\n",
|
||||
"----------------------------------------------------------------------------------------------------\n",
|
||||
"Document 19:\n",
|
||||
"\n",
|
||||
"Let’s pass the Paycheck Fairness Act and paid leave. \n",
|
||||
"\n",
|
||||
"Raise the minimum wage to $15 an hour and extend the Child Tax Credit, so no one has to raise a family in poverty. \n",
|
||||
"\n",
|
||||
"Let’s increase Pell Grants and increase our historic support of HBCUs, and invest in what Jill—our First Lady who teaches full-time—calls America’s best-kept secret: community colleges. \n",
|
||||
"\n",
|
||||
"And let’s pass the PRO Act when a majority of workers want to form a union—they shouldn’t be stopped.\n",
|
||||
"----------------------------------------------------------------------------------------------------\n",
|
||||
"Document 20:\n",
|
||||
"\n",
|
||||
"So let’s not abandon our streets. Or choose between safety and equal justice. \n",
|
||||
"Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n",
|
||||
"\n",
|
||||
"Let’s come together to protect our communities, restore trust, and hold law enforcement accountable. \n",
|
||||
"Last year COVID-19 kept us apart. This year we are finally together again. \n",
|
||||
"\n",
|
||||
"That’s why the Justice Department required body cameras, banned chokeholds, and restricted no-knock warrants for its officers.\n"
|
||||
"Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n",
|
||||
"\n",
|
||||
"With a duty to one another to the American people to the Constitution. \n",
|
||||
"\n",
|
||||
"And with an unwavering resolve that freedom will always triumph over tyranny.\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
|
||||
"from langchain_community.document_loaders import TextLoader\n",
|
||||
"from langchain_community.embeddings import CohereEmbeddings\n",
|
||||
"from langchain_community.vectorstores import FAISS\n",
|
||||
"from langchain_openai import OpenAIEmbeddings\n",
|
||||
"\n",
|
||||
"documents = TextLoader(\"../../modules/state_of_the_union.txt\").load()\n",
|
||||
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=100)\n",
|
||||
"texts = text_splitter.split_documents(documents)\n",
|
||||
"retriever = FAISS.from_documents(texts, OpenAIEmbeddings()).as_retriever(\n",
|
||||
"retriever = FAISS.from_documents(texts, CohereEmbeddings()).as_retriever(\n",
|
||||
" search_kwargs={\"k\": 20}\n",
|
||||
")\n",
|
||||
"\n",
|
||||
@@ -353,8 +329,8 @@
|
||||
},
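The splitter configuration above (`chunk_size=500`, `chunk_overlap=100`) can be sketched as a fixed-window chunker. This is a simplification: the real `RecursiveCharacterTextSplitter` also prefers to break on natural separators such as paragraphs and sentences before falling back to fixed widths.

```python
def split_with_overlap(text: str, chunk_size: int = 500, overlap: int = 100) -> list:
    # Each chunk starts (chunk_size - overlap) characters after the previous
    # one, so consecutive chunks share `overlap` characters of context.
    step = chunk_size - overlap
    return [text[i : i + chunk_size] for i in range(0, max(len(text) - overlap, 1), step)]


text = "".join(str(i % 10) for i in range(1200))
chunks = split_with_overlap(text)
print([len(c) for c in chunks])  # [500, 500, 400]
```

The overlap means a sentence cut at a chunk boundary still appears whole in the neighboring chunk, which helps retrieval quality.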
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 31,
|
||||
"id": "9a658023",
|
||||
"execution_count": 16,
|
||||
"id": "b83dfedb",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
@@ -388,9 +364,9 @@
|
||||
"source": [
|
||||
"from langchain.retrievers import ContextualCompressionRetriever\n",
|
||||
"from langchain.retrievers.document_compressors import CohereRerank\n",
|
||||
"from langchain_openai import OpenAI\n",
|
||||
"from langchain_community.llms import Cohere\n",
|
||||
"\n",
|
||||
"llm = OpenAI(temperature=0)\n",
|
||||
"llm = Cohere(temperature=0)\n",
|
||||
"compressor = CohereRerank()\n",
|
||||
"compression_retriever = ContextualCompressionRetriever(\n",
|
||||
" base_compressor=compressor, base_retriever=retriever\n",
|
||||
@@ -412,7 +388,7 @@
|
||||
},
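The rerank-then-compress idea in the cell above can be illustrated with a toy scorer: retrieve a broad top-k, rescore every document against the query, and keep only the best few. `CohereRerank` does the rescoring with a trained relevance model; the word-overlap function below is only a stand-in for illustration.

```python
import re


def rerank(query: str, docs: list, top_n: int = 3) -> list:
    # Score each document by word overlap with the query, keep the best top_n.
    q = set(re.findall(r"\w+", query.lower()))

    def score(doc: str) -> int:
        return len(q & set(re.findall(r"\w+", doc.lower())))

    return sorted(docs, key=score, reverse=True)[:top_n]


docs = [
    "The president nominated Ketanji Brown Jackson to the Supreme Court.",
    "Inflation is robbing families of the gains they might otherwise feel.",
    "Jackson is a consensus builder with broad support.",
    "We created 369,000 new manufacturing jobs last year.",
]
print(rerank("Ketanji Brown Jackson", docs, top_n=2))
```

Passing a small `top_n` to the compressor is what shrinks the 20 retrieved documents down to the few most relevant ones before they reach the LLM.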
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 32,
|
||||
"execution_count": 17,
|
||||
"id": "367dafe0",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
@@ -422,19 +398,19 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 33,
|
||||
"execution_count": 18,
|
||||
"id": "ae697ca4",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"chain = RetrievalQA.from_chain_type(\n",
|
||||
" llm=OpenAI(temperature=0), retriever=compression_retriever\n",
|
||||
" llm=Cohere(temperature=0), retriever=compression_retriever\n",
|
||||
")"
|
||||
]
|
||||
},
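What `RetrievalQA.from_chain_type` does for a "stuff"-style chain can be sketched in a few lines: fetch documents for the query, stuff them into a prompt, and hand that prompt to the LLM. The retriever and LLM below are trivial stand-ins, not the real Cohere/FAISS components.

```python
def tiny_retriever(query: str) -> list:
    # Stand-in for the compression retriever: keyword match over a tiny corpus.
    corpus = [
        "The president nominated Ketanji Brown Jackson to the Supreme Court.",
        "Inflation is the administration's top priority.",
    ]
    words = query.lower().split()
    return [d for d in corpus if any(w in d.lower() for w in words)]


def tiny_llm(prompt: str) -> str:
    # Stand-in for the LLM: echo back the context it was given.
    return "Based on the context: " + prompt.split("Context:\n", 1)[1]


def retrieval_qa(query: str) -> str:
    docs = tiny_retriever(query)
    prompt = f"Answer using the context.\nQuestion: {query}\nContext:\n" + "\n".join(docs)
    return tiny_llm(prompt)


print(retrieval_qa("Ketanji Brown Jackson"))
```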
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 34,
|
||||
"execution_count": 19,
|
||||
"id": "46ee62fc",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
@@ -442,10 +418,10 @@
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'query': 'What did the president say about Ketanji Brown Jackson',\n",
|
||||
" 'result': \" The president said that Ketanji Brown Jackson is one of the nation's top legal minds and that she is a consensus builder who has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.\"}"
|
||||
" 'result': \" The president speaks highly of Ketanji Brown Jackson, stating that she is one of the nation's top legal minds, and will continue the legacy of excellence of Justice Breyer. The president also mentions that he worked with her family and that she comes from a family of public school educators and police officers. Since her nomination, she has received support from various groups, including the Fraternal Order of Police and judges from both major political parties. \\n\\nWould you like me to extract another sentence from the provided text? \"}"
|
||||
]
|
||||
},
|
||||
"execution_count": 34,
|
||||
"execution_count": 19,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
@@ -453,14 +429,6 @@
|
||||
"source": [
|
||||
"chain({\"query\": query})"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "700a8133",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
|
||||
186
docs/docs/integrations/stores/sql.ipynb
Normal file

@@ -0,0 +1,186 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "raw",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"---\n",
|
||||
"sidebar_label: SQL\n",
|
||||
"---"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# SQLStore\n",
|
||||
"\n",
|
||||
"The `SQLStrStore` and `SQLDocStore` implement remote data access and persistence to store strings or LangChain documents in your SQL instance."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"['value1', 'value2']\n",
|
||||
"['key2']\n",
|
||||
"['key2']\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from langchain_community.storage import SQLStrStore\n",
|
||||
"\n",
|
||||
"# simple example using an SQLStrStore to store strings\n",
|
||||
"# same as you would use in \"InMemoryStore\" but using SQL persistence\n",
|
||||
"CONNECTION_STRING = \"postgresql+psycopg2://user:pass@localhost:5432/db\"\n",
|
||||
"COLLECTION_NAME = \"test_collection\"\n",
|
||||
"\n",
|
||||
"store = SQLStrStore(\n",
|
||||
" collection_name=COLLECTION_NAME,\n",
|
||||
" connection_string=CONNECTION_STRING,\n",
|
||||
")\n",
|
||||
"store.mset([(\"key1\", \"value1\"), (\"key2\", \"value2\")])\n",
|
||||
"print(store.mget([\"key1\", \"key2\"]))\n",
|
||||
"# ['value1', 'value2']\n",
|
||||
"store.mdelete([\"key1\"])\n",
|
||||
"print(list(store.yield_keys()))\n",
|
||||
"# ['key2']\n",
|
||||
"print(list(store.yield_keys(prefix=\"k\")))\n",
|
||||
"# ['key2']\n",
|
||||
"# delete the COLLECTION_NAME collection"
|
||||
]
|
||||
},
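The store contract exercised above (`mset`/`mget`/`mdelete`/`yield_keys`) can be sketched with a plain dict. This is an illustrative in-memory stand-in for what `SQLStrStore` persists in SQL (LangChain's `InMemoryStore` is the real equivalent).

```python
from typing import Iterator, List, Optional, Sequence, Tuple


class DictStrStore:
    """Hypothetical in-memory sketch of the key-value store interface."""

    def __init__(self) -> None:
        self._data: dict = {}

    def mset(self, pairs: Sequence[Tuple[str, str]]) -> None:
        # Store multiple key-value pairs at once
        for key, value in pairs:
            self._data[key] = value

    def mget(self, keys: Sequence[str]) -> List[Optional[str]]:
        # Return values in key order; missing keys yield None
        return [self._data.get(k) for k in keys]

    def mdelete(self, keys: Sequence[str]) -> None:
        for k in keys:
            self._data.pop(k, None)

    def yield_keys(self, prefix: Optional[str] = None) -> Iterator[str]:
        for k in self._data:
            if prefix is None or k.startswith(prefix):
                yield k


store = DictStrStore()
store.mset([("key1", "value1"), ("key2", "value2")])
print(store.mget(["key1", "key2"]))  # ['value1', 'value2']
store.mdelete(["key1"])
print(list(store.yield_keys()))  # ['key2']
```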
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Integration with ParentRetriever and PGVector\n",
|
||||
"\n",
|
||||
"When using PGVector, you already have a SQL instance running. Here is a convenient way of using this instance to store documents associated with vectors. "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Prepare the PGVector vector store with something like this:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_community.vectorstores import PGVector\n",
|
||||
"from langchain_openai import OpenAIEmbeddings"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"embeddings = OpenAIEmbeddings()\n",
|
||||
"vector_db = PGVector.from_existing_index(\n",
|
||||
" embedding=embeddings,\n",
|
||||
" collection_name=COLLECTION_NAME,\n",
|
||||
" connection_string=CONNECTION_STRING,\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Then create the parent retriever using `SQLDocStore` to persist the documents:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.document_loaders import TextLoader\n",
|
||||
"from langchain.retrievers import ParentDocumentRetriever\n",
|
||||
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
|
||||
"from langchain_community.storage import SQLDocStore\n",
|
||||
"\n",
|
||||
"CONNECTION_STRING = \"postgresql+psycopg2://user:pass@localhost:5432/db\"\n",
|
||||
"COLLECTION_NAME = \"state_of_the_union_test\"\n",
|
||||
"docstore = SQLDocStore(\n",
|
||||
" collection_name=COLLECTION_NAME,\n",
|
||||
" connection_string=CONNECTION_STRING,\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"loader = TextLoader(\"./state_of_the_union.txt\")\n",
|
||||
"documents = loader.load()\n",
|
||||
"\n",
|
||||
"parent_splitter = RecursiveCharacterTextSplitter(chunk_size=400)\n",
|
||||
"child_splitter = RecursiveCharacterTextSplitter(chunk_size=50)\n",
|
||||
"\n",
|
||||
"retriever = ParentDocumentRetriever(\n",
|
||||
" vectorstore=vector_db,\n",
|
||||
" docstore=docstore,\n",
|
||||
" child_splitter=child_splitter,\n",
|
||||
" parent_splitter=parent_splitter,\n",
|
||||
")\n",
|
||||
"retriever.add_documents(documents)"
|
||||
]
|
||||
},
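The parent/child mechanics behind `ParentDocumentRetriever` can be sketched without any external services: small child chunks are what get embedded and searched, and each child points back to its larger parent chunk, which is what the retriever ultimately returns. The fixed-width splitter and word-overlap "similarity" below are illustrative assumptions, not the real implementation.

```python
import re
import uuid


def split(text: str, size: int) -> list:
    # Naive fixed-width splitter standing in for RecursiveCharacterTextSplitter
    return [text[i : i + size] for i in range(0, len(text), size)]


def build_index(document: str, parent_size: int = 400, child_size: int = 50):
    docstore = {}     # parent_id -> parent chunk (the SQLDocStore's role)
    child_index = []  # (child chunk, parent_id) pairs (the vector store's role)
    for parent in split(document, parent_size):
        parent_id = str(uuid.uuid4())
        docstore[parent_id] = parent
        for child in split(parent, child_size):
            child_index.append((child, parent_id))
    return docstore, child_index


def retrieve(query: str, docstore: dict, child_index: list) -> str:
    # Word-overlap stand-in for vector similarity: find the best-matching
    # child, then return its larger *parent* chunk.
    q = set(re.findall(r"\w+", query.lower()))

    def score(child: str) -> int:
        return len(q & set(re.findall(r"\w+", child.lower())))

    best_child, parent_id = max(child_index, key=lambda pair: score(pair[0]))
    return docstore[parent_id]
```

Searching over small chunks keeps the embeddings focused, while returning the parent gives the LLM enough surrounding context.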
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Delete a collection"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_community.storage import SQLStrStore\n",
|
||||
"\n",
|
||||
"# delete the COLLECTION_NAME collection\n",
|
||||
"CONNECTION_STRING = \"postgresql+psycopg2://user:pass@localhost:5432/db\"\n",
|
||||
"COLLECTION_NAME = \"test_collection\"\n",
|
||||
"store = SQLStrStore(\n",
|
||||
" collection_name=COLLECTION_NAME,\n",
|
||||
" connection_string=CONNECTION_STRING,\n",
|
||||
")\n",
|
||||
"store.delete_collection()"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.10.1"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
}
|
||||
@@ -9,9 +9,11 @@
|
||||
"\n",
|
||||
"This notebook covers how to get started with [Robocorp Action Server](https://github.com/robocorp/robocorp) action toolkit and LangChain.\n",
|
||||
"\n",
|
||||
"Robocorp is the easiest way to extend the capabilities of AI agents, assistants and copilots with custom actions.\n",
|
||||
"\n",
|
||||
"## Installation\n",
|
||||
"\n",
|
||||
"First, see the [Robocorp Quickstart](https://github.com/robocorp/robocorp#quickstart) on how to setup Action Server and create your Actions.\n",
|
||||
"First, see the [Robocorp Quickstart](https://github.com/robocorp/robocorp#quickstart) on how to set up `Action Server` and create your Actions.\n",
|
||||
"\n",
|
||||
"In your LangChain application, install the `langchain-robocorp` package: "
|
||||
]
|
||||
@@ -20,13 +22,61 @@
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "4c3bef91",
|
||||
"metadata": {},
|
||||
"metadata": {
|
||||
"scrolled": true
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Install package\n",
|
||||
"%pip install --upgrade --quiet langchain-robocorp"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "dd53ad19-4a62-46d1-a2f7-151cfd282590",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"When you create a new `Action Server` following the quickstart above, it will create a directory with files, including `action.py`.\n",
|
||||
"\n",
|
||||
"We can add Python functions as actions as shown [here](https://github.com/robocorp/robocorp/tree/master/actions#describe-your-action).\n",
|
||||
"\n",
|
||||
"Let's add a dummy function to `action.py`.\n",
|
||||
"\n",
|
||||
"```\n",
|
||||
"@action\n",
|
||||
"def get_weather_forecast(city: str, days: int, scale: str = \"celsius\") -> str:\n",
|
||||
" \"\"\"\n",
|
||||
" Returns weather conditions forecast for a given city.\n",
|
||||
"\n",
|
||||
" Args:\n",
|
||||
" city (str): Target city to get the weather conditions for\n",
|
||||
"        days (int): How many days of forecast to return\n",
|
||||
" scale (str): Temperature scale to use, should be one of \"celsius\" or \"fahrenheit\"\n",
|
||||
"\n",
|
||||
" Returns:\n",
|
||||
" str: The requested weather conditions forecast\n",
|
||||
" \"\"\"\n",
|
||||
" return \"75F and sunny :)\"\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"We then start the server:\n",
|
||||
"\n",
|
||||
"```\n",
|
||||
"action-server start\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"And we can see: \n",
|
||||
"\n",
|
||||
"```\n",
|
||||
"Found new action: get_weather_forecast\n",
|
||||
"\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"Test locally by going to the server running at `http://localhost:8080` and using the UI to run the function."
|
||||
]
|
||||
},
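The `@action` pattern above works because the function's signature and docstring carry enough information to describe a tool to an agent. A rough sketch of deriving such a description is below; the details are illustrative, not the `langchain-robocorp` implementation.

```python
import inspect


def describe_action(fn) -> dict:
    # Build a tool description from the function's name, docstring, and
    # parameter annotations (a hypothetical, simplified schema).
    sig = inspect.signature(fn)
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip().splitlines()[0],
        "params": {
            name: getattr(p.annotation, "__name__", "any")
            for name, p in sig.parameters.items()
        },
    }


def get_weather_forecast(city: str, days: int, scale: str = "celsius") -> str:
    """Returns weather conditions forecast for a given city."""
    return "75F and sunny :)"


print(describe_action(get_weather_forecast))
```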
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "2b4f3e15",
|
||||
@@ -38,17 +88,47 @@
|
||||
"\n",
|
||||
"- `LANGCHAIN_TRACING_V2=true`: To enable LangSmith log run tracing that can also be bound to the respective Action Server action run logs. See [LangSmith documentation](https://docs.smith.langchain.com/tracing#log-runs) for more.\n",
|
||||
"\n",
|
||||
"## Usage"
|
||||
"## Usage\n",
|
||||
"\n",
|
||||
"We started the local Action Server above, running on `http://localhost:8080`."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"execution_count": 7,
|
||||
"id": "62e0dbc3",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"\n",
|
||||
"\n",
|
||||
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
|
||||
"\u001b[32;1m\u001b[1;3m\n",
|
||||
"Invoking: `robocorp_action_server_get_weather_forecast` with `{'city': 'San Francisco', 'days': 1, 'scale': 'fahrenheit'}`\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"\u001b[0m\u001b[33;1m\u001b[1;3m\"75F and sunny :)\"\u001b[0m\u001b[32;1m\u001b[1;3mThe current weather today in San Francisco is 75F and sunny.\u001b[0m\n",
|
||||
"\n",
|
||||
"\u001b[1m> Finished chain.\u001b[0m\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'input': 'What is the current weather today in San Francisco in fahrenheit?',\n",
|
||||
" 'output': 'The current weather today in San Francisco is 75F and sunny.'}"
|
||||
]
|
||||
},
|
||||
"execution_count": 7,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from langchain.agents import AgentExecutor, OpenAIFunctionsAgent\n",
|
||||
"from langchain.chat_models import ChatOpenAI\n",
|
||||
@@ -69,8 +149,7 @@
|
||||
"\n",
|
||||
"executor = AgentExecutor(agent=agent, tools=tools, verbose=True)\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"executor.invoke(\"What is the current date?\")"
|
||||
"executor.invoke(\"What is the current weather today in San Francisco in fahrenheit?\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -80,12 +159,14 @@
|
||||
"source": [
|
||||
"### Single input tools\n",
|
||||
"\n",
|
||||
"By default `toolkit.get_tools()` will return the actions as Structured Tools. To return single input tools, pass a Chat model to be used for processing the inputs."
|
||||
"By default `toolkit.get_tools()` will return the actions as Structured Tools. \n",
|
||||
"\n",
|
||||
"To return single input tools, pass a Chat model to be used for processing the inputs."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"execution_count": 9,
|
||||
"id": "1dc7db86",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
@@ -112,7 +193,7 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.10.5"
|
||||
"version": "3.9.16"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
|
||||
File diff suppressed because it is too large
@@ -9,7 +9,7 @@
|
||||
"\n",
|
||||
"This notebook goes over how to use the `arxiv` tool with an agent. \n",
|
||||
"\n",
|
||||
"First, you need to install `arxiv` python package."
|
||||
"First, you need to install the `arxiv` Python package."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -36,20 +36,18 @@
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.agents import AgentType, initialize_agent, load_tools\n",
|
||||
"from langchain import hub\n",
|
||||
"from langchain.agents import AgentExecutor, create_react_agent, load_tools\n",
|
||||
"from langchain_openai import ChatOpenAI\n",
|
||||
"\n",
|
||||
"llm = ChatOpenAI(temperature=0.0)\n",
|
||||
"tools = load_tools(\n",
|
||||
" [\"arxiv\"],\n",
|
||||
")\n",
|
||||
"prompt = hub.pull(\"hwchase17/react\")\n",
|
||||
"\n",
|
||||
"agent_chain = initialize_agent(\n",
|
||||
" tools,\n",
|
||||
" llm,\n",
|
||||
" agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,\n",
|
||||
" verbose=True,\n",
|
||||
")"
|
||||
"agent = create_react_agent(llm, tools, prompt)\n",
|
||||
"agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -67,10 +65,9 @@
|
||||
"\n",
|
||||
"\n",
|
||||
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
|
||||
"\u001b[32;1m\u001b[1;3mI need to use Arxiv to search for the paper.\n",
|
||||
"Action: Arxiv\n",
|
||||
"Action Input: \"1605.08386\"\u001b[0m\n",
|
||||
"Observation: \u001b[36;1m\u001b[1;3mPublished: 2016-05-26\n",
|
||||
"\u001b[32;1m\u001b[1;3mI should use the arxiv tool to search for the paper with the given identifier.\n",
|
||||
"Action: arxiv\n",
|
||||
"Action Input: 1605.08386\u001b[0m\u001b[36;1m\u001b[1;3mPublished: 2016-05-26\n",
|
||||
"Title: Heat-bath random walks with Markov bases\n",
|
||||
"Authors: Caprice Stanley, Tobias Windisch\n",
|
||||
"Summary: Graphs on lattice points are studied whose edges come from a finite set of\n",
|
||||
@@ -79,18 +76,15 @@
|
||||
"then study the mixing behaviour of heat-bath random walks on these graphs. We\n",
|
||||
"also state explicit conditions on the set of moves so that the heat-bath random\n",
|
||||
"walk, a generalization of the Glauber dynamics, is an expander in fixed\n",
|
||||
"dimension.\u001b[0m\n",
|
||||
"Thought:\u001b[32;1m\u001b[1;3mThe paper is about heat-bath random walks with Markov bases on graphs of lattice points.\n",
|
||||
"Final Answer: The paper 1605.08386 is about heat-bath random walks with Markov bases on graphs of lattice points.\u001b[0m\n",
|
||||
"dimension.\u001b[0m\u001b[32;1m\u001b[1;3mThe paper \"1605.08386\" is titled \"Heat-bath random walks with Markov bases\" and is authored by Caprice Stanley and Tobias Windisch. It was published on May 26, 2016. The paper discusses the study of graphs on lattice points with edges coming from a finite set of allowed moves. It explores the diameter of these graphs and the mixing behavior of heat-bath random walks on them. The paper also discusses conditions for the heat-bath random walk to be an expander in fixed dimension.\n",
|
||||
"Final Answer: The paper \"1605.08386\" is about heat-bath random walks with Markov bases.\u001b[0m\n",
|
||||
"\n",
|
||||
"\u001b[1m> Finished chain.\u001b[0m\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'The paper 1605.08386 is about heat-bath random walks with Markov bases on graphs of lattice points.'"
|
||||
]
|
||||
"text/plain": "{'input': \"What's the paper 1605.08386 about?\",\n 'output': 'The paper \"1605.08386\" is about heat-bath random walks with Markov bases.'}"
|
||||
},
|
||||
"execution_count": 3,
|
||||
"metadata": {},
|
||||
@@ -98,8 +92,10 @@
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"agent_chain.run(\n",
|
||||
" \"What's the paper 1605.08386 about?\",\n",
|
||||
"agent_executor.invoke(\n",
|
||||
" {\n",
|
||||
" \"input\": \"What's the paper 1605.08386 about?\",\n",
|
||||
" }\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
@@ -130,15 +126,15 @@
|
||||
"id": "c89c110c-96ac-4fe1-ba3e-6056543d1a59",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Run a query to get information about some `scientific article`/articles. The query text is limited to 300 characters.\n",
|
||||
"You can use the ArxivAPIWrapper to get information about a scientific article or articles. The query text is limited to 300 characters.\n",
|
||||
"\n",
|
||||
"It returns these article fields:\n",
|
||||
"The ArxivAPIWrapper returns these article fields:\n",
|
||||
"- Publishing date\n",
|
||||
"- Title\n",
|
||||
"- Authors\n",
|
||||
"- Summary\n",
|
||||
"\n",
|
||||
"Next query returns information about one article with arxiv Id equal \"1605.08386\". "
|
||||
"The following query returns information about one article with the arxiv ID \"1605.08386\". "
|
||||
]
|
||||
},
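Each result from the ArxivAPIWrapper comes back as a single formatted string containing the fields listed above. If you need the fields individually, a small helper (hypothetical, not part of LangChain) can split the string back apart:

```python
def parse_arxiv_result(result: str) -> dict:
    """Split one ArxivAPIWrapper result string into its labelled fields.

    Illustrative only: the wrapper itself returns plain text in the
    'Published: ... / Title: ... / Authors: ... / Summary: ...' shape
    shown in the outputs below.
    """
    fields = {}
    current = None
    for line in result.splitlines():
        for key in ("Published", "Title", "Authors", "Summary"):
            if line.startswith(key + ": "):
                current = key
                fields[key] = line[len(key) + 2 :]
                break
        else:
            # Continuation lines (the Summary wraps) extend the last field.
            if current is not None:
                fields[current] += " " + line
    return fields
```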
|
||||
{
|
||||
@@ -151,9 +147,7 @@
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'Published: 2016-05-26\\nTitle: Heat-bath random walks with Markov bases\\nAuthors: Caprice Stanley, Tobias Windisch\\nSummary: Graphs on lattice points are studied whose edges come from a finite set of\\nallowed moves of arbitrary length. We show that the diameter of these graphs on\\nfibers of a fixed integer matrix can be bounded from above by a constant. We\\nthen study the mixing behaviour of heat-bath random walks on these graphs. We\\nalso state explicit conditions on the set of moves so that the heat-bath random\\nwalk, a generalization of the Glauber dynamics, is an expander in fixed\\ndimension.'"
|
||||
]
|
||||
"text/plain": "'Published: 2016-05-26\\nTitle: Heat-bath random walks with Markov bases\\nAuthors: Caprice Stanley, Tobias Windisch\\nSummary: Graphs on lattice points are studied whose edges come from a finite set of\\nallowed moves of arbitrary length. We show that the diameter of these graphs on\\nfibers of a fixed integer matrix can be bounded from above by a constant. We\\nthen study the mixing behaviour of heat-bath random walks on these graphs. We\\nalso state explicit conditions on the set of moves so that the heat-bath random\\nwalk, a generalization of the Glauber dynamics, is an expander in fixed\\ndimension.'"
|
||||
},
|
||||
"execution_count": 5,
|
||||
"metadata": {},
|
||||
@@ -186,9 +180,7 @@
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'Published: 2017-10-10\\nTitle: On Mixing Behavior of a Family of Random Walks Determined by a Linear Recurrence\\nAuthors: Caprice Stanley, Seth Sullivant\\nSummary: We study random walks on the integers mod $G_n$ that are determined by an\\ninteger sequence $\\\\{ G_n \\\\}_{n \\\\geq 1}$ generated by a linear recurrence\\nrelation. Fourier analysis provides explicit formulas to compute the\\neigenvalues of the transition matrices and we use this to bound the mixing time\\nof the random walks.\\n\\nPublished: 2016-05-26\\nTitle: Heat-bath random walks with Markov bases\\nAuthors: Caprice Stanley, Tobias Windisch\\nSummary: Graphs on lattice points are studied whose edges come from a finite set of\\nallowed moves of arbitrary length. We show that the diameter of these graphs on\\nfibers of a fixed integer matrix can be bounded from above by a constant. We\\nthen study the mixing behaviour of heat-bath random walks on these graphs. We\\nalso state explicit conditions on the set of moves so that the heat-bath random\\nwalk, a generalization of the Glauber dynamics, is an expander in fixed\\ndimension.\\n\\nPublished: 2003-03-18\\nTitle: Calculation of fluxes of charged particles and neutrinos from atmospheric showers\\nAuthors: V. Plyaskin\\nSummary: The results on the fluxes of charged particles and neutrinos from a\\n3-dimensional (3D) simulation of atmospheric showers are presented. An\\nagreement of calculated fluxes with data on charged particles from the AMS and\\nCAPRICE detectors is demonstrated. Predictions on neutrino fluxes at different\\nexperimental sites are compared with results from other calculations.'"
|
||||
]
|
||||
"text/plain": "'Published: 2017-10-10\\nTitle: On Mixing Behavior of a Family of Random Walks Determined by a Linear Recurrence\\nAuthors: Caprice Stanley, Seth Sullivant\\nSummary: We study random walks on the integers mod $G_n$ that are determined by an\\ninteger sequence $\\\\{ G_n \\\\}_{n \\\\geq 1}$ generated by a linear recurrence\\nrelation. Fourier analysis provides explicit formulas to compute the\\neigenvalues of the transition matrices and we use this to bound the mixing time\\nof the random walks.\\n\\nPublished: 2016-05-26\\nTitle: Heat-bath random walks with Markov bases\\nAuthors: Caprice Stanley, Tobias Windisch\\nSummary: Graphs on lattice points are studied whose edges come from a finite set of\\nallowed moves of arbitrary length. We show that the diameter of these graphs on\\nfibers of a fixed integer matrix can be bounded from above by a constant. We\\nthen study the mixing behaviour of heat-bath random walks on these graphs. We\\nalso state explicit conditions on the set of moves so that the heat-bath random\\nwalk, a generalization of the Glauber dynamics, is an expander in fixed\\ndimension.\\n\\nPublished: 2003-03-18\\nTitle: Calculation of fluxes of charged particles and neutrinos from atmospheric showers\\nAuthors: V. Plyaskin\\nSummary: The results on the fluxes of charged particles and neutrinos from a\\n3-dimensional (3D) simulation of atmospheric showers are presented. An\\nagreement of calculated fluxes with data on charged particles from the AMS and\\nCAPRICE detectors is demonstrated. Predictions on neutrino fluxes at different\\nexperimental sites are compared with results from other calculations.'"
|
||||
},
|
||||
"execution_count": 6,
|
||||
"metadata": {},
|
||||
@@ -218,9 +210,7 @@
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'No good Arxiv Result was found'"
|
||||
]
|
||||
"text/plain": "'No good Arxiv Result was found'"
|
||||
},
|
||||
"execution_count": 7,
|
||||
"metadata": {},
|
||||
|
||||
@@ -28,6 +28,9 @@
|
||||
"1. You must install and set up the JaguarDB server and its HTTP gateway server.\n",
|
||||
" Please refer to the instructions in:\n",
|
||||
" [www.jaguardb.com](http://www.jaguardb.com)\n",
|
||||
" For a quick setup in a Docker environment:\n",
|
||||
" docker pull jaguardb/jaguardb_with_http\n",
|
||||
" docker run -d -p 8888:8888 -p 8080:8080 --name jaguardb_with_http jaguardb/jaguardb_with_http\n",
|
||||
"\n",
|
||||
"2. You must install the http client package for JaguarDB:\n",
|
||||
" ```\n",
|
||||
@@ -126,6 +129,8 @@
|
||||
"Add the texts from the text splitter to our vectorstore\n",
|
||||
"\"\"\"\n",
|
||||
"vectorstore.add_documents(docs)\n",
|
||||
"# or tag the documents:\n",
|
||||
"# vectorstore.add_documents(more_docs, text_tag=\"tags to these documents\")\n",
|
||||
"\n",
|
||||
"\"\"\" Get the retriever object \"\"\"\n",
|
||||
"retriever = vectorstore.as_retriever()\n",
|
||||
|
||||
510
docs/docs/integrations/vectorstores/kdbai.ipynb
Normal file
@@ -0,0 +1,510 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "08b3f3a3-7542-4d39-a9a1-f66e50ec3c0f",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# KDB.AI\n",
|
||||
"\n",
|
||||
"> [KDB.AI](https://kdb.ai/) is a powerful knowledge-based vector database and search engine that allows you to build scalable, reliable AI applications, using real-time data, by providing advanced search, recommendation and personalization.\n",
|
||||
"\n",
|
||||
"[This example](https://github.com/KxSystems/kdbai-samples/blob/main/document_search/document_search.ipynb) demonstrates how to use KDB.AI to run semantic search on unstructured text documents.\n",
|
||||
"\n",
|
||||
"To access your end point and API keys, [sign up to KDB.AI here](https://kdb.ai/get-started/).\n",
|
||||
"\n",
|
||||
"To set up your development environment, follow the instructions on the [KDB.AI pre-requisites page](https://code.kx.com/kdbai/pre-requisites.html).\n",
|
||||
"\n",
|
||||
"The following examples demonstrate some of the ways you can interact with KDB.AI through LangChain.\n",
|
||||
"\n",
|
||||
"## Import required packages"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"id": "2704194d-c42d-463d-b162-fb95262e052c",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import os\n",
|
||||
"import time\n",
|
||||
"from getpass import getpass\n",
|
||||
"\n",
|
||||
"import kdbai_client as kdbai\n",
|
||||
"import pandas as pd\n",
|
||||
"import requests\n",
|
||||
"from langchain.chains import RetrievalQA\n",
|
||||
"from langchain.document_loaders import PyPDFLoader\n",
|
||||
"from langchain_community.vectorstores import KDBAI\n",
|
||||
"from langchain_openai import ChatOpenAI, OpenAIEmbeddings"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"id": "04848fcf-e128-4d63-af6c-b3991531d62e",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdin",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"KDB.AI endpoint: https://ui.qa.cld.kx.com/instance/pcnvlmi860\n",
|
||||
"KDB.AI API key: ········\n",
|
||||
"OpenAI API Key: ········\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"KDBAI_ENDPOINT = input(\"KDB.AI endpoint: \")\n",
|
||||
"KDBAI_API_KEY = getpass(\"KDB.AI API key: \")\n",
|
||||
"os.environ[\"OPENAI_API_KEY\"] = getpass(\"OpenAI API Key: \")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"id": "d08a1468-6bff-4a65-8b4a-9835cfa997ad",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"TEMP = 0.0\n",
|
||||
"K = 3"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "63a111d8-2422-4d33-85c0-bc95d25e330a",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Create a KDB.AI Session"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"id": "9ffe4fee-2dc3-4943-917b-28adc3a69472",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Create a KDB.AI session...\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"print(\"Create a KDB.AI session...\")\n",
|
||||
"session = kdbai.Session(endpoint=KDBAI_ENDPOINT, api_key=KDBAI_API_KEY)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "a2ea7e87-f65c-43d9-bc67-be7bda86def2",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Create a table"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"id": "da27f31c-890e-46c0-8e01-1b8474ee3a70",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Create table \"documents\"...\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"print('Create table \"documents\"...')\n",
|
||||
"schema = {\n",
|
||||
" \"columns\": [\n",
|
||||
" {\"name\": \"id\", \"pytype\": \"str\"},\n",
|
||||
" {\"name\": \"text\", \"pytype\": \"bytes\"},\n",
|
||||
" {\n",
|
||||
" \"name\": \"embeddings\",\n",
|
||||
" \"pytype\": \"float32\",\n",
|
||||
" \"vectorIndex\": {\"dims\": 1536, \"metric\": \"L2\", \"type\": \"hnsw\"},\n",
|
||||
" },\n",
|
||||
" {\"name\": \"tag\", \"pytype\": \"str\"},\n",
|
||||
" {\"name\": \"title\", \"pytype\": \"bytes\"},\n",
|
||||
" ]\n",
|
||||
"}\n",
|
||||
"table = session.create_table(\"documents\", schema)"
|
||||
]
|
||||
},
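A common failure mode with vector stores is a mismatch between the index dimensions and the embedding model: `text-embedding-ada-002` (used below) produces 1536-dimensional vectors, so `dims` in the schema must be 1536. A quick local sanity check on the schema dict, runnable without a KDB.AI server:

```python
# Same schema as the cell above, checked locally before creating the table.
schema = {
    "columns": [
        {"name": "id", "pytype": "str"},
        {"name": "text", "pytype": "bytes"},
        {
            "name": "embeddings",
            "pytype": "float32",
            "vectorIndex": {"dims": 1536, "metric": "L2", "type": "hnsw"},
        },
        {"name": "tag", "pytype": "str"},
        {"name": "title", "pytype": "bytes"},
    ]
}

EXPECTED_DIMS = 1536  # output size of OpenAI's text-embedding-ada-002

vector_cols = [c for c in schema["columns"] if "vectorIndex" in c]
assert len(vector_cols) == 1, "exactly one vector-indexed column expected"
assert vector_cols[0]["vectorIndex"]["dims"] == EXPECTED_DIMS
```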
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"id": "930ba64a-1cf9-4892-9335-8745c830497c",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"CPU times: user 44.1 ms, sys: 6.04 ms, total: 50.2 ms\n",
|
||||
"Wall time: 213 ms\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"562978"
|
||||
]
|
||||
},
|
||||
"execution_count": 6,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"%%time\n",
|
||||
"URL = 'https://www.conseil-constitutionnel.fr/node/3850/pdf'\n",
|
||||
"PDF = 'Déclaration_des_droits_de_l_homme_et_du_citoyen.pdf'\n",
|
||||
"open(PDF, 'wb').write(requests.get(URL).content)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "0f7da153-e7d4-4a4c-b044-ad7b4d893c7f",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Read a PDF"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"id": "00873e6b-f204-4dca-b82b-1c45d0b83ee5",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Read a PDF...\n",
|
||||
"CPU times: user 156 ms, sys: 12.5 ms, total: 169 ms\n",
|
||||
"Wall time: 183 ms\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"3"
|
||||
]
|
||||
},
|
||||
"execution_count": 7,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"%%time\n",
|
||||
"print('Read a PDF...')\n",
|
||||
"loader = PyPDFLoader(PDF)\n",
|
||||
"pages = loader.load_and_split()\n",
|
||||
"len(pages)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "3536c7db-0db7-446a-b61e-149fd3c2d1d8",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Create a Vector Database from PDF Text"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"id": "b06d4a96-c3d5-426b-9e22-12925b14e5e6",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Create a Vector Database from PDF text...\n",
|
||||
"CPU times: user 211 ms, sys: 18.4 ms, total: 229 ms\n",
|
||||
"Wall time: 2.23 s\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"['3ef27d23-47cf-419b-8fe9-5dfae9e8e895',\n",
|
||||
" 'd3a9a69d-28f5-434b-b95b-135db46695c8',\n",
|
||||
" 'd2069bda-c0b8-4791-b84d-0c6f84f4be34']"
|
||||
]
|
||||
},
|
||||
"execution_count": 8,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"%%time\n",
|
||||
"print('Create a Vector Database from PDF text...')\n",
|
||||
"embeddings = OpenAIEmbeddings(model='text-embedding-ada-002')\n",
|
||||
"texts = [p.page_content for p in pages]\n",
|
||||
"metadata = pd.DataFrame(index=list(range(len(texts))))\n",
|
||||
"metadata['tag'] = 'law'\n",
|
||||
"metadata['title'] = 'Déclaration des Droits de l\\'Homme et du Citoyen de 1789'.encode('utf-8')\n",
|
||||
"vectordb = KDBAI(table, embeddings)\n",
|
||||
"vectordb.add_texts(texts=texts, metadatas=metadata)"
|
||||
]
|
||||
},
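Note that the metadata DataFrame is aligned positionally with `texts`: row i of the DataFrame becomes the metadata of text i, so the two must stay the same length and order. A minimal standalone sketch of that pattern (pandas only, no server needed):

```python
import pandas as pd

# Placeholder texts standing in for the PDF pages above.
texts = ["page one", "page two", "page three"]

# One metadata row per text, in the same order; scalar values broadcast
# to every row, exactly as in the notebook cell above.
metadata = pd.DataFrame(index=range(len(texts)))
metadata["tag"] = "law"
metadata["title"] = "Déclaration des Droits de l'Homme et du Citoyen de 1789".encode("utf-8")

# Positional alignment: row i annotates texts[i].
assert len(metadata) == len(texts)
```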
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "3b658f9a-61dd-4a88-9bcb-4651992f610d",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Create LangChain Pipeline"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 9,
|
||||
"id": "6d848577-1192-4bb0-b721-37f52be5d9d0",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Create LangChain Pipeline...\n",
|
||||
"CPU times: user 40.8 ms, sys: 4.69 ms, total: 45.5 ms\n",
|
||||
"Wall time: 44.7 ms\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"%%time\n",
|
||||
"print('Create LangChain Pipeline...')\n",
|
||||
"qabot = RetrievalQA.from_chain_type(chain_type='stuff',\n",
|
||||
" llm=ChatOpenAI(model='gpt-3.5-turbo-16k', temperature=TEMP), \n",
|
||||
" retriever=vectordb.as_retriever(search_kwargs=dict(k=K)),\n",
|
||||
" return_source_documents=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "21113a5e-d72d-4a44-9714-6b23ec95b755",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Summarize the document in English"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 10,
|
||||
"id": "81668f8f-a416-4b58-93d2-8e0924ceca23",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"\n",
|
||||
"\n",
|
||||
"Summarize the document in English:\n",
|
||||
"\n",
|
||||
"The document is the Declaration of the Rights of Man and of the Citizen of 1789. It was written by the representatives of the French people and aims to declare the natural, inalienable, and sacred rights of every individual. These rights include freedom, property, security, and resistance to oppression. The document emphasizes the importance of equality and the principle that sovereignty resides in the nation. It also highlights the role of law in protecting individual rights and ensuring the common good. The document asserts the right to freedom of thought, expression, and religion, as long as it does not disturb public order. It emphasizes the need for a public force to guarantee the rights of all citizens and the importance of a fair and equal distribution of public contributions. The document also recognizes the right of citizens to hold public officials accountable and states that any society without the guarantee of rights and separation of powers does not have a constitution. Finally, it affirms the inviolable and sacred nature of property, stating that it can only be taken away for public necessity and with just compensation.\n",
|
||||
"CPU times: user 144 ms, sys: 50.2 ms, total: 194 ms\n",
|
||||
"Wall time: 4.96 s\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"%%time\n",
|
||||
"Q = 'Summarize the document in English:'\n",
|
||||
"print(f'\\n\\n{Q}\\n')\n",
|
||||
"print(qabot.invoke(dict(query=Q))['result'])"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "9ce7667e-8c89-466c-8040-9ba62f3e57ec",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Query the Data"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 11,
|
||||
"id": "e02a7acb-99ac-48f8-b93c-d95a8f9e87d4",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"\n",
|
||||
"\n",
|
||||
"Is it a fair law and why ?\n",
|
||||
"\n",
|
||||
"As an AI language model, I don't have personal opinions. However, I can provide some analysis based on the given context. The text provided is an excerpt from the Declaration of the Rights of Man and of the Citizen of 1789, which is considered a foundational document in the history of human rights. It outlines the natural and inalienable rights of individuals, such as freedom, property, security, and resistance to oppression. It also emphasizes the principles of equality, the rule of law, and the separation of powers. \n",
|
||||
"\n",
|
||||
"Whether or not this law is considered fair is subjective and can vary depending on individual perspectives and societal norms. However, many consider the principles and rights outlined in this declaration to be fundamental and just. It is important to note that this declaration was a significant step towards establishing principles of equality and individual rights in France and has influenced subsequent human rights documents worldwide.\n",
|
||||
"CPU times: user 85.1 ms, sys: 5.93 ms, total: 91.1 ms\n",
|
||||
"Wall time: 5.11 s\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"%%time\n",
|
||||
"Q = 'Is it a fair law and why ?'\n",
|
||||
"print(f'\\n\\n{Q}\\n')\n",
|
||||
"print(qabot.invoke(dict(query=Q))['result'])"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 12,
|
||||
"id": "24dc85bd-cd35-4fb3-9d01-e00a896fd9a1",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"\n",
|
||||
"\n",
|
||||
"What are the rights and duties of the man, the citizen and the society ?\n",
|
||||
"\n",
|
||||
"According to the Declaration of the Rights of Man and of the Citizen of 1789, the rights and duties of man, citizen, and society are as follows:\n",
|
||||
"\n",
|
||||
"Rights of Man:\n",
|
||||
"1. Men are born and remain free and equal in rights. Social distinctions can only be based on common utility.\n",
|
||||
"2. The purpose of political association is the preservation of the natural and imprescriptible rights of man, which are liberty, property, security, and resistance to oppression.\n",
|
||||
"3. The principle of sovereignty resides essentially in the nation. No body or individual can exercise any authority that does not emanate expressly from it.\n",
|
||||
"4. Liberty consists of being able to do anything that does not harm others. The exercise of natural rights of each man has no limits other than those that ensure the enjoyment of these same rights by other members of society. These limits can only be determined by law.\n",
|
||||
"5. The law has the right to prohibit only actions harmful to society. Anything not prohibited by law cannot be prevented, and no one can be compelled to do what it does not command.\n",
|
||||
"6. The law is the expression of the general will. All citizens have the right to participate personally, or through their representatives, in its formation. It must be the same for all, whether it protects or punishes. All citizens, being equal in its eyes, are equally eligible to all public dignities, places, and employments, according to their abilities, and without other distinction than that of their virtues and talents.\n",
|
||||
"7. No man can be accused, arrested, or detained except in cases determined by law and according to the forms it has prescribed. Those who solicit, expedite, execute, or cause to be executed arbitrary orders must be punished. But any citizen called or seized in virtue of the law must obey instantly; he renders himself culpable by resistance.\n",
|
||||
"8. The law should establish only strictly and evidently necessary penalties, and no one can be punished except in virtue of a law established and promulgated prior to the offense, and legally applied.\n",
|
||||
"9. Every man being presumed innocent until he has been declared guilty, if it is judged indispensable to arrest him, any rigor that is not necessary to secure his person must be severely repressed by the law.\n",
|
||||
"10. No one should be disturbed for his opinions, even religious ones, as long as their manifestation does not disturb the established public order by law.\n",
|
||||
"11. The free communication of ideas and opinions is one of the most precious rights of man. Every citizen may therefore speak, write, and print freely, except to respond to the abuse of this liberty in cases determined by law.\n",
|
||||
"12. The guarantee of the rights of man and of the citizen requires a public force. This force is therefore instituted for the advantage of all and not for the particular utility of those to whom it is entrusted.\n",
|
||||
"13. For the maintenance of the public force and for the expenses of administration, a common contribution is necessary. It must be equally distributed among all citizens, in proportion to their abilities.\n",
|
||||
"14. All citizens have the right to ascertain, by themselves or through their representatives, the necessity of the public contribution, to consent to it freely, to follow its use, and to determine its amount, basis, collection, and duration.\n",
|
||||
"15. Society has the right to ask any public agent for an account of his administration.\n",
|
||||
"16. Any society in which the guarantee of rights is not assured, nor the separation of powers determined, has no constitution.\n",
|
||||
"17. Property being an inviolable and sacred right, no one can be deprived of it, except when public necessity, legally determined, obviously requires it, and under the condition of a just and prior indemnity.\n",
|
||||
"\n",
|
||||
"Duties of the Citizen:\n",
|
||||
"The Declaration does not explicitly mention the duties of the citizen, but it emphasizes the importance of the law and the need for citizens to obey it. It also highlights the responsibility of citizens to participate in the formation of the law and to hold public agents accountable for their administration.\n",
|
||||
"\n",
|
||||
"Duties of Society:\n",
|
||||
"The Declaration does not explicitly mention the duties of society, but it emphasizes the importance of guaranteeing the rights of man and of the citizen. Society is responsible for ensuring the preservation of these rights and the separation of powers, as well as providing a public force for the advantage of all.\n",
|
||||
"CPU times: user 86.5 ms, sys: 5.45 ms, total: 92 ms\n",
|
||||
"Wall time: 14.9 s\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"%%time\n",
|
||||
"Q = 'What are the rights and duties of the man, the citizen and the society ?'\n",
|
||||
"print(f'\\n\\n{Q}\\n')\n",
|
||||
"print(qabot.invoke(dict(query=Q))['result'])"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 13,
|
||||
"id": "d55b7980-a3d3-45d7-a310-99492edb098b",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"\n",
|
||||
"\n",
|
||||
"Is this law practical ?\n",
|
||||
"\n",
|
||||
"As an AI language model, I cannot provide personal opinions or subjective judgments on whether a law is practical or not. The texts provided are excerpts from the French Constitution and the Declaration of the Rights of Man and of the Citizen of 1789. These texts outline fundamental rights and principles that form the basis of the French legal system. The practicality of a law is often a matter of interpretation and can vary depending on the context and specific circumstances. It is ultimately up to legal experts, lawmakers, and the judiciary to determine the practicality and application of these laws in specific cases.\n",
|
||||
"CPU times: user 91.4 ms, sys: 5.89 ms, total: 97.3 ms\n",
|
||||
"Wall time: 2.78 s\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"%%time\n",
|
||||
"Q = 'Is this law practical ?'\n",
|
||||
"print(f'\\n\\n{Q}\\n')\n",
|
||||
"print(qabot.invoke(dict(query=Q))['result'])"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "5f9d0a3c-4941-4f65-b6b8-aefe4f6abd14",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Clean up the Documents table"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 14,
|
||||
"id": "cdddda29-e28d-423f-b1c6-f77d39acc3dd",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"True"
|
||||
]
|
||||
},
|
||||
"execution_count": 14,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# Clean up KDB.AI \"documents\" table and index for similarity search\n",
|
||||
"# so this notebook could be played again and again\n",
|
||||
"session.table(\"documents\").drop()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "23cb1359-f32c-4b47-a885-cbf3cbae5b14",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.10.12"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
||||
@@ -68,7 +68,44 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"execution_count": 19,
|
||||
"id": "0fda552b",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Collecting tika\n",
|
||||
" Downloading tika-2.6.0.tar.gz (27 kB)\n",
|
||||
" Preparing metadata (setup.py) ... \u001b[?25ldone\n",
|
||||
"\u001b[?25hRequirement already satisfied: setuptools in /Users/omaraly/anaconda3/lib/python3.11/site-packages (from tika) (68.2.2)\n",
|
||||
"Requirement already satisfied: requests in /Users/omaraly/anaconda3/lib/python3.11/site-packages (from tika) (2.31.0)\n",
|
||||
"Requirement already satisfied: charset-normalizer<4,>=2 in /Users/omaraly/anaconda3/lib/python3.11/site-packages (from requests->tika) (2.1.1)\n",
|
||||
"Requirement already satisfied: idna<4,>=2.5 in /Users/omaraly/anaconda3/lib/python3.11/site-packages (from requests->tika) (3.4)\n",
|
||||
"Requirement already satisfied: urllib3<3,>=1.21.1 in /Users/omaraly/anaconda3/lib/python3.11/site-packages (from requests->tika) (1.26.16)\n",
|
||||
"Requirement already satisfied: certifi>=2017.4.17 in /Users/omaraly/anaconda3/lib/python3.11/site-packages (from requests->tika) (2022.12.7)\n",
|
||||
"Building wheels for collected packages: tika\n",
|
||||
" Building wheel for tika (setup.py) ... \u001b[?25ldone\n",
|
||||
"\u001b[?25h Created wheel for tika: filename=tika-2.6.0-py3-none-any.whl size=32621 sha256=b3f03c9dbd7f347d712c49027704d48f1a368f31560be9b4ee131f79a52e176f\n",
|
||||
" Stored in directory: /Users/omaraly/Library/Caches/pip/wheels/27/ba/2f/37420d1191bdae5e855d69b8e913673045bfd395cbd78ad697\n",
|
||||
"Successfully built tika\n",
|
||||
"Installing collected packages: tika\n",
|
||||
"Successfully installed tika-2.6.0\n",
|
||||
"\n",
|
||||
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.3.1\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m23.3.2\u001b[0m\n",
|
||||
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n",
|
||||
"Note: you may need to restart the kernel to use updated packages.\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"%pip install tika"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 37,
|
||||
"id": "920f4644",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
@@ -100,7 +137,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 39,
"id": "a8c513ab",
"metadata": {
"ExecuteTime": {
@@ -117,7 +154,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 40,
"id": "fc516993",
"metadata": {
"ExecuteTime": {
@@ -131,29 +168,37 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Others may not be democratic but nevertheless depend upon a rules-based international system.\n",
"6 N A T I O N A L S E C U R I T Y S T R A T E G Y Page 7 \n",
"\n",
"Yet what we share in common, and the prospect of a freer and more open world, makes such a broad coalition necessary and worthwhile.\n",
"This National Security Strategy lays out our plan to achieve a better future of a free, open, secure, and prosperous world.\n",
"\n",
"We will listen to and consider ideas that our partners suggest about how to do this.\n",
"Our strategy is rooted in our national interests: to protect the security of the American people; to expand economic prosperity and opportunity; and to realize and defend the democratic values at the heart of the American way of life.\n",
"\n",
"Building this inclusive coalition requires reinforcing the multilateral system to uphold the founding principles of the United Nations, including respect for international law.\n",
"We can do none of this alone and we do not have to.\n",
"\n",
"141 countries expressed support at the United Nations General Assembly for a resolution condemning Russia’s unprovoked aggression against Ukraine.\n",
"Most nations around the world define their interests in ways that are compatible with ours.\n",
"\n",
"We continue to demonstrate this approach by engaging all regions across all issues, not in terms of what we are against but what we are for.\n",
"We will build the strongest and broadest possible coalition of nations that seek to cooperate with each other, while competing with those powers that offer a darker vision and thwarting their efforts to threaten our interests.\n",
"\n",
"This year, we partnered with ASEAN to advance clean energy infrastructure and maritime security in the region.\n",
"Our Enduring Role The need for a strong and purposeful American role in the world has never been greater.\n",
"\n",
"We kickstarted the Prosper Africa Build Together Campaign to fuel economic growth across the continent and bolster trade and investment in the clean energy, health, and digital technology sectors.\n",
"The world is becoming more divided and unstable.\n",
"\n",
"We are working to develop a partnership with countries on the Atlantic Ocean to establish and carry out a shared approach to advancing our joint development, economic, environmental, scientific, and maritime governance goals.\n",
"Global increases in inflation since the COVID-19 pandemic began have made life more difficult for many.\n",
"\n",
"We galvanized regional action to address the core challenges facing the Western Hemisphere by spearheading the Americas Partnership for Economic Prosperity to drive economic recovery and by mobilizing the region behind a bold and unprecedented approach to migration through the Los Angeles Declaration on Migration and Protection.\n",
"The basic laws and principles governing relations among nations, including the United Nations Charter and the protection it affords all states from being invaded by their neighbors or having their borders redrawn by force, are under attack.\n",
"\n",
"In the Middle East, we have worked to enhance deterrence toward Iran, de-escalate regional conflicts, deepen integration among a diverse set of partners in the region, and bolster energy stability.\n",
"The risk of conflict between major powers is increasing.\n",
"\n",
"A prime example of an inclusive coalition is IPEF, which we launched alongside a dozen regional partners that represent 40 percent of the world’s GDP.\n"
"Democracies and autocracies are engaged in a contest to show which system of governance can best deliver for their people and the world.\n",
"\n",
"Competition to develop and deploy foundational technologies that will transform our security and economy is intensifying.\n",
"\n",
"Global cooperation on shared interests has frayed, even as the need for that cooperation takes on existential importance.\n",
"\n",
"The scale of these changes grows with each passing year, as do the risks of inaction.\n",
"\n",
"Although the international environment has become more contested, the United States remains the world’s leading power.\n"
]
}
],
@@ -173,7 +218,7 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 41,
"id": "8804a21d",
"metadata": {
"ExecuteTime": {
@@ -192,7 +237,7 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 42,
"id": "756a6887",
"metadata": {
"ExecuteTime": {
@@ -251,7 +296,7 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 43,
"id": "9427195f",
"metadata": {
"ExecuteTime": {
@@ -263,10 +308,10 @@
{
"data": {
"text/plain": [
"LLMRailsRetriever(tags=None, metadata=None, vectorstore=<langchain_community.vectorstores.llm_rails.LLMRails object at 0x107b9c040>, search_type='similarity', search_kwargs={'k': 5})"
"LLMRailsRetriever(vectorstore=<langchain_community.vectorstores.llm_rails.LLMRails object at 0x1235b0e50>)"
]
},
"execution_count": 10,
"execution_count": 43,
"metadata": {},
"output_type": "execute_result"
}
@@ -278,7 +323,7 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 44,
"id": "f3c70c31",
"metadata": {
"ExecuteTime": {
@@ -290,17 +335,21 @@
{
"data": {
"text/plain": [
"Document(page_content='But we will do so as the last resort and only when the objectives and mission are clear and achievable, consistent with our values and laws, alongside non-military tools, and the mission is undertaken with the informed consent of the American people.\\n\\nOur approach to national defense is described in detail in the 2022 National Defense Strategy.\\n\\nOur starting premise is that a powerful U.S. military helps advance and safeguard vital U.S. national interests by backstopping diplomacy, confronting aggression, deterring conflict, projecting strength, and protecting the American people and their economic interests.\\n\\nAmid intensifying competition, the military’s role is to maintain and gain warfighting advantages while limiting those of our competitors.\\n\\nThe military will act urgently to sustain and strengthen deterrence, with the PRC as its pacing challenge.\\n\\nWe will make disciplined choices regarding our national defense and focus our attention on the military’s primary responsibilities: to defend the homeland, and deter attacks and aggression against the United States, our allies and partners, while being prepared to fight and win the Nation’s wars should diplomacy and deterrence fail.\\n\\nTo do so, we will combine our strengths to achieve maximum effect in deterring acts of aggression—an approach we refer to as integrated deterrence (see text box on page 22).\\n\\nWe will operate our military using a campaigning mindset—sequencing logically linked military activities to advance strategy-aligned priorities.\\n\\nAnd, we will build a resilient force and defense ecosystem to ensure we can perform these functions for decades to come.\\n\\nWe ended America’s longest war in Afghanistan, and with it an era of major military operations to remake other societies, even as we have maintained the capacity to address terrorist threats to the American people as they emerge.\\n\\n20 NATIONAL SECURITY STRATEGY Page 21 \\x90\\x90\\x90\\x90\\x90\\x90\\n\\nA combat-credible military is the foundation of deterrence and America’s ability to prevail in conflict.', metadata={'type': 'file', 'url': 'https://cdn.llmrails.com/dst_d94b490c-4638-4247-ad5e-9aa0e7ef53c1/c2d63a2ea3cd406cb522f8312bc1535d', 'name': 'Biden-Harris-Administrations-National-Security-Strategy-10.2022.pdf'})"
"[Document(page_content='But we will do so as the last resort and only when the objectives and mission are clear and achievable, consistent with our values and laws, alongside non-military tools, and the mission is undertaken with the informed consent of the American people.\\n\\nOur approach to national defense is described in detail in the 2022 National Defense Strategy.\\n\\nOur starting premise is that a powerful U.S. military helps advance and safeguard vital U.S. national interests by backstopping diplomacy, confronting aggression, deterring conflict, projecting strength, and protecting the American people and their economic interests.\\n\\nAmid intensifying competition, the military’s role is to maintain and gain warfighting advantages while limiting those of our competitors.\\n\\nThe military will act urgently to sustain and strengthen deterrence, with the PRC as its pacing challenge.\\n\\nWe will make disciplined choices regarding our national defense and focus our attention on the military’s primary responsibilities: to defend the homeland, and deter attacks and aggression against the United States, our allies and partners, while being prepared to fight and win the Nation’s wars should diplomacy and deterrence fail.\\n\\nTo do so, we will combine our strengths to achieve maximum effect in deterring acts of aggression—an approach we refer to as integrated deterrence (see text box on page 22).\\n\\nWe will operate our military using a campaigning mindset—sequencing logically linked military activities to advance strategy-aligned priorities.\\n\\nAnd, we will build a resilient force and defense ecosystem to ensure we can perform these functions for decades to come.\\n\\nWe ended America’s longest war in Afghanistan, and with it an era of major military operations to remake other societies, even as we have maintained the capacity to address terrorist threats to the American people as they emerge.\\n\\n20 NATIONAL SECURITY STRATEGY Page 21 \\x90\\x90\\x90\\x90\\x90\\x90\\n\\nA combat-credible military is the foundation of deterrence and America’s ability to prevail in conflict.', metadata={'type': 'file', 'url': 'https://cdn.llmrails.com/dst_466092be-e79a-49f3-b3e6-50e51ddae186/a63892afdee3469d863520351bd5af9f', 'name': 'Biden-Harris-Administrations-National-Security-Strategy-10.2022.pdf', 'filters': {}}),\n",
" Document(page_content='Your text here', metadata={'type': 'text', 'url': 'https://cdn.llmrails.com/dst_466092be-e79a-49f3-b3e6-50e51ddae186/63c17ac6395e4be1967c63a16356818e', 'name': '71370a91-7f58-4cc7-b2e7-546325960330', 'filters': {}}),\n",
" Document(page_content='Page 1 NATIONAL SECURITY STRATEGY OCTOBER 2022 Page 2 October 12, 2022 From the earliest days of my Presidency, I have argued that our world is at an inflection point.\\n\\nHow we respond to the tremendous challenges and the unprecedented opportunities we face today will determine the direction of our world and impact the security and prosperity of the American people for generations to come.\\n\\nThe 2022 National Security Strategy outlines how my Administration will seize this decisive decade to advance America’s vital interests, position the United States to outmaneuver our geopolitical competitors, tackle shared challenges, and set our world firmly on a path toward a brighter and more hopeful tomorrow.\\n\\nAround the world, the need for American leadership is as great as it has ever been.\\n\\nWe are in the midst of a strategic competition to shape the future of the international order.\\n\\nMeanwhile, shared challenges that impact people everywhere demand increased global cooperation and nations stepping up to their responsibilities at a moment when this has become more difficult.\\n\\nIn response, the United States will lead with our values, and we will work in lockstep with our allies and partners and with all those who share our interests.\\n\\nWe will not leave our future vulnerable to the whims of those who do not share our vision for a world that is free, open, prosperous, and secure.\\n\\nAs the world continues to navigate the lingering impacts of the pandemic and global economic uncertainty, there is no nation better positioned to lead with strength and purpose than the United States of America.\\n\\nFrom the moment I took the oath of office, my Administration has focused on investing in America’s core strategic advantages.\\n\\nOur economy has added 10 million jobs and unemployment rates have reached near record lows.\\n\\nManufacturing jobs have come racing back to the United States.\\n\\nWe’re rebuilding our economy from the bottom up and the middle out.', metadata={'type': 'file', 'url': 'https://cdn.llmrails.com/dst_466092be-e79a-49f3-b3e6-50e51ddae186/a63892afdee3469d863520351bd5af9f', 'name': 'Biden-Harris-Administrations-National-Security-Strategy-10.2022.pdf', 'filters': {}}),\n",
" Document(page_content='Your text here', metadata={'type': 'text', 'url': 'https://cdn.llmrails.com/dst_466092be-e79a-49f3-b3e6-50e51ddae186/8c414a9306e04d47a300f0289ba6e9cf', 'name': 'dacc29f5-8c63-46e0-b5aa-cab2d3c99fb7', 'filters': {}}),\n",
" Document(page_content='To ensure our nuclear deterrent remains responsive to the threats we face, we are modernizing the nuclear Triad, nuclear command, control, and communications, and our nuclear weapons infrastructure, as well as strengthening our extended deterrence commitments to our Allies.\\n\\nWe remain equally committed to reducing the risks of nuclear war.\\n\\nThis includes taking further steps to reduce the role of nuclear weapons in our strategy and pursuing realistic goals for mutual, verifiable arms control, which contribute to our deterrence strategy and strengthen the global non-proliferation regime.\\n\\nThe most important investments are those made in the extraordinary All-Volunteer Force of the Army, Marine Corps, Navy, Air Force, Space Force, Coast Guard—together with our Department of Defense civilian workforce.\\n\\nOur service members are the backbone of America’s national defense and we are committed to their wellbeing and their families while in service and beyond.\\n\\nWe will maintain our foundational principle of civilian control of the military, recognizing that healthy civil-military relations rooted in mutual respect are essential to military effectiveness.\\n\\nWe will strengthen the effectiveness of the force by promoting diversity and inclusion; intensifying our suicide prevention efforts; eliminating the scourges of sexual assault, harassment, and other forms of violence, abuse, and discrimination; and rooting out violent extremism.\\n\\nWe will also uphold our Nation’s sacred obligation to care for veterans and their families when our troops return home.\\n\\nNATIONAL SECURITY STRATEGY 21 Page 22 \\x90\\x90\\x90\\x90\\x90\\x90\\n\\nIntegrated Deterrence The United States has a vital interest in deterring aggression by the PRC, Russia, and other states.\\n\\nMore capable competitors and new strategies of threatening behavior below and above the traditional threshold of conflict mean we cannot afford to rely solely on conventional forces and nuclear deterrence.\\n\\nOur defense strategy must sustain and strengthen deterrence, with the PRC as our pacing challenge.', metadata={'type': 'file', 'url': 'https://cdn.llmrails.com/dst_466092be-e79a-49f3-b3e6-50e51ddae186/a63892afdee3469d863520351bd5af9f', 'name': 'Biden-Harris-Administrations-National-Security-Strategy-10.2022.pdf', 'filters': {}})]"
]
},
"execution_count": 12,
"execution_count": 44,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"query = \"What is your approach to national defense\"\n",
"retriever.get_relevant_documents(query)[0]"
"retriever.invoke(query)"
]
}
],
@@ -320,7 +369,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
"version": "3.11.3"
}
},
"nbformat": 4,

@@ -201,6 +201,120 @@
"source": [
"After retrieval you can continue querying it as usual."
]
},
{
"cell_type": "markdown",
"source": [
"### Per-User Retrieval\n",
"\n",
"When building a retrieval app, you often have to build it with multiple users in mind. This means that you may be storing data not just for one user, but for many different users, and they should not be able to see each other’s data.\n",
"\n",
"Milvus recommends using [partition_key](https://milvus.io/docs/multi_tenancy.md#Partition-key-based-multi-tenancy) to implement multi-tenancy; here is an example."
],
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%% md\n"
}
}
},
{
"cell_type": "code",
"execution_count": 2,
"outputs": [],
"source": [
"from langchain_core.documents import Document\n",
"\n",
"docs = [\n",
"    Document(page_content=\"i worked at kensho\", metadata={\"namespace\": \"harrison\"}),\n",
"    Document(page_content=\"i worked at facebook\", metadata={\"namespace\": \"ankush\"}),\n",
"]\n",
"vectorstore = Milvus.from_documents(\n",
"    docs,\n",
"    embeddings,\n",
"    connection_args={\"host\": \"127.0.0.1\", \"port\": \"19530\"},\n",
"    drop_old=True,\n",
"    partition_key_field=\"namespace\",  # Use the \"namespace\" field as the partition key\n",
")"
],
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%%\n"
}
}
},
{
"cell_type": "markdown",
"source": [
"To conduct a search using the partition key, you should include either of the following in the boolean expression of the search request:\n",
"\n",
"`search_kwargs={\"expr\": '<partition_key> == \"xxxx\"'}`\n",
"\n",
"`search_kwargs={\"expr\": '<partition_key> in [\"xxx\", \"xxx\"]'}`\n",
"\n",
"Replace `<partition_key>` with the name of the field designated as the partition key.\n",
"\n",
"Milvus routes the search to a partition based on the specified partition key, filters entities according to the partition key, and searches among the filtered entities.\n"
],
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%% md\n"
}
}
},
{
"cell_type": "code",
"execution_count": 3,
"outputs": [
{
"data": {
"text/plain": "[Document(page_content='i worked at facebook', metadata={'namespace': 'ankush'})]"
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# This will only get documents for Ankush\n",
"vectorstore.as_retriever(\n",
"    search_kwargs={\"expr\": 'namespace == \"ankush\"'}\n",
").get_relevant_documents(\"where did i work?\")"
],
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%%\n"
}
}
},
{
"cell_type": "code",
"execution_count": 4,
"outputs": [
{
"data": {
"text/plain": "[Document(page_content='i worked at kensho', metadata={'namespace': 'harrison'})]"
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# This will only get documents for Harrison\n",
"vectorstore.as_retriever(\n",
"    search_kwargs={\"expr\": 'namespace == \"harrison\"'}\n",
").get_relevant_documents(\"where did i work?\")"
],
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%%\n"
}
}
}
],
"metadata": {
@@ -224,4 +338,4 @@
},
"nbformat": 4,
"nbformat_minor": 5
}
}
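The `expr` strings passed through `search_kwargs` above are plain Milvus boolean-filter expressions. As an illustrative sketch (this helper is not part of the notebook or of any library), the two supported shapes can be built like this:

```python
def partition_filter(field, values):
    """Build a Milvus boolean expression restricting a search to one
    partition-key value (equality) or several (membership)."""
    if isinstance(values, str):
        return f'{field} == "{values}"'
    quoted = ", ".join(f'"{v}"' for v in values)
    return f"{field} in [{quoted}]"


# The returned string is what goes into search_kwargs={"expr": ...}
print(partition_filter("namespace", "ankush"))               # namespace == "ankush"
print(partition_filter("namespace", ["ankush", "harrison"]))  # namespace in ["ankush", "harrison"]
```

This keeps the per-user filter in one place instead of hand-writing quoted expressions at each call site.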
@@ -17,7 +17,7 @@
"id": "b823d64a",
"metadata": {},
"source": [
"## Setting Up Your Environment[](https://python.langchain.com/docs/modules/data_connection/vectorstores/integrations/rockset#setting-up-environment)\n",
"## Setting Up Your Environment\n",
"\n",
"1. Leverage the `Rockset` console to create a [collection](https://rockset.com/docs/collections/) with the Write API as your source. In this walkthrough, we create a collection named `langchain_demo`. \n",
"    \n",
@@ -249,14 +249,6 @@
"\n",
"Keep an eye on https://rockset.com/ for future updates in this space."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "054de494-e6c0-453a-becd-ebfb2fdf541a",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {

@@ -19,7 +19,7 @@
"\n",
"This notebook goes through how to create your own custom agent.\n",
"\n",
"In this example, we will use OpenAI Function Calling to create this agent.\n",
"In this example, we will use OpenAI Tool Calling to create this agent.\n",
"**This is generally the most reliable way to create agents.**\n",
"\n",
"We will first create it WITHOUT memory, but we will then show how to add memory in.\n",
@@ -61,10 +61,21 @@
},
{
"cell_type": "code",
"execution_count": 5,
"id": "fbe32b5f",
"execution_count": 2,
"id": "490bab35-adbb-4b45-8d0d-232414121e97",
"metadata": {},
"outputs": [],
"outputs": [
{
"data": {
"text/plain": [
"3"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.agents import tool\n",
"\n",
@@ -75,6 +86,16 @@
"    return len(word)\n",
"\n",
"\n",
"get_word_length.invoke(\"abc\")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "c9821fb3-4449-49a0-a708-88a18d39e068",
"metadata": {},
"outputs": [],
"source": [
"tools = [get_word_length]"
]
},
@@ -91,7 +112,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 4,
"id": "aa4b50ea",
"metadata": {},
"outputs": [],
@@ -116,22 +137,24 @@
"metadata": {},
"source": [
"## Bind tools to LLM\n",
"How does the agent know what tools it can use?\n",
"In this case we're relying on OpenAI function calling LLMs, which take functions as a separate argument and have been specifically trained to know when to invoke those functions.\n",
"\n",
"To pass in our tools to the agent, we just need to format them to the [OpenAI function format](https://openai.com/blog/function-calling-and-other-api-updates) and pass them to our model. (By `bind`-ing the functions, we're making sure that they're passed in each time the model is invoked.)"
"How does the agent know what tools it can use?\n",
"\n",
"In this case we're relying on OpenAI tool calling LLMs, which take tools as a separate argument and have been specifically trained to know when to invoke those tools.\n",
"\n",
"To pass in our tools to the agent, we just need to format them to the [OpenAI tool format](https://platform.openai.com/docs/api-reference/chat/create) and pass them to our model. (By `bind`-ing the functions, we're making sure that they're passed in each time the model is invoked.)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 5,
"id": "e82713b6",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.tools.convert_to_openai import format_tool_to_openai_function\n",
"from langchain_community.tools.convert_to_openai import format_tool_to_openai_tool\n",
"\n",
"llm_with_tools = llm.bind(functions=[format_tool_to_openai_function(t) for t in tools])"
"llm_with_tools = llm.bind(tools=[format_tool_to_openai_tool(tool) for tool in tools])"
]
},
{
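For reference, the "OpenAI tool format" that the diff above switches to is a JSON object following the OpenAI chat-completions tool schema. A hand-built sketch of what `format_tool_to_openai_tool` would produce for `get_word_length` (the exact description text here is an assumption) looks like this:

```python
import json

# Hand-built sketch of the OpenAI "tool" schema for the get_word_length tool.
# Each tool is a function declaration with a JSON Schema for its arguments.
get_word_length_tool = {
    "type": "function",
    "function": {
        "name": "get_word_length",
        "description": "Returns the length of a word.",
        "parameters": {
            "type": "object",
            "properties": {"word": {"type": "string"}},
            "required": ["word"],
        },
    },
}

print(json.dumps(get_word_length_tool, indent=2))
```

A list of such objects is what `llm.bind(tools=...)` sends along with every model invocation.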
@@ -146,30 +169,32 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 6,
"id": "925a8ca4",
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents.format_scratchpad import format_to_openai_function_messages\n",
"from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser\n",
"from langchain.agents.format_scratchpad.openai_tools import (\n",
"    format_to_openai_tool_messages,\n",
")\n",
"from langchain.agents.output_parsers.openai_tools import OpenAIToolsAgentOutputParser\n",
"\n",
"agent = (\n",
"    {\n",
"        \"input\": lambda x: x[\"input\"],\n",
"        \"agent_scratchpad\": lambda x: format_to_openai_function_messages(\n",
"        \"agent_scratchpad\": lambda x: format_to_openai_tool_messages(\n",
"            x[\"intermediate_steps\"]\n",
"        ),\n",
"    }\n",
"    | prompt\n",
"    | llm_with_tools\n",
"    | OpenAIFunctionsAgentOutputParser()\n",
"    | OpenAIToolsAgentOutputParser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 7,
"id": "9af9734e",
"metadata": {},
"outputs": [],
@@ -181,7 +206,7 @@
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": 8,
"id": "653b1617",
"metadata": {},
"outputs": [
@@ -193,10 +218,10 @@
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\n",
"Invoking: `get_word_length` with `{'word': 'educa'}`\n",
"Invoking: `get_word_length` with `{'word': 'eudca'}`\n",
"\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3m5\u001b[0m\u001b[32;1m\u001b[1;3mThere are 5 letters in the word \"educa\".\u001b[0m\n",
"\u001b[0m\u001b[36;1m\u001b[1;3m5\u001b[0m\u001b[32;1m\u001b[1;3mThere are 5 letters in the word \"eudca\".\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
@@ -204,17 +229,21 @@
{
"data": {
"text/plain": [
"{'input': 'How many letters in the word educa',\n",
" 'output': 'There are 5 letters in the word \"educa\".'}"
"[{'actions': [OpenAIToolAgentAction(tool='get_word_length', tool_input={'word': 'eudca'}, log=\"\\nInvoking: `get_word_length` with `{'word': 'eudca'}`\\n\\n\\n\", message_log=[AIMessageChunk(content='', additional_kwargs={'tool_calls': [{'index': 0, 'id': 'call_U9SR78eT398r9UbzID2N9LXh', 'function': {'arguments': '{\\n  \"word\": \"eudca\"\\n}', 'name': 'get_word_length'}, 'type': 'function'}]})], tool_call_id='call_U9SR78eT398r9UbzID2N9LXh')],\n",
"  'messages': [AIMessageChunk(content='', additional_kwargs={'tool_calls': [{'index': 0, 'id': 'call_U9SR78eT398r9UbzID2N9LXh', 'function': {'arguments': '{\\n  \"word\": \"eudca\"\\n}', 'name': 'get_word_length'}, 'type': 'function'}]})]},\n",
" {'steps': [AgentStep(action=OpenAIToolAgentAction(tool='get_word_length', tool_input={'word': 'eudca'}, log=\"\\nInvoking: `get_word_length` with `{'word': 'eudca'}`\\n\\n\\n\", message_log=[AIMessageChunk(content='', additional_kwargs={'tool_calls': [{'index': 0, 'id': 'call_U9SR78eT398r9UbzID2N9LXh', 'function': {'arguments': '{\\n  \"word\": \"eudca\"\\n}', 'name': 'get_word_length'}, 'type': 'function'}]})], tool_call_id='call_U9SR78eT398r9UbzID2N9LXh'), observation=5)],\n",
"  'messages': [FunctionMessage(content='5', name='get_word_length')]},\n",
" {'output': 'There are 5 letters in the word \"eudca\".',\n",
"  'messages': [AIMessage(content='There are 5 letters in the word \"eudca\".')]}]"
]
},
"execution_count": 14,
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_executor.invoke({\"input\": \"How many letters in the word educa\"})"
"list(agent_executor.stream({\"input\": \"How many letters in the word eudca\"}))"
]
},
{
@@ -227,7 +256,7 @@
},
{
"cell_type": "code",
"execution_count": 15,
"execution_count": 9,
"id": "60f5dc19",
"metadata": {},
"outputs": [
@@ -237,7 +266,7 @@
"AIMessage(content='There are 6 letters in the word \"educa\".')"
]
},
"execution_count": 15,
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
@@ -270,7 +299,7 @@
},
{
"cell_type": "code",
"execution_count": 16,
"execution_count": 10,
"id": "169006d5",
"metadata": {},
"outputs": [],
@@ -301,7 +330,7 @@
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": 11,
"id": "8c03f36c",
"metadata": {},
"outputs": [],
@@ -321,7 +350,7 @@
},
{
"cell_type": "code",
"execution_count": 18,
"execution_count": 12,
"id": "5429d97f",
"metadata": {},
"outputs": [],
@@ -329,14 +358,14 @@
|
||||
"agent = (\n",
|
||||
" {\n",
|
||||
" \"input\": lambda x: x[\"input\"],\n",
|
||||
" \"agent_scratchpad\": lambda x: format_to_openai_function_messages(\n",
|
||||
" \"agent_scratchpad\": lambda x: format_to_openai_tool_messages(\n",
|
||||
" x[\"intermediate_steps\"]\n",
|
||||
" ),\n",
|
||||
" \"chat_history\": lambda x: x[\"chat_history\"],\n",
|
||||
" }\n",
|
||||
" | prompt\n",
|
||||
" | llm_with_tools\n",
|
||||
" | OpenAIFunctionsAgentOutputParser()\n",
|
||||
" | OpenAIToolsAgentOutputParser()\n",
|
||||
")\n",
|
||||
"agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)"
|
||||
]
|
||||
@@ -351,7 +380,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 19,
|
||||
"execution_count": 13,
|
||||
"id": "9d9da346",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
@@ -386,7 +415,7 @@
|
||||
" 'output': 'No, \"educa\" is not a real word in English.'}"
|
||||
]
|
||||
},
|
||||
"execution_count": 19,
|
||||
"execution_count": 13,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
@@ -402,14 +431,6 @@
|
||||
")\n",
|
||||
"agent_executor.invoke({\"input\": \"is that a real word?\", \"chat_history\": chat_history})"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "f21bcd99",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
@@ -428,7 +449,7 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.10.1"
|
||||
"version": "3.11.4"
|
||||
},
|
||||
"vscode": {
|
||||
"interpreter": {
|
||||
@@ -1,350 +0,0 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "b69e747b-4e79-4caf-8f8b-c6e70275a31d",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Event Streaming\n",
|
||||
"\n",
|
||||
"**NEW** This is a new API that only works with recent versions of langchain-core!\n",
|
||||
"\n",
|
||||
"In this notebook, we'll see how to use `astream_events` to stream **token by token** from LLM calls used within the tools invoked by the agent. \n",
|
||||
"\n",
|
||||
"We will **only** stream tokens from LLMs used within tools and from no other LLMs (just to show that we can)! \n",
|
||||
"\n",
|
||||
"Feel free to adapt this example to the needs of your application.\n",
|
||||
"\n",
|
||||
"Our agent will use the OpenAI tools API for tool invocation, and we'll provide the agent with two tools:\n",
|
||||
"\n",
|
||||
"1. `where_cat_is_hiding`: A tool that uses an LLM to tell us where the cat is hiding\n",
|
||||
"2. `tell_me_a_joke_about`: A tool that can use an LLM to tell a joke about the given topic\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"## ⚠️ Beta API ⚠️ ##\n",
|
||||
"\n",
|
||||
"Event Streaming is a **beta** API, and may change a bit based on feedback.\n",
|
||||
"\n",
|
||||
"Keep in mind the following constraints (repeated in tools section):\n",
|
||||
"\n",
|
||||
"* streaming only works properly if using `async`\n",
|
||||
"* propagate callbacks when defining custom functions / runnables\n",
|
||||
"* If creating a tool that uses an LLM, make sure to use `.astream()` on the LLM rather than `.ainvoke` to ask the LLM to stream tokens.\n",
|
||||
"\n",
|
||||
"## Event Hooks Reference\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"Here is a reference table that shows some events that might be emitted by the various Runnable objects.\n",
|
||||
"Definitions for some of the Runnables are included after the table.\n",
|
||||
"\n",
|
||||
"⚠️ When streaming, the inputs for the runnable will not be available until the input stream has been entirely consumed. This means that the inputs will be available at the corresponding `end` hook rather than the `start` event.\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"| event | name | chunk | input | output |\n",
|
||||
"|----------------------|------------------|---------------------------------|-----------------------------------------------|-------------------------------------------------|\n",
|
||||
"| on_chat_model_start | [model name] | | {\"messages\": [[SystemMessage, HumanMessage]]} | |\n",
|
||||
"| on_chat_model_stream | [model name] | AIMessageChunk(content=\"hello\") | | |\n",
|
||||
"| on_chat_model_end | [model name] | | {\"messages\": [[SystemMessage, HumanMessage]]} | {\"generations\": [...], \"llm_output\": None, ...} |\n",
|
||||
"| on_llm_start | [model name] | | {'input': 'hello'} | |\n",
|
||||
"| on_llm_stream | [model name] | 'Hello' | | |\n",
|
||||
"| on_llm_end | [model name] | | {'input': 'hello'} | 'Hello human!' |\n",
|
||||
"| on_chain_start | format_docs | | | |\n",
|
||||
"| on_chain_stream | format_docs | \"hello world!, goodbye world!\" | | |\n",
|
||||
"| on_chain_end | format_docs | | [Document(...)] | \"hello world!, goodbye world!\" |\n",
|
||||
"| on_tool_start | some_tool | | {\"x\": 1, \"y\": \"2\"} | |\n",
|
||||
"| on_tool_stream | some_tool | {\"x\": 1, \"y\": \"2\"} | | |\n",
|
||||
"| on_tool_end | some_tool | | | {\"x\": 1, \"y\": \"2\"} |\n",
|
||||
"| on_retriever_start | [retriever name] | | {\"query\": \"hello\"} | |\n",
|
||||
"| on_retriever_chunk | [retriever name] | {documents: [...]} | | |\n",
|
||||
"| on_retriever_end | [retriever name] | | {\"query\": \"hello\"} | {documents: [...]} |\n",
|
||||
"| on_prompt_start | [template_name] | | {\"question\": \"hello\"} | |\n",
|
||||
"| on_prompt_end | [template_name] | | {\"question\": \"hello\"} | ChatPromptValue(messages: [SystemMessage, ...]) |\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"Here are declarations associated with the events shown above:\n",
|
||||
"\n",
|
||||
"`format_docs`:\n",
|
||||
"\n",
|
||||
"```python\n",
|
||||
"def format_docs(docs: List[Document]) -> str:\n",
|
||||
" '''Format the docs.'''\n",
|
||||
" return \", \".join([doc.page_content for doc in docs])\n",
|
||||
"\n",
|
||||
"format_docs = RunnableLambda(format_docs)\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"`some_tool`:\n",
|
||||
"\n",
|
||||
"```python\n",
|
||||
"@tool\n",
|
||||
"def some_tool(x: int, y: str) -> dict:\n",
|
||||
" '''Some_tool.'''\n",
|
||||
" return {\"x\": x, \"y\": y}\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"`prompt`:\n",
|
||||
"\n",
|
||||
"```python\n",
|
||||
"template = ChatPromptTemplate.from_messages(\n",
|
||||
" [(\"system\", \"You are Cat Agent 007\"), (\"human\", \"{question}\")]\n",
|
||||
").with_config({\"run_name\": \"my_template\", \"tags\": [\"my_template\"]})\n",
|
||||
"```\n",
|
||||
"\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"id": "29205bef-2288-48e9-9067-f19072277a97",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain import hub\n",
|
||||
"from langchain.agents import AgentExecutor, create_openai_tools_agent\n",
|
||||
"from langchain.tools import tool\n",
|
||||
"from langchain_core.callbacks import Callbacks\n",
|
||||
"from langchain_core.prompts import ChatPromptTemplate\n",
|
||||
"from langchain_openai import ChatOpenAI"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "d6b0fafa-ce3b-489b-bf1d-d37b87f4819e",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Create the model\n",
|
||||
"\n",
|
||||
"**Attention** For older versions of langchain, we must set `streaming=True`"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"id": "fa3c3761-a1cd-4118-8559-ea4d8857d394",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"model = ChatOpenAI(temperature=0, streaming=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "b76e1a3b-2983-42d9-ac12-4a0f32cd4a24",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Tools\n",
|
||||
"\n",
|
||||
"We define two tools that rely on a chat model to generate output!\n",
|
||||
"\n",
|
||||
"Please note a few different things:\n",
|
||||
"\n",
|
||||
"1. The tools are **async**\n",
|
||||
"1. The model is invoked using **.astream()** to force the output to stream\n",
|
||||
"1. For older langchain versions you should set `streaming=True` on the model!\n",
|
||||
"1. We attach tags to the model so that we can filter on said tags in our callback handler\n",
|
||||
"1. The tools accept callbacks and propagate them to the model as a runtime argument"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"id": "c767f760-fe52-47e5-9c2a-622f03507aaf",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"@tool\n",
|
||||
"async def where_cat_is_hiding(callbacks: Callbacks) -> str: # <--- Accept callbacks\n",
|
||||
" \"\"\"Where is the cat hiding right now?\"\"\"\n",
|
||||
" chunks = [\n",
|
||||
" chunk\n",
|
||||
" async for chunk in model.astream(\n",
|
||||
" \"Give a one to three word answer about where the cat might be hiding in the house right now.\",\n",
|
||||
" {\n",
|
||||
" \"tags\": [\"tool_llm\"],\n",
|
||||
" \"callbacks\": callbacks,\n",
|
||||
" }, # <--- Propagate callbacks and assign a tag to this model\n",
|
||||
" )\n",
|
||||
" ]\n",
|
||||
" return \"\".join(chunk.content for chunk in chunks)\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"@tool\n",
|
||||
"async def tell_me_a_joke_about(\n",
|
||||
" topic: str, callbacks: Callbacks\n",
|
||||
") -> str: # <--- Accept callbacks\n",
|
||||
" \"\"\"Tell a joke about a given topic.\"\"\"\n",
|
||||
" template = ChatPromptTemplate.from_messages(\n",
|
||||
" [\n",
|
||||
" (\"system\", \"You are Cat Agent 007. You are funny and know many jokes.\"),\n",
|
||||
" (\"human\", \"Tell me a long joke about {topic}\"),\n",
|
||||
" ]\n",
|
||||
" )\n",
|
||||
" chain = template | model.with_config({\"tags\": [\"tool_llm\"]})\n",
|
||||
" chunks = [\n",
|
||||
" chunk\n",
|
||||
" async for chunk in chain.astream({\"topic\": topic}, {\"callbacks\": callbacks})\n",
|
||||
" ]\n",
|
||||
" return \"\".join(chunk.content for chunk in chunks)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "cba476f8-29da-4c2c-9134-186871caf7ae",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Initialize the Agent"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 9,
|
||||
"id": "0bab4488-bf4c-461f-b41e-5e60310fe0f2",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"input_variables=['agent_scratchpad', 'input'] input_types={'chat_history': typing.List[typing.Union[langchain_core.messages.ai.AIMessage, langchain_core.messages.human.HumanMessage, langchain_core.messages.chat.ChatMessage, langchain_core.messages.system.SystemMessage, langchain_core.messages.function.FunctionMessage, langchain_core.messages.tool.ToolMessage]], 'agent_scratchpad': typing.List[typing.Union[langchain_core.messages.ai.AIMessage, langchain_core.messages.human.HumanMessage, langchain_core.messages.chat.ChatMessage, langchain_core.messages.system.SystemMessage, langchain_core.messages.function.FunctionMessage, langchain_core.messages.tool.ToolMessage]]} messages=[SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], template='You are a helpful assistant')), MessagesPlaceholder(variable_name='chat_history', optional=True), HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['input'], template='{input}')), MessagesPlaceholder(variable_name='agent_scratchpad')]\n",
|
||||
"[SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], template='You are a helpful assistant')), MessagesPlaceholder(variable_name='chat_history', optional=True), HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['input'], template='{input}')), MessagesPlaceholder(variable_name='agent_scratchpad')]\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# Get the prompt to use - you can modify this!\n",
|
||||
"prompt = hub.pull(\"hwchase17/openai-tools-agent\")\n",
|
||||
"print(prompt)\n",
|
||||
"print(prompt.messages)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 10,
|
||||
"id": "1762f4e1-402a-4bfb-af26-eb5b7b8f56bd",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"tools = [tell_me_a_joke_about, where_cat_is_hiding]\n",
|
||||
"agent = create_openai_tools_agent(model.with_config({\"tags\": [\"agent\"]}), tools, prompt)\n",
|
||||
"executor = AgentExecutor(agent=agent, tools=tools)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "841271d7-1de1-41a9-9387-bb04368537f1",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Stream the output\n",
|
||||
"\n",
|
||||
"The streamed output is shown with a `|` as the delimiter between tokens. "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 11,
|
||||
"id": "a5d94bd8-4a55-4527-b21a-4245a38c7c26",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stderr",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"/home/eugene/src/langchain/libs/core/langchain_core/_api/beta_decorator.py:86: LangChainBetaWarning: This API is in beta and may change in the future.\n",
|
||||
" warn_beta(\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"--\n",
|
||||
"Starting tool: where_cat_is_hiding with inputs: {}\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"|Under| the| bed|.||\n",
|
||||
"\n",
|
||||
"Ended tool: where_cat_is_hiding\n",
|
||||
"--\n",
|
||||
"Starting tool: tell_me_a_joke_about with inputs: {'topic': 'under the bed'}\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"|Sure|,| here|'s| a| long| joke| about| what|'s| hiding| under| the| bed|:\n",
|
||||
"\n",
|
||||
"|Once| upon| a| time|,| there| was| a| mis|chie|vous| little| boy| named| Tim|my|.| Tim|my| had| always| been| afraid| of| what| might| be| lurking| under| his| bed| at| night|.| Every| evening|,| he| would| ti|pt|oe| into| his| room|,| turn| off| the| lights|,| and| then| make| a| daring| leap| onto| his| bed|,| ensuring| that| nothing| could| grab| his| ankles|.\n",
|
||||
"\n",
|
||||
"|One| night|,| Tim|my|'s| parents| decided| to| play| a| prank| on| him|.| They| hid| a| remote|-controlled| toy| monster| under| his| bed|,| complete| with| glowing| eyes| and| a| grow|ling| sound| effect|.| As| Tim|my| settled| into| bed|,| his| parents| quietly| sn|uck| into| his| room|,| ready| to| give| him| the| scare| of| a| lifetime|.\n",
|
||||
"\n",
|
||||
"|Just| as| Tim|my| was| about| to| drift| off| to| sleep|,| he| heard| a| faint| grow|l| coming| from| under| his| bed|.| His| eyes| widened| with| fear|,| and| his| heart| started| racing|.| He| must|ered| up| the| courage| to| peek| under| the| bed|,| and| to| his| surprise|,| he| saw| a| pair| of| glowing| eyes| staring| back| at| him|.\n",
|
||||
"\n",
|
||||
"|Terr|ified|,| Tim|my| jumped| out| of| bed| and| ran| to| his| parents|,| screaming|,| \"|There|'s| a| monster| under| my| bed|!| Help|!\"\n",
|
||||
"\n",
|
||||
"|His| parents|,| trying| to| st|ifle| their| laughter|,| rushed| into| his| room|.| They| pretended| to| be| just| as| scared| as| Tim|my|,| and| together|,| they| brav|ely| approached| the| bed|.| Tim|my|'s| dad| grabbed| a| bro|om|stick|,| ready| to| defend| his| family| against| the| imaginary| monster|.\n",
|
||||
"\n",
|
||||
"|As| they| got| closer|,| the| \"|monster|\"| under| the| bed| started| to| move|.| Tim|my|'s| mom|,| unable| to| contain| her| laughter| any| longer|,| pressed| a| button| on| the| remote| control|,| causing| the| toy| monster| to| sc|urry| out| from| under| the| bed|.| Tim|my|'s| fear| quickly| turned| into| confusion|,| and| then| into| laughter| as| he| realized| it| was| all| just| a| prank|.\n",
|
||||
"\n",
|
||||
"|From| that| day| forward|,| Tim|my| learned| that| sometimes| the| things| we| fear| the| most| are| just| fig|ments| of| our| imagination|.| And| as| for| what|'s| hiding| under| his| bed|?| Well|,| it|'s| just| dust| b|unn|ies| and| the| occasional| missing| sock|.| Nothing| to| be| afraid| of|!\n",
|
||||
"\n",
|
||||
"|Remember|,| laughter| is| the| best| monster| repell|ent|!||\n",
|
||||
"\n",
|
||||
"Ended tool: tell_me_a_joke_about\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"async for event in executor.astream_events(\n",
|
||||
" {\"input\": \"where is the cat hiding? Tell me a joke about that location?\"},\n",
|
||||
" include_tags=[\"tool_llm\"],\n",
|
||||
" include_types=[\"tool\"],\n",
|
||||
"):\n",
|
||||
" hook = event[\"event\"]\n",
|
||||
" if hook == \"on_chat_model_stream\":\n",
|
||||
" print(event[\"data\"][\"chunk\"].content, end=\"|\")\n",
|
||||
" elif hook in {\"on_chat_model_start\", \"on_chat_model_end\"}:\n",
|
||||
" print()\n",
|
||||
" print()\n",
|
||||
" elif hook == \"on_tool_start\":\n",
|
||||
" print(\"--\")\n",
|
||||
" print(\n",
|
||||
" f\"Starting tool: {event['name']} with inputs: {event['data'].get('input')}\"\n",
|
||||
" )\n",
|
||||
" elif hook == \"on_tool_end\":\n",
|
||||
" print(f\"Ended tool: {event['name']}\")\n",
|
||||
" else:\n",
|
||||
" pass"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.11.4"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
||||
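The dispatch logic in the `astream_events` loop of the notebook above can be sketched without an agent or OpenAI access. The handler below routes hand-written event dicts; the event shapes are assumptions that mirror the hooks table, not real LangChain objects:

```python
# Minimal sketch of the astream_events dispatch loop, run against fake events
# so it needs no network access. Event dict shapes are assumptions.

def handle_event(event):
    """Route one astream_events-style event to a printable fragment."""
    hook = event["event"]
    if hook == "on_chat_model_stream":
        # token-by-token chunks, joined with "|" as in the notebook output
        return event["data"]["chunk"] + "|"
    if hook == "on_tool_start":
        return f"--\nStarting tool: {event['name']} with inputs: {event['data'].get('input')}\n"
    if hook == "on_tool_end":
        return f"\nEnded tool: {event['name']}\n"
    return None

events = [
    {"event": "on_tool_start", "name": "where_cat_is_hiding", "data": {"input": {}}},
    {"event": "on_chat_model_stream", "data": {"chunk": "Under"}},
    {"event": "on_chat_model_stream", "data": {"chunk": " the bed."}},
    {"event": "on_tool_end", "name": "where_cat_is_hiding", "data": {}},
]

out = "".join(frag for e in events if (frag := handle_event(e)) is not None)
print(out)
```

In the real notebook the same branching runs over events produced by `executor.astream_events(...)`.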
@@ -95,42 +95,40 @@ prompt = PromptTemplate.from_template("1 + {number} = ")
|
||||
|
||||
# Constructor callback: First, let's explicitly set the StdOutCallbackHandler when initializing our chain
|
||||
chain = LLMChain(llm=llm, prompt=prompt, callbacks=[handler])
|
||||
chain.run(number=2)
|
||||
chain.invoke({"number":2})
|
||||
|
||||
# Use verbose flag: Then, let's use the `verbose` flag to achieve the same result
|
||||
chain = LLMChain(llm=llm, prompt=prompt, verbose=True)
|
||||
chain.run(number=2)
|
||||
chain.invoke({"number":2})
|
||||
|
||||
# Request callbacks: Finally, let's use the request `callbacks` to achieve the same result
|
||||
chain = LLMChain(llm=llm, prompt=prompt)
|
||||
chain.run(number=2, callbacks=[handler])
|
||||
chain.invoke({"number":2}, {"callbacks":[handler]})
|
||||
|
||||
```
|
||||
|
||||
<CodeOutputBlock lang="python">
|
||||
|
||||
```
|
||||
> Entering new LLMChain chain...
|
||||
Prompt after formatting:
|
||||
1 + 2 =
|
||||
> Entering new LLMChain chain...
|
||||
Prompt after formatting:
|
||||
1 + 2 =
|
||||
|
||||
> Finished chain.
|
||||
> Finished chain.
|
||||
|
||||
|
||||
> Entering new LLMChain chain...
|
||||
Prompt after formatting:
|
||||
1 + 2 =
|
||||
> Entering new LLMChain chain...
|
||||
Prompt after formatting:
|
||||
1 + 2 =
|
||||
|
||||
> Finished chain.
|
||||
> Finished chain.
|
||||
|
||||
|
||||
> Entering new LLMChain chain...
|
||||
Prompt after formatting:
|
||||
1 + 2 =
|
||||
> Entering new LLMChain chain...
|
||||
Prompt after formatting:
|
||||
1 + 2 =
|
||||
|
||||
> Finished chain.
|
||||
|
||||
|
||||
'\n\n3'
|
||||
> Finished chain.
|
||||
```
|
||||
|
||||
</CodeOutputBlock>
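The difference between constructor callbacks and request callbacks shown above can be illustrated with a toy chain. This is not LangChain's actual implementation; `ToyChain` and `StdOutHandler` are hypothetical stand-ins for `LLMChain` and `StdOutCallbackHandler`:

```python
# Toy illustration (not LangChain's classes) of the two ways callbacks attach:
# at construction time, or per-invoke as a runtime argument.

class StdOutHandler:
    def on_chain_start(self, inputs):
        print(f"> Entering chain with {inputs}")

    def on_chain_end(self, output):
        print(f"> Finished chain -> {output}")

class ToyChain:
    def __init__(self, callbacks=None):
        self.callbacks = callbacks or []  # constructor callbacks: used on every run

    def invoke(self, inputs, config=None):
        # request callbacks are merged in for this single run only
        handlers = self.callbacks + (config or {}).get("callbacks", [])
        for h in handlers:
            h.on_chain_start(inputs)
        output = 1 + inputs["number"]  # stand-in for the LLM call
        for h in handlers:
            h.on_chain_end(output)
        return output

handler = StdOutHandler()
assert ToyChain(callbacks=[handler]).invoke({"number": 2}) == 3         # constructor
assert ToyChain().invoke({"number": 2}, {"callbacks": [handler]}) == 3  # request
```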
|
||||
|
||||
@@ -359,11 +359,20 @@
|
||||
"Here we've gone over how to add application logic for incorporating historical outputs, but we're still manually updating the chat history and inserting it into each input. In a real Q&A application we'll want some way of persisting chat history and some way of automatically inserting and updating it.\n",
|
||||
"\n",
|
||||
"For this we can use:\n",
|
||||
"\n",
|
||||
"- [BaseChatMessageHistory](/docs/modules/memory/chat_messages/): Store chat history.\n",
|
||||
"- [RunnableWithMessageHistory](/docs/expression_language/how_to/message_history): Wrapper for an LCEL chain and a `BaseChatMessageHistory` that handles injecting chat history into inputs and updating it after each invocation.\n",
|
||||
"\n",
|
||||
"For a detailed walkthrough of how to use these classes together to create a stateful conversational chain, head to the [How to add message history (memory)](/docs/expression_language/how_to/message_history) LCEL page."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "1f67a60a-0a31-4315-9cce-19c78d658f6a",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
|
||||
@@ -298,6 +298,18 @@
|
||||
" config={\"configurable\": {\"search_kwargs\": {\"namespace\": \"ankush\"}}},\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
"For more vectorstore implementations for multi-user, please refer to specific pages, such as [Milvus](/docs/integrations/vectorstores/milvus)."
|
||||
],
|
||||
"metadata": {
|
||||
"collapsed": false,
|
||||
"pycharm": {
|
||||
"name": "#%% md\n"
|
||||
}
|
||||
}
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
@@ -321,4 +333,4 @@
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
||||
}
|
||||
@@ -20,6 +20,7 @@
|
||||
"- It can answer questions based on the databases' schema as well as on the databases' content (like describing a specific table).\n",
|
||||
"- It can recover from errors by running a generated query, catching the traceback and regenerating it correctly.\n",
|
||||
"- It can query the database as many times as needed to answer the user question.\n",
|
||||
"- It will save tokens by only retrieving the schema from relevant tables.\n",
|
||||
"\n",
|
||||
"To initialize the agent we'll use the [create_sql_agent](https://api.python.langchain.com/en/latest/agent_toolkits/langchain_community.agent_toolkits.sql.base.create_sql_agent.html) constructor. This agent uses the `SQLDatabaseToolkit` which contains tools to: \n",
|
||||
"\n",
|
||||
@@ -35,7 +36,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"execution_count": 43,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
@@ -51,7 +52,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"execution_count": 2,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
@@ -81,7 +82,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"execution_count": 4,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
@@ -98,7 +99,7 @@
|
||||
"\"[(1, 'AC/DC'), (2, 'Accept'), (3, 'Aerosmith'), (4, 'Alanis Morissette'), (5, 'Alice In Chains'), (6, 'Antônio Carlos Jobim'), (7, 'Apocalyptica'), (8, 'Audioslave'), (9, 'BackBeat'), (10, 'Billy Cobham')]\""
|
||||
]
|
||||
},
|
||||
"execution_count": 1,
|
||||
"execution_count": 4,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
@@ -121,9 +122,16 @@
|
||||
"We'll use an OpenAI chat model and an `\"openai-tools\"` agent, which will use OpenAI's function-calling API to drive the agent's tool selection and invocations."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"As we can see, the agent will first choose which tables are relevant and then add the schema for those tables and a few sample rows to the prompt."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 21,
|
||||
"execution_count": 5,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
@@ -136,7 +144,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"execution_count": 45,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
@@ -203,7 +211,7 @@
|
||||
"2\t4\t2009-01-02 00:00:00\tUllevålsveien 14\tOslo\tNone\tNorway\t0171\t3.96\n",
|
||||
"3\t8\t2009-01-03 00:00:00\tGrétrystraat 63\tBrussels\tNone\tBelgium\t1000\t5.94\n",
|
||||
"*/\u001b[0m\u001b[32;1m\u001b[1;3m\n",
|
||||
"Invoking: `sql_db_query` with `SELECT c.Country, SUM(i.Total) AS TotalSales FROM Invoice i JOIN Customer c ON i.CustomerId = c.CustomerId GROUP BY c.Country ORDER BY TotalSales DESC`\n",
|
||||
"Invoking: `sql_db_query` with `SELECT c.Country, SUM(i.Total) AS TotalSales FROM Invoice i JOIN Customer c ON i.CustomerId = c.CustomerId GROUP BY c.Country ORDER BY TotalSales DESC LIMIT 10;`\n",
|
||||
"responded: To list the total sales per country, I can query the \"Invoice\" and \"Customer\" tables. I will join these tables on the \"CustomerId\" column and group the results by the \"BillingCountry\" column. Then, I will calculate the sum of the \"Total\" column to get the total sales per country. Finally, I will order the results in descending order of the total sales.\n",
|
||||
"\n",
|
||||
"Here is the SQL query:\n",
|
||||
@@ -214,11 +222,12 @@
|
||||
"JOIN Customer c ON i.CustomerId = c.CustomerId\n",
|
||||
"GROUP BY c.Country\n",
|
||||
"ORDER BY TotalSales DESC\n",
|
||||
"LIMIT 10;\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"Now, I will execute this query to get the results.\n",
|
||||
"Now, I will execute this query to get the total sales per country.\n",
|
||||
"\n",
|
||||
"\u001b[0m\u001b[36;1m\u001b[1;3m[('USA', 523.0600000000003), ('Canada', 303.9599999999999), ('France', 195.09999999999994), ('Brazil', 190.09999999999997), ('Germany', 156.48), ('United Kingdom', 112.85999999999999), ('Czech Republic', 90.24000000000001), ('Portugal', 77.23999999999998), ('India', 75.25999999999999), ('Chile', 46.62), ('Ireland', 45.62), ('Hungary', 45.62), ('Austria', 42.62), ('Finland', 41.620000000000005), ('Netherlands', 40.62), ('Norway', 39.62), ('Sweden', 38.620000000000005), ('Poland', 37.620000000000005), ('Italy', 37.620000000000005), ('Denmark', 37.620000000000005), ('Australia', 37.620000000000005), ('Argentina', 37.620000000000005), ('Spain', 37.62), ('Belgium', 37.62)]\u001b[0m\u001b[32;1m\u001b[1;3mThe total sales per country are as follows:\n",
|
||||
"\u001b[0m\u001b[36;1m\u001b[1;3m[('USA', 523.0600000000003), ('Canada', 303.9599999999999), ('France', 195.09999999999994), ('Brazil', 190.09999999999997), ('Germany', 156.48), ('United Kingdom', 112.85999999999999), ('Czech Republic', 90.24000000000001), ('Portugal', 77.23999999999998), ('India', 75.25999999999999), ('Chile', 46.62)]\u001b[0m\u001b[32;1m\u001b[1;3mThe total sales per country are as follows:\n",
|
||||
"\n",
|
||||
"1. USA: $523.06\n",
|
||||
"2. Canada: $303.96\n",
|
||||
@@ -231,7 +240,7 @@
|
||||
"9. India: $75.26\n",
|
||||
"10. Chile: $46.62\n",
|
||||
"\n",
|
||||
"The country whose customers spent the most is the USA, with a total sales of $523.06.\u001b[0m\n",
|
||||
"To answer the second question, the country whose customers spent the most is the USA, with a total sales of $523.06.\u001b[0m\n",
|
||||
"\n",
|
||||
"\u001b[1m> Finished chain.\u001b[0m\n"
|
||||
]
|
||||
@@ -240,10 +249,10 @@
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'input': \"List the total sales per country. Which country's customers spent the most?\",\n",
|
||||
" 'output': 'The total sales per country are as follows:\\n\\n1. USA: $523.06\\n2. Canada: $303.96\\n3. France: $195.10\\n4. Brazil: $190.10\\n5. Germany: $156.48\\n6. United Kingdom: $112.86\\n7. Czech Republic: $90.24\\n8. Portugal: $77.24\\n9. India: $75.26\\n10. Chile: $46.62\\n\\nThe country whose customers spent the most is the USA, with a total sales of $523.06.'}"
|
||||
" 'output': 'The total sales per country are as follows:\\n\\n1. USA: $523.06\\n2. Canada: $303.96\\n3. France: $195.10\\n4. Brazil: $190.10\\n5. Germany: $156.48\\n6. United Kingdom: $112.86\\n7. Czech Republic: $90.24\\n8. Portugal: $77.24\\n9. India: $75.26\\n10. Chile: $46.62\\n\\nTo answer the second question, the country whose customers spent the most is the USA, with a total sales of $523.06.'}"
|
||||
]
|
||||
},
|
||||
"execution_count": 3,
|
||||
"execution_count": 45,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
@@ -256,7 +265,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"execution_count": 46,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
@@ -326,7 +335,7 @@
|
||||
" 'output': 'The `PlaylistTrack` table has two columns: `PlaylistId` and `TrackId`. It is a junction table that represents the many-to-many relationship between playlists and tracks. \\n\\nHere is the schema of the `PlaylistTrack` table:\\n\\n```\\nCREATE TABLE \"PlaylistTrack\" (\\n\\t\"PlaylistId\" INTEGER NOT NULL, \\n\\t\"TrackId\" INTEGER NOT NULL, \\n\\tPRIMARY KEY (\"PlaylistId\", \"TrackId\"), \\n\\tFOREIGN KEY(\"TrackId\") REFERENCES \"Track\" (\"TrackId\"), \\n\\tFOREIGN KEY(\"PlaylistId\") REFERENCES \"Playlist\" (\"PlaylistId\")\\n)\\n```\\n\\nThe `PlaylistId` column is a foreign key referencing the `PlaylistId` column in the `Playlist` table. The `TrackId` column is a foreign key referencing the `TrackId` column in the `Track` table.\\n\\nHere are three sample rows from the `PlaylistTrack` table:\\n\\n```\\nPlaylistId TrackId\\n1 3402\\n1 3389\\n1 3390\\n```\\n\\nPlease let me know if there is anything else I can help with.'}"
|
||||
]
|
||||
},
|
||||
"execution_count": 4,
|
||||
"execution_count": 46,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
@@ -341,14 +350,14 @@
|
||||
"source": [
|
||||
"## Using a dynamic few-shot prompt\n",
|
||||
"\n",
|
||||
"To optimize agent performance, we can provide a custom prompt with domain-specific knowledge. In this case we'll create a few shot prompt with an example selector, that will dynamically build the few shot prompt based on the user input.\n",
|
||||
"To optimize agent performance, we can provide a custom prompt with domain-specific knowledge. In this case we'll create a few-shot prompt with an example selector that will dynamically build the few-shot prompt based on the user input. This helps the model write better queries by inserting relevant example queries into the prompt for the model to use as a reference.\n",
|
||||
"\n",
|
||||
"First we need some user input <> SQL query examples:"
|
||||
]
|
||||
},
|
||||
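The dynamic few-shot idea above can be sketched with a toy selector: pick the stored examples most similar to the user input and splice them into the prompt. Word-overlap scoring here is an assumption standing in for the embedding-based example selector the notebook actually uses:

```python
# Toy dynamic few-shot prompt: select the k most similar examples (by naive
# word overlap, as a stand-in for semantic similarity) and build the prompt.

examples = [
    {"input": "List all artists.", "query": "SELECT * FROM Artist;"},
    {"input": "How many tracks are there?", "query": "SELECT COUNT(*) FROM Track;"},
    {"input": "List all albums.", "query": "SELECT * FROM Album;"},
]

def select_examples(user_input, k=2):
    words = set(user_input.lower().split())
    scored = sorted(
        examples,
        key=lambda ex: len(words & set(ex["input"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(user_input):
    shots = "\n".join(
        f"Q: {ex['input']}\nSQL: {ex['query']}" for ex in select_examples(user_input)
    )
    return f"{shots}\nQ: {user_input}\nSQL:"

print(build_prompt("List all artists in the database."))
```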
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"execution_count": 12,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
@@ -406,7 +415,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"execution_count": 13,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
@@ -628,20 +637,20 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 11,
|
||||
"execution_count": 47,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"['For Those About To Rock We Salute You',\n",
|
||||
" 'Balls to the Wall',\n",
|
||||
" 'Restless and Wild',\n",
|
||||
" 'Let There Be Rock',\n",
|
||||
" 'Big Ones']"
|
||||
"['Os Cães Ladram Mas A Caravana Não Pára',\n",
|
||||
" 'War',\n",
|
||||
" 'Mais Do Mesmo',\n",
|
||||
" \"Up An' Atom\",\n",
|
||||
" 'Riot Act']"
|
||||
]
|
||||
},
|
||||
"execution_count": 11,
|
||||
"execution_count": 47,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
@@ -655,7 +664,7 @@
|
||||
" res = db.run(query)\n",
|
||||
" res = [el for sub in ast.literal_eval(res) for el in sub if el]\n",
|
||||
" res = [re.sub(r\"\\b\\d+\\b\", \"\", string).strip() for string in res]\n",
|
||||
" return res\n",
|
||||
" return list(set(res))\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"artists = query_as_list(db, \"SELECT Name FROM Artist\")\n",
|
||||
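The `query_as_list` helper in the hunk above (including the deduplication added by this diff) can be exercised standalone against a stub `db` object, so it runs without a real SQLite database; `FakeDB` is a hypothetical stand-in for `SQLDatabase`:

```python
# Standalone sketch of query_as_list from the diff above, run against a fake db.
import ast
import re

class FakeDB:
    def run(self, query):
        # SQLDatabase.run returns a stringified list of row tuples like this
        return "[('AC/DC',), ('Accept',), ('Alice In Chains 1995',), ('AC/DC',)]"

def query_as_list(db, query):
    res = db.run(query)
    res = [el for sub in ast.literal_eval(res) for el in sub if el]
    res = [re.sub(r"\b\d+\b", "", string).strip() for string in res]  # drop bare numbers
    return list(set(res))  # the diff deduplicates the values

artists = query_as_list(FakeDB(), "SELECT Name FROM Artist")
print(sorted(artists))  # → ['AC/DC', 'Accept', 'Alice In Chains']
```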
@@ -672,7 +681,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 12,
|
||||
"execution_count": 48,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
@@ -691,7 +700,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 13,
|
||||
"execution_count": 49,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
@@ -706,7 +715,7 @@
|
||||
"\n",
|
||||
"DO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\n",
|
||||
"\n",
|
||||
"If you need to filter on a proper noun, you must ALWAYS first look up the filter value using the \"search_proper_nouns\" tool!\n",
|
||||
"If you need to filter on a proper noun, you must ALWAYS first look up the filter value using the \"search_proper_nouns\" tool! \n",
|
||||
"\n",
|
||||
"You have access to the following tables: {table_names}\n",
|
||||
"\n",
|
||||
@@ -725,7 +734,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 14,
|
||||
"execution_count": 52,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
@@ -736,19 +745,19 @@
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\n",
"Invoking: `search_proper_nouns` with `{'query': 'alice in chains'}`\n",
"Invoking: `search_proper_nouns` with `{'query': 'alis in chain'}`\n",
"\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3mAlice In Chains\n",
"\n",
"Metallica\n",
"Aisha Duo\n",
"\n",
"Pearl Jam\n",
"Xis\n",
"\n",
"Pearl Jam\n",
"Da Lama Ao Caos\n",
"\n",
"Smashing Pumpkins\u001b[0m\u001b[32;1m\u001b[1;3m\n",
"Invoking: `sql_db_query` with `{'query': \"SELECT COUNT(*) FROM Album WHERE ArtistId = (SELECT ArtistId FROM Artist WHERE Name = 'Alice In Chains')\"}`\n",
"A-Sides\u001b[0m\u001b[32;1m\u001b[1;3m\n",
"Invoking: `sql_db_query` with `SELECT COUNT(*) FROM Album WHERE ArtistId = (SELECT ArtistId FROM Artist WHERE Name = 'Alice In Chains')`\n",
"\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3m[(1,)]\u001b[0m\u001b[32;1m\u001b[1;3mAlice In Chains has 1 album.\u001b[0m\n",
@@ -759,17 +768,17 @@
{
"data": {
"text/plain": [
"{'input': 'How many albums does alice in chains have?',\n",
"{'input': 'How many albums does alis in chain have?',\n",
" 'output': 'Alice In Chains has 1 album.'}"
]
},
"execution_count": 14,
"execution_count": 52,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent.invoke({\"input\": \"How many albums does alice in chains have?\"})"
"agent.invoke({\"input\": \"How many albums does alis in chain have?\"})"
]
},
{
@@ -793,9 +802,9 @@
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv",
"display_name": "pampa-labs",
"language": "python",
"name": "poetry-venv"
"name": "python3"
},
"language_info": {
"codemirror_mode": {
@@ -807,7 +816,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.12"
}
},
"nbformat": 4,
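The notebook changes above deduplicate the proper-noun list (`return list(set(res))`) and rely on a `search_proper_nouns` tool so a misspelled query like "alis in chain" still resolves to "Alice In Chains". The idea can be sketched with the standard library alone; the function names and candidate list below are illustrative stand-ins, not the notebook's actual retriever:

```python
import difflib
import re


def clean_values(raw: list) -> list:
    """Strip standalone numerals and deduplicate, mirroring query_as_list."""
    cleaned = [re.sub(r"\b\d+\b", "", s).strip() for s in raw]
    return list(set(cleaned))


def search_proper_nouns(query: str, nouns: list) -> str:
    """Return the closest known proper noun to a possibly misspelled query."""
    matches = difflib.get_close_matches(query.lower(), [n.lower() for n in nouns], n=1)
    return matches[0] if matches else ""


nouns = clean_values(["Alice In Chains", "Metallica", "Pearl Jam"])
print(search_proper_nouns("alis in chain", nouns))  # closest match: "alice in chains"
```

`difflib.get_close_matches` uses a similarity cutoff (0.6 by default), so unrelated queries return no match rather than a bad one.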
@@ -189,13 +189,13 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"We can look at the [LangSmith trace](https://smith.langchain.com/public/c8fa52ea-be46-4829-bde2-52894970b830/r) to get a better understanding of what this chain is doing. We can also inpect the chain directly for its prompts. Looking at the prompt (below), we can see that it is:\n",
"We can look at the [LangSmith trace](https://smith.langchain.com/public/c8fa52ea-be46-4829-bde2-52894970b830/r) to get a better understanding of what this chain is doing. We can also inspect the chain directly for its prompts. Looking at the prompt (below), we can see that it is:\n",
"\n",
"* Dialect-specific. In this case it references SQLite explicitly.\n",
"* Has definitions for all the available tables.\n",
"* Has three examples rows for each table.\n",
"\n",
"This technique is inspired by papers like [this](https://arxiv.org/pdf/2204.00498.pdf), which suggest showing examples rows and being explicit about tables improves performance. We can also in"
"This technique is inspired by papers like [this](https://arxiv.org/pdf/2204.00498.pdf), which suggest showing examples rows and being explicit about tables improves performance. We can also inspect the full prompt like so:"
]
},
{
@@ -343,6 +343,7 @@
"- It can answer questions based on the databases' schema as well as on the databases' content (like describing a specific table).\n",
"- It can recover from errors by running a generated query, catching the traceback and regenerating it correctly.\n",
"- It can answer questions that require multiple dependent queries.\n",
"- It will save tokens by only considering the schema from relevant tables.\n",
"\n",
"To initialize the agent, we use `create_sql_agent` function. This agent contains the `SQLDatabaseToolkit` which contains tools to: \n",
"\n",

@@ -17,7 +17,7 @@
"source": [
"# Quickstart\n",
"\n",
"In this guide, we will go over the basic ways to create Chains and Agents that call Tools. Tools can be just about anything — APIs, functions, databases, etc. Tools allow us to extend the capabilities of a model beyond just outputting text/messages. The key to using models with tools is correctly prompting a model and parsing its response so that it chooses the right ools and provides the right inputs for them."
"In this guide, we will go over the basic ways to create Chains and Agents that call Tools. Tools can be just about anything — APIs, functions, databases, etc. Tools allow us to extend the capabilities of a model beyond just outputting text/messages. The key to using models with tools is correctly prompting a model and parsing its response so that it chooses the right tools and provides the right inputs for them."
]
},
{
@@ -142,13 +142,8 @@ const config = {
},
image: "img/parrot-chainlink-icon.png",
navbar: {
title: "🦜️🔗 LangChain",
title: "🦜️🔗 LangChain Docs",
items: [
{
to: "/docs/get_started/introduction",
label: "Docs",
position: "left",
},
{
type: "docSidebar",
position: "left",

BIN docs/static/img/ollama_example_img.jpg vendored Normal file
Binary file not shown.
After Width: | Height: | Size: 64 KiB |
@@ -6,7 +6,9 @@ all: help
# Define a variable for the test file path.
TEST_FILE ?= tests/unit_tests/

test:
integration_tests: TEST_FILE = tests/integration_tests/

test integration_tests:
	poetry run pytest $(TEST_FILE)

tests:
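The Makefile hunk above merges the unit- and integration-test targets using a target-specific variable: `TEST_FILE` defaults to the unit-test path, and the `integration_tests: TEST_FILE = ...` line overrides it only while that target runs, so one shared recipe serves both. A minimal standalone sketch of the pattern:

```makefile
# Default value; "?=" lets the caller override it from the command line.
TEST_FILE ?= tests/unit_tests/

# Target-specific variable: applies only when building integration_tests.
integration_tests: TEST_FILE = tests/integration_tests/

# One recipe shared by both targets; $(TEST_FILE) resolves per target.
test integration_tests:
	poetry run pytest $(TEST_FILE)
```

Running `make test` uses the default path, while `make integration_tests` picks up the override without duplicating the recipe.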
@@ -1 +1,45 @@
# __package_name__

This package contains the LangChain integration with __ModuleName__

## Installation

```bash
pip install -U __package_name__
```

And you should configure credentials by setting the following environment variables:

* TODO: fill this out

## Chat Models

`Chat__ModuleName__` class exposes chat models from __ModuleName__.

```python
from __module_name__ import Chat__ModuleName__

llm = Chat__ModuleName__()
llm.invoke("Sing a ballad of LangChain.")
```

## Embeddings

`__ModuleName__Embeddings` class exposes embeddings from __ModuleName__.

```python
from __module_name__ import __ModuleName__Embeddings

embeddings = __ModuleName__Embeddings()
embeddings.embed_query("What is the meaning of life?")
```

## LLMs
`__ModuleName__LLM` class exposes LLMs from __ModuleName__.

```python
from __module_name__ import __ModuleName__LLM

llm = __ModuleName__LLM()
llm.invoke("The meaning of life is")
```
@@ -4,6 +4,11 @@ version = "0.0.1"
description = "An integration package connecting __ModuleName__ and LangChain"
authors = []
readme = "README.md"
repository = "https://github.com/langchain-ai/langchain"
license = "MIT"

[tool.poetry.urls]
"Source Code" = "https://github.com/langchain-ai/langchain/tree/master/libs/partners/__package_name_short__"

[tool.poetry.dependencies]
python = ">=3.8.1,<4.0"

@@ -1,10 +1,14 @@
[tool.poetry]
name = "langchain-cli"
version = "0.0.20"
version = "0.0.21"
description = "CLI for interacting with LangChain"
authors = ["Erick Friis <erick@langchain.dev>"]
license = "MIT"
readme = "README.md"
repository = "https://github.com/langchain-ai/langchain"
license = "MIT"

[tool.poetry.urls]
"Source Code" = "https://github.com/langchain-ai/langchain/tree/master/libs/cli"

[tool.poetry.dependencies]
python = ">=3.8.1,<4.0"
@@ -11,6 +11,7 @@ from typing import (
from langchain_core.tracers.context import register_configure_hook

from langchain_community.callbacks.openai_info import OpenAICallbackHandler
from langchain_community.callbacks.tracers.comet import CometTracer
from langchain_community.callbacks.tracers.wandb import WandbTracer

logger = logging.getLogger(__name__)
@@ -21,11 +22,17 @@ openai_callback_var: ContextVar[Optional[OpenAICallbackHandler]] = ContextVar(
wandb_tracing_callback_var: ContextVar[Optional[WandbTracer]] = ContextVar(  # noqa: E501
    "tracing_wandb_callback", default=None
)
comet_tracing_callback_var: ContextVar[Optional[CometTracer]] = ContextVar(  # noqa: E501
    "tracing_comet_callback", default=None
)

register_configure_hook(openai_callback_var, True)
register_configure_hook(
    wandb_tracing_callback_var, True, WandbTracer, "LANGCHAIN_WANDB_TRACING"
)
register_configure_hook(
    comet_tracing_callback_var, True, CometTracer, "LANGCHAIN_COMET_TRACING"
)


@contextmanager
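The hunk above registers a Comet tracer context variable next to the existing wandb one, so tracing can be enabled either by setting the variable in the current context or by an environment variable. The registration-and-configure pattern can be sketched with the standard library alone; `register_configure_hook`, `configure_handlers`, `DummyTracer`, and `DUMMY_TRACING` below are illustrative stand-ins, not the langchain_core implementation:

```python
import os
from contextvars import ContextVar
from typing import Any, List, Optional, Tuple, Type

# Each hook: (context variable, handler class to auto-create, env var that opts in).
_hooks: List[Tuple[ContextVar, Optional[Type], Optional[str]]] = []


def register_configure_hook(
    var: ContextVar,
    inheritable: bool,
    cls: Optional[Type] = None,
    env_var: Optional[str] = None,
) -> None:
    _hooks.append((var, cls, env_var))


def configure_handlers() -> List[Any]:
    """Collect one handler per hook: an explicit context value wins, else env-var opt-in."""
    handlers: List[Any] = []
    for var, cls, env_var in _hooks:
        value = var.get()
        if value is not None:
            handlers.append(value)
        elif cls is not None and env_var is not None and os.environ.get(env_var) == "true":
            handlers.append(cls())
    return handlers


class DummyTracer:
    pass


tracer_var: ContextVar = ContextVar("tracing_dummy_callback", default=None)
register_configure_hook(tracer_var, True, DummyTracer, "DUMMY_TRACING")

os.environ["DUMMY_TRACING"] = "true"
print(type(configure_handlers()[0]).__name__)  # DummyTracer
```

This mirrors why the diff passes `"LANGCHAIN_COMET_TRACING"` alongside the `CometTracer` class: the handler is only constructed when that switch is set.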
@@ -1,3 +1,4 @@
import logging
import os
import random
import string
@@ -5,10 +6,11 @@ import tempfile
import traceback
from copy import deepcopy
from pathlib import Path
from typing import Any, Dict, List, Optional, Union
from typing import Any, Dict, List, Optional, Sequence, Union

from langchain_core.agents import AgentAction, AgentFinish
from langchain_core.callbacks import BaseCallbackHandler
from langchain_core.documents import Document
from langchain_core.outputs import LLMResult
from langchain_core.utils import get_from_dict_or_env

@@ -21,6 +23,8 @@ from langchain_community.callbacks.utils import (
    import_textstat,
)

logger = logging.getLogger(__name__)


def import_mlflow() -> Any:
    """Import the mlflow python package and raise an error if it is not installed."""
@@ -34,6 +38,47 @@ def import_mlflow() -> Any:
    return mlflow

def mlflow_callback_metrics() -> List[str]:
    return [
        "step",
        "starts",
        "ends",
        "errors",
        "text_ctr",
        "chain_starts",
        "chain_ends",
        "llm_starts",
        "llm_ends",
        "llm_streams",
        "tool_starts",
        "tool_ends",
        "agent_ends",
        "retriever_starts",
        "retriever_ends",
    ]


def get_text_complexity_metrics() -> List[str]:
    return [
        "flesch_reading_ease",
        "flesch_kincaid_grade",
        "smog_index",
        "coleman_liau_index",
        "automated_readability_index",
        "dale_chall_readability_score",
        "difficult_words",
        "linsear_write_formula",
        "gunning_fog",
        # "text_standard"
        "fernandez_huerta",
        "szigriszt_pazos",
        "gutierrez_polini",
        "crawford",
        "gulpease_index",
        "osman",
    ]

def analyze_text(
    text: str,
    nlp: Any = None,
@@ -52,22 +97,7 @@ def analyze_text(
    textstat = import_textstat()
    spacy = import_spacy()
    text_complexity_metrics = {
        "flesch_reading_ease": textstat.flesch_reading_ease(text),
        "flesch_kincaid_grade": textstat.flesch_kincaid_grade(text),
        "smog_index": textstat.smog_index(text),
        "coleman_liau_index": textstat.coleman_liau_index(text),
        "automated_readability_index": textstat.automated_readability_index(text),
        "dale_chall_readability_score": textstat.dale_chall_readability_score(text),
        "difficult_words": textstat.difficult_words(text),
        "linsear_write_formula": textstat.linsear_write_formula(text),
        "gunning_fog": textstat.gunning_fog(text),
        # "text_standard": textstat.text_standard(text),
        "fernandez_huerta": textstat.fernandez_huerta(text),
        "szigriszt_pazos": textstat.szigriszt_pazos(text),
        "gutierrez_polini": textstat.gutierrez_polini(text),
        "crawford": textstat.crawford(text),
        "gulpease_index": textstat.gulpease_index(text),
        "osman": textstat.osman(text),
        key: getattr(textstat, key)(text) for key in get_text_complexity_metrics()
    }
    resp.update({"text_complexity_metrics": text_complexity_metrics})
    resp.update(text_complexity_metrics)
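The refactor above replaces sixteen hand-written `textstat.<metric>(text)` entries with a comprehension that dispatches through `getattr` over `get_text_complexity_metrics()`, so the metric list lives in one place. The dispatch idea in isolation, with a stub object standing in for the `textstat` module:

```python
class StubTextstat:
    """Stand-in for textstat: each metric is a method taking the text."""

    def flesch_reading_ease(self, text: str) -> float:
        return float(len(text))

    def difficult_words(self, text: str) -> int:
        return sum(len(w) > 7 for w in text.split())


def compute_metrics(obj, names, text):
    # One comprehension replaces N hand-written "name: obj.name(text)" entries.
    return {name: getattr(obj, name)(text) for name in names}


metrics = compute_metrics(
    StubTextstat(), ["flesch_reading_ease", "difficult_words"], "a short sentence"
)
print(metrics)
```

Adding a metric now means adding one string to the shared list, instead of editing both the dict literal and the column list later in the file.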
@@ -140,58 +170,64 @@ class MlflowLogger:
        )
        self.mlflow.set_tracking_uri(tracking_uri)

        # User can set other env variables described here
        # > https://www.mlflow.org/docs/latest/tracking.html#logging-to-a-tracking-server

        experiment_name = get_from_dict_or_env(
            kwargs, "experiment_name", "MLFLOW_EXPERIMENT_NAME"
        )
        self.mlf_exp = self.mlflow.get_experiment_by_name(experiment_name)
        if self.mlf_exp is not None:
            self.mlf_expid = self.mlf_exp.experiment_id
        if run_id := kwargs.get("run_id"):
            self.mlf_expid = self.mlflow.get_run(run_id).info.experiment_id
        else:
            self.mlf_expid = self.mlflow.create_experiment(experiment_name)
        # User can set other env variables described here
        # > https://www.mlflow.org/docs/latest/tracking.html#logging-to-a-tracking-server

        self.start_run(kwargs["run_name"], kwargs["run_tags"])
        experiment_name = get_from_dict_or_env(
            kwargs, "experiment_name", "MLFLOW_EXPERIMENT_NAME"
        )
        self.mlf_exp = self.mlflow.get_experiment_by_name(experiment_name)
        if self.mlf_exp is not None:
            self.mlf_expid = self.mlf_exp.experiment_id
        else:
            self.mlf_expid = self.mlflow.create_experiment(experiment_name)

    def start_run(self, name: str, tags: Dict[str, str]) -> None:
        """To start a new run, auto generates the random suffix for name"""
        if name.endswith("-%"):
            rname = "".join(random.choices(string.ascii_uppercase + string.digits, k=7))
            name = name.replace("%", rname)
        self.run = self.mlflow.MlflowClient().create_run(
            self.mlf_expid, run_name=name, tags=tags
        self.start_run(
            kwargs["run_name"], kwargs["run_tags"], kwargs.get("run_id", None)
        )
        self.dir = kwargs.get("artifacts_dir", "")

    def start_run(
        self, name: str, tags: Dict[str, str], run_id: Optional[str] = None
    ) -> None:
        """
        If run_id is provided, it will reuse the run with the given run_id.
        Otherwise, it starts a new run, auto generates the random suffix for name.
        """
        if run_id is None:
            if name.endswith("-%"):
                rname = "".join(
                    random.choices(string.ascii_uppercase + string.digits, k=7)
                )
                name = name[:-1] + rname
            run = self.mlflow.MlflowClient().create_run(
                self.mlf_expid, run_name=name, tags=tags
            )
            run_id = run.info.run_id
        self.run_id = run_id
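In the new `start_run` above, an explicit `run_id` skips run creation entirely, and a name ending in `-%` gets the `%` replaced by a random 7-character suffix (`name[:-1] + rname`). The suffix step on its own:

```python
import random
import string


def resolve_run_name(name: str) -> str:
    """Expand a trailing '-%' into '-<7 random A-Z/0-9 chars>', as in start_run."""
    if name.endswith("-%"):
        rname = "".join(random.choices(string.ascii_uppercase + string.digits, k=7))
        name = name[:-1] + rname  # drop the '%' placeholder, append the suffix
    return name


print(resolve_run_name("langchain-run-%"))  # e.g. langchain-run-3F0A9ZQ
```

Note the behavioral fix the diff carries: the old code used `name.replace("%", rname)`, which would rewrite a `%` anywhere in the name, while `name[:-1] + rname` only touches the trailing placeholder.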
    def finish_run(self) -> None:
        """To finish the run."""
        with self.mlflow.start_run(
            run_id=self.run.info.run_id, experiment_id=self.mlf_expid
        ):
            self.mlflow.end_run()
        self.mlflow.end_run()

    def metric(self, key: str, value: float) -> None:
        """To log metric to mlflow server."""
        with self.mlflow.start_run(
            run_id=self.run.info.run_id, experiment_id=self.mlf_expid
        ):
            self.mlflow.log_metric(key, value)
        self.mlflow.log_metric(key, value, run_id=self.run_id)

    def metrics(
        self, data: Union[Dict[str, float], Dict[str, int]], step: Optional[int] = 0
    ) -> None:
        """To log all metrics in the input dict."""
        with self.mlflow.start_run(
            run_id=self.run.info.run_id, experiment_id=self.mlf_expid
        ):
            self.mlflow.log_metrics(data)
        self.mlflow.log_metrics(data, run_id=self.run_id)
    def jsonf(self, data: Dict[str, Any], filename: str) -> None:
        """To log the input data as json file artifact."""
        with self.mlflow.start_run(
            run_id=self.run.info.run_id, experiment_id=self.mlf_expid
        ):
            self.mlflow.log_dict(data, f"{filename}.json")
        self.mlflow.log_dict(
            data, os.path.join(self.dir, f"{filename}.json"), run_id=self.run_id
        )

    def table(self, name: str, dataframe) -> None:  # type: ignore
        """To log the input pandas dataframe as a html table"""
@@ -199,30 +235,22 @@ class MlflowLogger:

    def html(self, html: str, filename: str) -> None:
        """To log the input html string as html file artifact."""
        with self.mlflow.start_run(
            run_id=self.run.info.run_id, experiment_id=self.mlf_expid
        ):
            self.mlflow.log_text(html, f"{filename}.html")
        self.mlflow.log_text(
            html, os.path.join(self.dir, f"{filename}.html"), run_id=self.run_id
        )

    def text(self, text: str, filename: str) -> None:
        """To log the input text as text file artifact."""
        with self.mlflow.start_run(
            run_id=self.run.info.run_id, experiment_id=self.mlf_expid
        ):
            self.mlflow.log_text(text, f"{filename}.txt")
        self.mlflow.log_text(
            text, os.path.join(self.dir, f"{filename}.txt"), run_id=self.run_id
        )

    def artifact(self, path: str) -> None:
        """To upload the file from given path as artifact."""
        with self.mlflow.start_run(
            run_id=self.run.info.run_id, experiment_id=self.mlf_expid
        ):
            self.mlflow.log_artifact(path)
        self.mlflow.log_artifact(path, run_id=self.run_id)

    def langchain_artifact(self, chain: Any) -> None:
        with self.mlflow.start_run(
            run_id=self.run.info.run_id, experiment_id=self.mlf_expid
        ):
            self.mlflow.langchain.log_model(chain, "langchain-model")
        self.mlflow.langchain.log_model(chain, "langchain-model", run_id=self.run_id)

class MlflowCallbackHandler(BaseMetadataCallbackHandler, BaseCallbackHandler):
@@ -246,6 +274,8 @@ class MlflowCallbackHandler(BaseMetadataCallbackHandler, BaseCallbackHandler):
        experiment: Optional[str] = "langchain",
        tags: Optional[Dict] = None,
        tracking_uri: Optional[str] = None,
        run_id: Optional[str] = None,
        artifacts_dir: str = "",
    ) -> None:
        """Initialize callback handler."""
        import_pandas()
@@ -258,6 +288,8 @@ class MlflowCallbackHandler(BaseMetadataCallbackHandler, BaseCallbackHandler):
        self.experiment = experiment
        self.tags = tags or {}
        self.tracking_uri = tracking_uri
        self.run_id = run_id
        self.artifacts_dir = artifacts_dir

        self.temp_dir = tempfile.TemporaryDirectory()

@@ -266,26 +298,21 @@ class MlflowCallbackHandler(BaseMetadataCallbackHandler, BaseCallbackHandler):
            experiment_name=self.experiment,
            run_name=self.name,
            run_tags=self.tags,
            run_id=self.run_id,
            artifacts_dir=self.artifacts_dir,
        )

        self.action_records: list = []
        self.nlp = spacy.load("en_core_web_sm")
        try:
            self.nlp = spacy.load("en_core_web_sm")
        except OSError:
            logger.warning(
                "Run `python -m spacy download en_core_web_sm` "
                "to download en_core_web_sm model for text visualization."
            )
            self.nlp = None

        self.metrics = {
            "step": 0,
            "starts": 0,
            "ends": 0,
            "errors": 0,
            "text_ctr": 0,
            "chain_starts": 0,
            "chain_ends": 0,
            "llm_starts": 0,
            "llm_ends": 0,
            "llm_streams": 0,
            "tool_starts": 0,
            "tool_ends": 0,
            "agent_ends": 0,
        }
        self.metrics = {key: 0 for key in mlflow_callback_metrics()}

        self.records: Dict[str, Any] = {
            "on_llm_start_records": [],
@@ -298,6 +325,8 @@ class MlflowCallbackHandler(BaseMetadataCallbackHandler, BaseCallbackHandler):
            "on_text_records": [],
            "on_agent_finish_records": [],
            "on_agent_action_records": [],
            "on_retriever_start_records": [],
            "on_retriever_end_records": [],
            "action_records": [],
        }

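The `__init__` change above wraps `spacy.load("en_core_web_sm")` in a try/except so a missing model degrades to `self.nlp = None` (the visualization columns are then skipped) instead of crashing the handler. The same graceful-degradation pattern works for any optional dependency; the function name here is illustrative:

```python
import importlib
import logging

logger = logging.getLogger(__name__)


def load_optional_module(name: str):
    """Return the imported module, or None (with a warning) if unavailable."""
    try:
        return importlib.import_module(name)
    except ImportError:
        logger.warning(
            "Optional dependency %r not installed; related features disabled.", name
        )
        return None


json_mod = load_optional_module("json")  # stdlib module: import succeeds
missing = load_optional_module("no_such_pkg_xyz")  # not installed: returns None
```

Callers then branch on `None`, exactly as the handler later checks `if self.nlp is not None` before building the dependency-tree and entity columns.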
@@ -383,10 +412,14 @@
        self.records["on_llm_end_records"].append(generation_resp)
        self.records["action_records"].append(generation_resp)
        self.mlflg.jsonf(resp, f"llm_end_{llm_ends}_generation_{idx}")
        dependency_tree = generation_resp["dependency_tree"]
        entities = generation_resp["entities"]
        self.mlflg.html(dependency_tree, "dep-" + hash_string(generation.text))
        self.mlflg.html(entities, "ent-" + hash_string(generation.text))
        if "dependency_tree" in generation_resp:
            dependency_tree = generation_resp["dependency_tree"]
            self.mlflg.html(
                dependency_tree, "dep-" + hash_string(generation.text)
            )
        if "entities" in generation_resp:
            entities = generation_resp["entities"]
            self.mlflg.html(entities, "ent-" + hash_string(generation.text))

    def on_llm_error(self, error: BaseException, **kwargs: Any) -> None:
        """Run when LLM errors."""
@@ -410,14 +443,21 @@

        self.mlflg.metrics(self.metrics, step=self.metrics["step"])

        chain_input = ",".join([f"{k}={v}" for k, v in inputs.items()])
        if isinstance(inputs, dict):
            chain_input = ",".join([f"{k}={v}" for k, v in inputs.items()])
        elif isinstance(inputs, list):
            chain_input = ",".join([str(input) for input in inputs])
        else:
            chain_input = str(inputs)
        input_resp = deepcopy(resp)
        input_resp["inputs"] = chain_input
        self.records["on_chain_start_records"].append(input_resp)
        self.records["action_records"].append(input_resp)
        self.mlflg.jsonf(input_resp, f"chain_start_{chain_starts}")

    def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:
    def on_chain_end(
        self, outputs: Union[Dict[str, Any], str, List[str]], **kwargs: Any
    ) -> None:
        """Run when chain ends running."""
        self.metrics["step"] += 1
        self.metrics["chain_ends"] += 1
@@ -426,7 +466,12 @@
        chain_ends = self.metrics["chain_ends"]

        resp: Dict[str, Any] = {}
        chain_output = ",".join([f"{k}={v}" for k, v in outputs.items()])
        if isinstance(outputs, dict):
            chain_output = ",".join([f"{k}={v}" for k, v in outputs.items()])
        elif isinstance(outputs, list):
            chain_output = ",".join(map(str, outputs))
        else:
            chain_output = str(outputs)
        resp.update({"action": "on_chain_end", "outputs": chain_output})
        resp.update(self.metrics)

@@ -487,7 +532,7 @@

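The `on_chain_start`/`on_chain_end` changes above make the handler accept dicts, lists, or plain strings and flatten each into one log string. That normalization, extracted into a helper (the name is illustrative):

```python
from typing import Any


def stringify_io(value: Any) -> str:
    """Flatten chain inputs/outputs to a single string, as the handler now does."""
    if isinstance(value, dict):
        return ",".join(f"{k}={v}" for k, v in value.items())
    elif isinstance(value, list):
        return ",".join(map(str, value))
    return str(value)


print(stringify_io({"question": "hi", "k": 3}))  # question=hi,k=3
```

Before this change, a chain returning a bare string or a list would crash the callback on `.items()`; the type check makes logging total over the shapes LangChain chains actually emit.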
    def on_text(self, text: str, **kwargs: Any) -> None:
        """
        Run when agent is ending.
        Run when text is received.
        """
        self.metrics["step"] += 1
        self.metrics["text_ctr"] += 1
@@ -549,6 +594,69 @@
        self.records["action_records"].append(resp)
        self.mlflg.jsonf(resp, f"agent_action_{tool_starts}")

    def on_retriever_start(
        self,
        serialized: Dict[str, Any],
        query: str,
        **kwargs: Any,
    ) -> Any:
        """Run when Retriever starts running."""
        self.metrics["step"] += 1
        self.metrics["retriever_starts"] += 1
        self.metrics["starts"] += 1

        retriever_starts = self.metrics["retriever_starts"]

        resp: Dict[str, Any] = {}
        resp.update({"action": "on_retriever_start", "query": query})
        resp.update(flatten_dict(serialized))
        resp.update(self.metrics)

        self.mlflg.metrics(self.metrics, step=self.metrics["step"])

        self.records["on_retriever_start_records"].append(resp)
        self.records["action_records"].append(resp)
        self.mlflg.jsonf(resp, f"retriever_start_{retriever_starts}")
    def on_retriever_end(
        self,
        documents: Sequence[Document],
        **kwargs: Any,
    ) -> Any:
        """Run when Retriever ends running."""
        self.metrics["step"] += 1
        self.metrics["retriever_ends"] += 1
        self.metrics["ends"] += 1

        retriever_ends = self.metrics["retriever_ends"]

        resp: Dict[str, Any] = {}
        retriever_documents = [
            {
                "page_content": doc.page_content,
                "metadata": {
                    k: str(v)
                    if not isinstance(v, list)
                    else ",".join(str(x) for x in v)
                    for k, v in doc.metadata.items()
                },
            }
            for doc in documents
        ]
        resp.update({"action": "on_retriever_end", "documents": retriever_documents})
        resp.update(self.metrics)

        self.mlflg.metrics(self.metrics, step=self.metrics["step"])

        self.records["on_retriever_end_records"].append(resp)
        self.records["action_records"].append(resp)
        self.mlflg.jsonf(resp, f"retriever_end_{retriever_ends}")

    def on_retriever_error(self, error: BaseException, **kwargs: Any) -> Any:
        """Run when Retriever errors."""
        self.metrics["step"] += 1
        self.metrics["errors"] += 1

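The new `on_retriever_end` above serializes each retrieved `Document` for JSON logging: list-valued metadata entries are joined with commas, everything else is stringified. A standalone sketch with a minimal stand-in for the Document type:

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List


@dataclass
class Doc:
    """Minimal stand-in for langchain_core.documents.Document."""

    page_content: str
    metadata: Dict[str, Any] = field(default_factory=dict)


def serialize_docs(documents: List[Doc]) -> List[Dict[str, Any]]:
    """Mirror the handler's JSON-safe flattening of retrieved documents."""
    return [
        {
            "page_content": doc.page_content,
            "metadata": {
                k: ",".join(str(x) for x in v) if isinstance(v, list) else str(v)
                for k, v in doc.metadata.items()
            },
        }
        for doc in documents
    ]


docs = [Doc("hello", {"page": 3, "tags": ["a", "b"]})]
print(serialize_docs(docs))
```

Stringifying every metadata value keeps the record safely serializable by `jsonf` regardless of what types a retriever puts in `metadata`.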
    def _create_session_analysis_df(self) -> Any:
        """Create a dataframe with all the information from the session."""
        pd = import_pandas()
@@ -570,39 +678,27 @@
            .dropna(axis=1)
            .rename({"step": "prompt_step"}, axis=1)
        )
        complexity_metrics_columns = []
        visualizations_columns = []
        complexity_metrics_columns = get_text_complexity_metrics()
        visualizations_columns = (
            ["dependency_tree", "entities"] if self.nlp is not None else []
        )

        complexity_metrics_columns = [
            "flesch_reading_ease",
            "flesch_kincaid_grade",
            "smog_index",
            "coleman_liau_index",
            "automated_readability_index",
            "dale_chall_readability_score",
            "difficult_words",
            "linsear_write_formula",
            "gunning_fog",
            # "text_standard",
            "fernandez_huerta",
            "szigriszt_pazos",
            "gutierrez_polini",
            "crawford",
            "gulpease_index",
            "osman",
        token_usage_columns = [
            "token_usage_total_tokens",
            "token_usage_prompt_tokens",
            "token_usage_completion_tokens",
        ]
        token_usage_columns = [
            x for x in token_usage_columns if x in on_llm_end_records_df.columns
        ]

        visualizations_columns = ["dependency_tree", "entities"]

        llm_outputs_df = (
            on_llm_end_records_df[
                [
                    "step",
                    "text",
                    "token_usage_total_tokens",
                    "token_usage_prompt_tokens",
                    "token_usage_completion_tokens",
                ]
                + token_usage_columns
                + complexity_metrics_columns
                + visualizations_columns
            ]
@@ -620,14 +716,18 @@
            )
        return session_analysis_df

    def _contain_llm_records(self):
        return bool(self.records["on_llm_start_records"])

    def flush_tracker(self, langchain_asset: Any = None, finish: bool = False) -> None:
        pd = import_pandas()
        self.mlflg.table("action_records", pd.DataFrame(self.records["action_records"]))
        session_analysis_df = self._create_session_analysis_df()
        chat_html = session_analysis_df.pop("chat_html")
        chat_html = chat_html.replace("\n", "", regex=True)
        self.mlflg.table("session_analysis", pd.DataFrame(session_analysis_df))
        self.mlflg.html("".join(chat_html.tolist()), "chat_html")
        if self._contain_llm_records():
            session_analysis_df = self._create_session_analysis_df()
            chat_html = session_analysis_df.pop("chat_html")
            chat_html = chat_html.replace("\n", "", regex=True)
            self.mlflg.table("session_analysis", pd.DataFrame(session_analysis_df))
            self.mlflg.html("".join(chat_html.tolist()), "chat_html")

        if langchain_asset:
            # To avoid circular import error

@@ -35,6 +35,7 @@ from langchain_community.chat_message_histories.sql import SQLChatMessageHistory
from langchain_community.chat_message_histories.streamlit import (
    StreamlitChatMessageHistory,
)
from langchain_community.chat_message_histories.tidb import TiDBChatMessageHistory
from langchain_community.chat_message_histories.upstash_redis import (
    UpstashRedisChatMessageHistory,
)
@@ -62,4 +63,5 @@
    "ZepChatMessageHistory",
    "UpstashRedisChatMessageHistory",
    "Neo4jChatMessageHistory",
    "TiDBChatMessageHistory",
]

@@ -0,0 +1,148 @@
import json
import logging
from datetime import datetime
from typing import List, Optional

from langchain_core.chat_history import BaseChatMessageHistory
from langchain_core.messages import BaseMessage, message_to_dict, messages_from_dict
from sqlalchemy import create_engine, text
from sqlalchemy.exc import SQLAlchemyError
from sqlalchemy.orm import sessionmaker

logger = logging.getLogger(__name__)


class TiDBChatMessageHistory(BaseChatMessageHistory):
    """
    Represents a chat message history stored in a TiDB database.
    """

    def __init__(
        self,
        session_id: str,
        connection_string: str,
        table_name: str = "langchain_message_store",
        earliest_time: Optional[datetime] = None,
    ):
        """
        Initializes a new instance of the TiDBChatMessageHistory class.

        Args:
            session_id (str): The ID of the chat session.
            connection_string (str): The connection string for the TiDB database.
                format: mysql+pymysql://<host>:<PASSWORD>@<host>:4000/<db>?ssl_ca=/etc/ssl/cert.pem&ssl_verify_cert=true&ssl_verify_identity=true
            table_name (str, optional): the table name to store the chat messages.
                Defaults to "langchain_message_store".
            earliest_time (Optional[datetime], optional): The earliest time to retrieve messages from.
                Defaults to None.
        """  # noqa

        self.session_id = session_id
        self.table_name = table_name
        self.earliest_time = earliest_time
        self.cache = []

        # Set up SQLAlchemy engine and session
        self.engine = create_engine(connection_string)
        Session = sessionmaker(bind=self.engine)
        self.session = Session()

        self._create_table_if_not_exists()
        self._load_messages_to_cache()
    def _create_table_if_not_exists(self) -> None:
        """
        Creates a table if it does not already exist in the database.
        """

        create_table_query = text(
            f"""
            CREATE TABLE IF NOT EXISTS {self.table_name} (
                id INT AUTO_INCREMENT PRIMARY KEY,
                session_id VARCHAR(255) NOT NULL,
                message JSON NOT NULL,
                create_time TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
                INDEX session_idx (session_id)
            );"""
        )
        try:
            self.session.execute(create_table_query)
            self.session.commit()
        except SQLAlchemyError as e:
            logger.error(f"Error creating table: {e}")
            self.session.rollback()

    def _load_messages_to_cache(self) -> None:
        """
        Loads messages from the database into the cache.

        This method retrieves messages from the database table. The retrieved messages
        are then stored in the cache for faster access.

        Raises:
            SQLAlchemyError: If there is an error executing the database query.

        """
        time_condition = (
            f"AND create_time >= '{self.earliest_time}'" if self.earliest_time else ""
        )
        query = text(
            f"""
            SELECT message FROM {self.table_name}
            WHERE session_id = :session_id {time_condition}
            ORDER BY id;
            """
        )
        try:
            result = self.session.execute(query, {"session_id": self.session_id})
            for record in result.fetchall():
                message_dict = json.loads(record[0])
                self.cache.append(messages_from_dict([message_dict])[0])
        except SQLAlchemyError as e:
            logger.error(f"Error loading messages to cache: {e}")

    @property
    def messages(self) -> List[BaseMessage]:
        """returns all messages"""
        if len(self.cache) == 0:
            self.reload_cache()
        return self.cache

    def add_message(self, message: BaseMessage) -> None:
        """adds a message to the database and cache"""
        query = text(
            f"INSERT INTO {self.table_name} (session_id, message) VALUES (:session_id, :message);"  # noqa
        )
        try:
            self.session.execute(
                query,
                {
                    "session_id": self.session_id,
                    "message": json.dumps(message_to_dict(message)),
                },
            )
            self.session.commit()
            self.cache.append(message)
        except SQLAlchemyError as e:
            logger.error(f"Error adding message: {e}")
|
||||
self.session.rollback()
|
||||
|
||||
def clear(self) -> None:
|
||||
"""clears all messages"""
|
||||
query = text(f"DELETE FROM {self.table_name} WHERE session_id = :session_id;")
|
||||
try:
|
||||
self.session.execute(query, {"session_id": self.session_id})
|
||||
self.session.commit()
|
||||
self.cache.clear()
|
||||
except SQLAlchemyError as e:
|
||||
logger.error(f"Error clearing messages: {e}")
|
||||
self.session.rollback()
|
||||
|
||||
def reload_cache(self) -> None:
|
||||
"""reloads messages from database to cache"""
|
||||
self.cache.clear()
|
||||
self._load_messages_to_cache()
|
||||
|
||||
def __del__(self) -> None:
|
||||
"""closes the session"""
|
||||
self.session.close()
|
||||
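The storage pattern above (one row per message, JSON-serialized, scoped by `session_id`, ordered by an auto-increment `id`) can be sketched with the stdlib `sqlite3` module standing in for TiDB. This is a minimal illustration only: the real class uses SQLAlchemy against a MySQL-compatible TiDB server, so column types (`JSON`, `AUTO_INCREMENT`, `TIMESTAMP`) and the secondary index differ; the function names here are hypothetical.

```python
import json
import sqlite3

# In-memory stand-in for the TiDB table created by _create_table_if_not_exists.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE IF NOT EXISTS langchain_message_store (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        session_id TEXT NOT NULL,
        message TEXT NOT NULL
    )"""
)

def add_message(session_id: str, message: dict) -> None:
    # Mirrors add_message: serialize the message dict to JSON per row.
    conn.execute(
        "INSERT INTO langchain_message_store (session_id, message) VALUES (?, ?)",
        (session_id, json.dumps(message)),
    )

def load_messages(session_id: str) -> list:
    # Mirrors _load_messages_to_cache: fetch this session's rows in id order.
    rows = conn.execute(
        "SELECT message FROM langchain_message_store "
        "WHERE session_id = ? ORDER BY id",
        (session_id,),
    ).fetchall()
    return [json.loads(r[0]) for r in rows]

add_message("s1", {"type": "human", "data": {"content": "hi"}})
add_message("s1", {"type": "ai", "data": {"content": "hello"}})
print(len(load_messages("s1")))  # 2
```

The round trip through `json.dumps`/`json.loads` is the same trick the class uses with `message_to_dict`/`messages_from_dict` to keep arbitrary message types in a single column.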
@@ -25,6 +25,7 @@ from langchain_community.chat_models.baidu_qianfan_endpoint import QianfanChatEn
from langchain_community.chat_models.bedrock import BedrockChat
from langchain_community.chat_models.cohere import ChatCohere
from langchain_community.chat_models.databricks import ChatDatabricks
from langchain_community.chat_models.deepinfra import ChatDeepInfra
from langchain_community.chat_models.ernie import ErnieBotChat
from langchain_community.chat_models.everlyai import ChatEverlyAI
from langchain_community.chat_models.fake import FakeListChatModel
@@ -47,6 +48,7 @@ from langchain_community.chat_models.ollama import ChatOllama
from langchain_community.chat_models.openai import ChatOpenAI
from langchain_community.chat_models.pai_eas_endpoint import PaiEasChatEndpoint
from langchain_community.chat_models.promptlayer_openai import PromptLayerChatOpenAI
from langchain_community.chat_models.sparkllm import ChatSparkLLM
from langchain_community.chat_models.tongyi import ChatTongyi
from langchain_community.chat_models.vertexai import ChatVertexAI
from langchain_community.chat_models.volcengine_maas import VolcEngineMaasChat
@@ -61,6 +63,7 @@ __all__ = [
    "FakeListChatModel",
    "PromptLayerChatOpenAI",
    "ChatDatabricks",
    "ChatDeepInfra",
    "ChatEverlyAI",
    "ChatAnthropic",
    "ChatCohere",
@@ -86,6 +89,7 @@ __all__ = [
    "ChatBaichuan",
    "ChatHunyuan",
    "GigaChat",
    "ChatSparkLLM",
    "VolcEngineMaasChat",
    "GPTRouter",
    "ChatZhipuAI",
@@ -1,8 +1,8 @@
import json
from typing import Any, Dict, List, Optional, cast

from langchain_core.callbacks import CallbackManagerForLLMRun
from langchain_core.language_models.chat_models import SimpleChatModel
from langchain_core.callbacks.manager import CallbackManagerForLLMRun
from langchain_core.language_models.chat_models import BaseChatModel
from langchain_core.messages import (
    AIMessage,
    BaseMessage,
@@ -10,16 +10,24 @@ from langchain_core.messages import (
    HumanMessage,
    SystemMessage,
)
from langchain_core.pydantic_v1 import SecretStr, validator
from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env
from langchain_core.outputs import ChatGeneration, ChatResult

from langchain_community.llms.azureml_endpoint import (
    AzureMLEndpointClient,
    AzureMLBaseEndpoint,
    AzureMLEndpointApiType,
    ContentFormatterBase,
)


class LlamaContentFormatter(ContentFormatterBase):
    def __init__(self):
        raise TypeError(
            "`LlamaContentFormatter` is deprecated for chat models. Use "
            "`LlamaChatContentFormatter` instead."
        )


class LlamaChatContentFormatter(ContentFormatterBase):
    """Content formatter for `LLaMA`."""

    SUPPORTED_ROLES: List[str] = ["user", "assistant", "system"]
@@ -45,7 +53,7 @@ class LlamaContentFormatter(ContentFormatterBase):
            }
        elif (
            isinstance(message, ChatMessage)
            and message.role in LlamaContentFormatter.SUPPORTED_ROLES
            and message.role in LlamaChatContentFormatter.SUPPORTED_ROLES
        ):
            return {
                "role": message.role,
@@ -53,79 +61,96 @@ class LlamaContentFormatter(ContentFormatterBase):
            }
        else:
            supported = ",".join(
                [role for role in LlamaContentFormatter.SUPPORTED_ROLES]
                [role for role in LlamaChatContentFormatter.SUPPORTED_ROLES]
            )
            raise ValueError(
                f"""Received unsupported role.
                Supported roles for the LLaMa Foundation Model: {supported}"""
            )

    def _format_request_payload(
        self, messages: List[BaseMessage], model_kwargs: Dict
    ) -> bytes:
    @property
    def supported_api_types(self) -> List[AzureMLEndpointApiType]:
        return [AzureMLEndpointApiType.realtime, AzureMLEndpointApiType.serverless]

    def format_request_payload(
        self,
        messages: List[BaseMessage],
        model_kwargs: Dict,
        api_type: AzureMLEndpointApiType,
    ) -> str:
        """Formats the request according to the chosen api"""
        chat_messages = [
            LlamaContentFormatter._convert_message_to_dict(message)
            LlamaChatContentFormatter._convert_message_to_dict(message)
            for message in messages
        ]
        prompt = json.dumps(
            {"input_data": {"input_string": chat_messages, "parameters": model_kwargs}}
        )
        return self.format_request_payload(prompt=prompt, model_kwargs=model_kwargs)
        if api_type == AzureMLEndpointApiType.realtime:
            request_payload = json.dumps(
                {
                    "input_data": {
                        "input_string": chat_messages,
                        "parameters": model_kwargs,
                    }
                }
            )
        elif api_type == AzureMLEndpointApiType.serverless:
            request_payload = json.dumps({"messages": chat_messages, **model_kwargs})
        else:
            raise ValueError(
                f"`api_type` {api_type} is not supported by this formatter"
            )
        return str.encode(request_payload)

    def format_request_payload(self, prompt: str, model_kwargs: Dict) -> bytes:
        """Formats the request according to the chosen api"""
        return str.encode(prompt)

    def format_response_payload(self, output: bytes) -> str:
    def format_response_payload(
        self, output: bytes, api_type: AzureMLEndpointApiType
    ) -> ChatGeneration:
        """Formats response"""
        return json.loads(output)["output"]
        if api_type == AzureMLEndpointApiType.realtime:
            try:
                choice = json.loads(output)["output"]
            except (KeyError, IndexError, TypeError) as e:
                raise ValueError(self.format_error_msg.format(api_type=api_type)) from e
            return ChatGeneration(
                message=BaseMessage(
                    content=choice.strip(),
                    type="assistant",
                ),
                generation_info=None,
            )
        if api_type == AzureMLEndpointApiType.serverless:
            try:
                choice = json.loads(output)["choices"][0]
                if not isinstance(choice, dict):
                    raise TypeError(
                        "Endpoint response is not well formed for a chat "
                        "model. Expected `dict` but `{type(choice)}` was received."
                    )
            except (KeyError, IndexError, TypeError) as e:
                raise ValueError(self.format_error_msg.format(api_type=api_type)) from e
            return ChatGeneration(
                message=BaseMessage(
                    content=choice["message"]["content"].strip(),
                    type=choice["message"]["role"],
                ),
                generation_info=dict(
                    finish_reason=choice.get("finish_reason"),
                    logprobs=choice.get("logprobs"),
                ),
            )
        raise ValueError(f"`api_type` {api_type} is not supported by this formatter")


class AzureMLChatOnlineEndpoint(SimpleChatModel):
    """`AzureML` Chat models API.
class AzureMLChatOnlineEndpoint(BaseChatModel, AzureMLBaseEndpoint):
    """Azure ML Online Endpoint chat models.

    Example:
        .. code-block:: python

            azure_chat = AzureMLChatOnlineEndpoint(
            azure_llm = AzureMLOnlineEndpoint(
                endpoint_url="https://<your-endpoint>.<your_region>.inference.ml.azure.com/score",
                endpoint_api_type=AzureMLApiType.realtime,
                endpoint_api_key="my-api-key",
                content_formatter=content_formatter,
                content_formatter=chat_content_formatter,
            )
    """

    endpoint_url: str = ""
    """URL of pre-existing Endpoint. Should be passed to constructor or specified as
    env var `AZUREML_ENDPOINT_URL`."""

    endpoint_api_key: SecretStr = convert_to_secret_str("")
    """Authentication Key for Endpoint. Should be passed to constructor or specified as
    env var `AZUREML_ENDPOINT_API_KEY`."""

    http_client: Any = None  #: :meta private:

    content_formatter: Any = None
    """The content formatter that provides an input and output
    transform function to handle formats between the LLM and
    the endpoint"""

    model_kwargs: Optional[dict] = None
    """Keyword arguments to pass to the model."""

    @validator("http_client", always=True, allow_reuse=True)
    @classmethod
    def validate_client(cls, field_value: Any, values: Dict) -> AzureMLEndpointClient:
        """Validate that api key and python package exist in environment."""
        values["endpoint_api_key"] = convert_to_secret_str(
            get_from_dict_or_env(values, "endpoint_api_key", "AZUREML_ENDPOINT_API_KEY")
        )
        endpoint_url = get_from_dict_or_env(
            values, "endpoint_url", "AZUREML_ENDPOINT_URL"
        )
        http_client = AzureMLEndpointClient(
            endpoint_url, values["endpoint_api_key"].get_secret_value()
        )
        return http_client
    """  # noqa: E501

    @property
    def _identifying_params(self) -> Dict[str, Any]:
@@ -140,13 +165,13 @@ class AzureMLChatOnlineEndpoint(SimpleChatModel):
        """Return type of llm."""
        return "azureml_chat_endpoint"

    def _call(
    def _generate(
        self,
        messages: List[BaseMessage],
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
    ) -> ChatResult:
        """Call out to an AzureML Managed Online endpoint.
        Args:
            messages: The messages in the conversation with the chat model.
@@ -158,12 +183,17 @@ class AzureMLChatOnlineEndpoint(SimpleChatModel):
                response = azureml_model("Tell me a joke.")
        """
        _model_kwargs = self.model_kwargs or {}
        _model_kwargs.update(kwargs)
        if stop:
            _model_kwargs["stop"] = stop

        request_payload = self.content_formatter._format_request_payload(
            messages, _model_kwargs
        request_payload = self.content_formatter.format_request_payload(
            messages, _model_kwargs, self.endpoint_api_type
        )
        response_payload = self.http_client.call(request_payload, **kwargs)
        generated_text = self.content_formatter.format_response_payload(
            response_payload
        response_payload = self.http_client.call(
            body=request_payload, run_manager=run_manager
        )
        return generated_text
        generations = self.content_formatter.format_response_payload(
            response_payload, self.endpoint_api_type
        )
        return ChatResult(generations=[generations])
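The two request shapes that the new `format_request_payload` branches produce can be sketched without the Azure ML client, using only the stdlib. Here `api_type` is a plain string standing in for `AzureMLEndpointApiType`, and the function name is illustrative, not the library API:

```python
import json

def format_request_payload(chat_messages, model_kwargs, api_type):
    # Mirrors the two branches above: "realtime" wraps messages under
    # input_data/input_string, "serverless" sends an OpenAI-style body.
    if api_type == "realtime":
        body = {
            "input_data": {
                "input_string": chat_messages,
                "parameters": model_kwargs,
            }
        }
    elif api_type == "serverless":
        body = {"messages": chat_messages, **model_kwargs}
    else:
        raise ValueError(f"`api_type` {api_type} is not supported")
    return json.dumps(body).encode()

msgs = [{"role": "user", "content": "Tell me a joke."}]
print(format_request_payload(msgs, {"temperature": 0.7}, "serverless"))
```

Keeping both shapes behind one formatter is what lets the endpoint class switch between a managed (realtime) deployment and a pay-as-you-go (serverless) one without changing call sites.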
@@ -1,7 +1,5 @@
import hashlib
import json
import logging
import time
from typing import Any, Dict, Iterator, List, Mapping, Optional, Type

import requests
@@ -30,7 +28,7 @@ from langchain_core.utils import (

logger = logging.getLogger(__name__)

DEFAULT_API_BASE = "https://api.baichuan-ai.com/v1"
DEFAULT_API_BASE = "https://api.baichuan-ai.com/v1/chat/completions"


def _convert_message_to_dict(message: BaseMessage) -> dict:
@@ -73,14 +71,6 @@ def _convert_delta_to_message_chunk(
    return default_class(content=content)


# signature generation
def _signature(secret_key: SecretStr, payload: Dict[str, Any], timestamp: int) -> str:
    input_str = secret_key.get_secret_value() + json.dumps(payload) + str(timestamp)
    md5 = hashlib.md5()
    md5.update(input_str.encode("utf-8"))
    return md5.hexdigest()


class ChatBaichuan(BaseChatModel):
    """Baichuan chat models API by Baichuan Intelligent Technology.

@@ -91,7 +81,6 @@ class ChatBaichuan(BaseChatModel):
    def lc_secrets(self) -> Dict[str, str]:
        return {
            "baichuan_api_key": "BAICHUAN_API_KEY",
            "baichuan_secret_key": "BAICHUAN_SECRET_KEY",
        }

    @property
@@ -103,14 +92,14 @@ class ChatBaichuan(BaseChatModel):
    baichuan_api_key: Optional[SecretStr] = None
    """Baichuan API Key"""
    baichuan_secret_key: Optional[SecretStr] = None
    """Baichuan Secret Key"""
    """[DEPRECATED, keeping it for backward compatibility] Baichuan Secret Key"""
    streaming: bool = False
    """Whether to stream the results or not."""
    request_timeout: int = 60
    """request timeout for chat http requests"""

    model = "Baichuan2-53B"
    """model name of Baichuan, default is `Baichuan2-53B`."""
    model = "Baichuan2-Turbo-192K"
    """model name of Baichuan, default is `Baichuan2-Turbo-192K`,
    other options include `Baichuan2-Turbo`"""
    temperature: float = 0.3
    """What sampling temperature to use."""
    top_k: int = 5
@@ -168,13 +157,6 @@ class ChatBaichuan(BaseChatModel):
                "BAICHUAN_API_KEY",
            )
        )
        values["baichuan_secret_key"] = convert_to_secret_str(
            get_from_dict_or_env(
                values,
                "baichuan_secret_key",
                "BAICHUAN_SECRET_KEY",
            )
        )

        return values

@@ -187,6 +169,7 @@ class ChatBaichuan(BaseChatModel):
            "top_p": self.top_p,
            "top_k": self.top_k,
            "with_search_enhance": self.with_search_enhance,
            "stream": self.streaming,
        }

        return {**normal_params, **self.model_kwargs}
@@ -205,12 +188,9 @@ class ChatBaichuan(BaseChatModel):
            return generate_from_stream(stream_iter)

        res = self._chat(messages, **kwargs)

        if res.status_code != 200:
            raise ValueError(f"Error from Baichuan api response: {res}")
        response = res.json()

        if response.get("code") != 0:
            raise ValueError(f"Error from Baichuan api response: {response}")

        return self._create_chat_result(response)

    def _stream(
@@ -221,43 +201,49 @@ class ChatBaichuan(BaseChatModel):
        **kwargs: Any,
    ) -> Iterator[ChatGenerationChunk]:
        res = self._chat(messages, **kwargs)

        if res.status_code != 200:
            raise ValueError(f"Error from Baichuan api response: {res}")
        default_chunk_class = AIMessageChunk
        for chunk in res.iter_lines():
            chunk = chunk.decode("utf-8").strip("\r\n")
            parts = chunk.split("data: ", 1)
            chunk = parts[1] if len(parts) > 1 else None
            if chunk is None:
                continue
            if chunk == "[DONE]":
                break
            response = json.loads(chunk)
            if response.get("code") != 0:
                raise ValueError(f"Error from Baichuan api response: {response}")

            data = response.get("data")
            for m in data.get("messages"):
                chunk = _convert_delta_to_message_chunk(m, default_chunk_class)
            for m in response.get("choices"):
                chunk = _convert_delta_to_message_chunk(
                    m.get("delta"), default_chunk_class
                )
                default_chunk_class = chunk.__class__
                yield ChatGenerationChunk(message=chunk)
                if run_manager:
                    run_manager.on_llm_new_token(chunk.content)

    def _chat(self, messages: List[BaseMessage], **kwargs: Any) -> requests.Response:
        if self.baichuan_secret_key is None:
            raise ValueError("Baichuan secret key is not set.")

        parameters = {**self._default_params, **kwargs}

        model = parameters.pop("model")
        headers = parameters.pop("headers", {})
        temperature = parameters.pop("temperature", 0.3)
        top_k = parameters.pop("top_k", 5)
        top_p = parameters.pop("top_p", 0.85)
        with_search_enhance = parameters.pop("with_search_enhance", False)
        stream = parameters.pop("stream", False)

        payload = {
            "model": model,
            "messages": [_convert_message_to_dict(m) for m in messages],
            "parameters": parameters,
            "top_k": top_k,
            "top_p": top_p,
            "temperature": temperature,
            "with_search_enhance": with_search_enhance,
            "stream": stream,
        }

        timestamp = int(time.time())

        url = self.baichuan_api_base
        if self.streaming:
            url = f"{url}/stream"
        url = f"{url}/chat"

        api_key = ""
        if self.baichuan_api_key:
            api_key = self.baichuan_api_key.get_secret_value()
@@ -268,13 +254,6 @@ class ChatBaichuan(BaseChatModel):
            headers={
                "Content-Type": "application/json",
                "Authorization": f"Bearer {api_key}",
                "X-BC-Timestamp": str(timestamp),
                "X-BC-Signature": _signature(
                    secret_key=self.baichuan_secret_key,
                    payload=payload,
                    timestamp=timestamp,
                ),
                "X-BC-Sign-Algo": "MD5",
                **headers,
            },
            json=payload,
@@ -284,8 +263,8 @@ class ChatBaichuan(BaseChatModel):

    def _create_chat_result(self, response: Mapping[str, Any]) -> ChatResult:
        generations = []
        for m in response["data"]["messages"]:
            message = _convert_dict_to_message(m)
        for c in response["choices"]:
            message = _convert_dict_to_message(c["message"])
            gen = ChatGeneration(message=message)
            generations.append(gen)
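The server-sent-events parsing loop in the new `_stream` above (strip the `data: ` prefix, skip non-data lines, stop at the `[DONE]` sentinel, then decode each JSON chunk) can be isolated as a small stdlib-only sketch; the generator name and the sample payloads are illustrative, not the Baichuan API:

```python
import json

def iter_sse_chunks(lines):
    # Mirrors the loop in _stream: decode each raw line, keep only the
    # part after "data: ", and stop when the [DONE] sentinel arrives.
    for raw in lines:
        line = raw.decode("utf-8").strip("\r\n")
        parts = line.split("data: ", 1)
        if len(parts) < 2:
            continue
        data = parts[1]
        if data == "[DONE]":
            break
        yield json.loads(data)

# Fabricated sample stream in the OpenAI-compatible delta format the
# new Baichuan endpoint uses.
stream = [
    b'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    b'data: {"choices": [{"delta": {"content": "lo"}}]}',
    b"data: [DONE]",
]
text = "".join(
    c["choices"][0]["delta"]["content"] for c in iter_sse_chunks(stream)
)
print(text)  # Hello
```

In the real method each decoded chunk is further routed through `_convert_delta_to_message_chunk` so the caller receives typed message chunks rather than raw dicts.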
@@ -32,6 +32,12 @@ class ChatPromptAdapter:
            prompt = convert_messages_to_prompt_anthropic(messages=messages)
        elif provider == "meta":
            prompt = convert_messages_to_prompt_llama(messages=messages)
        elif provider == "amazon":
            prompt = convert_messages_to_prompt_anthropic(
                messages=messages,
                human_prompt="\n\nUser:",
                ai_prompt="\n\nBot:",
            )
        else:
            raise NotImplementedError(
                f"Provider {provider} model does not support chat."
451
libs/community/langchain_community/chat_models/deepinfra.py
Normal file
@@ -0,0 +1,451 @@
"""deepinfra.com chat models wrapper"""
from __future__ import annotations

import json
import logging
from typing import (
    Any,
    AsyncIterator,
    Callable,
    Dict,
    Iterator,
    List,
    Mapping,
    Optional,
    Tuple,
    Type,
    Union,
)

import aiohttp
import requests
from langchain_core.callbacks.manager import (
    AsyncCallbackManagerForLLMRun,
    CallbackManagerForLLMRun,
)
from langchain_core.language_models.chat_models import (
    BaseChatModel,
    agenerate_from_stream,
    generate_from_stream,
)
from langchain_core.language_models.llms import create_base_retry_decorator
from langchain_core.messages import (
    AIMessage,
    AIMessageChunk,
    BaseMessage,
    BaseMessageChunk,
    ChatMessage,
    ChatMessageChunk,
    FunctionMessage,
    FunctionMessageChunk,
    HumanMessage,
    HumanMessageChunk,
    SystemMessage,
    SystemMessageChunk,
)
from langchain_core.outputs import (
    ChatGeneration,
    ChatGenerationChunk,
    ChatResult,
)
from langchain_core.pydantic_v1 import Field, root_validator
from langchain_core.utils import get_from_dict_or_env

from langchain_community.utilities.requests import Requests

logger = logging.getLogger(__name__)


class ChatDeepInfraException(Exception):
    pass


def _create_retry_decorator(
    llm: ChatDeepInfra,
    run_manager: Optional[
        Union[AsyncCallbackManagerForLLMRun, CallbackManagerForLLMRun]
    ] = None,
) -> Callable[[Any], Any]:
    """Returns a tenacity retry decorator, preconfigured to handle DeepInfra exceptions."""
    return create_base_retry_decorator(
        error_types=[requests.exceptions.ConnectTimeout, ChatDeepInfraException],
        max_retries=llm.max_retries,
        run_manager=run_manager,
    )


def _convert_dict_to_message(_dict: Mapping[str, Any]) -> BaseMessage:
    role = _dict["role"]
    if role == "user":
        return HumanMessage(content=_dict["content"])
    elif role == "assistant":
        # Fix for azure
        # Also OpenAI returns None for tool invocations
        content = _dict.get("content", "") or ""
        if _dict.get("function_call"):
            additional_kwargs = {"function_call": dict(_dict["function_call"])}
        else:
            additional_kwargs = {}
        return AIMessage(content=content, additional_kwargs=additional_kwargs)
    elif role == "system":
        return SystemMessage(content=_dict["content"])
    elif role == "function":
        return FunctionMessage(content=_dict["content"], name=_dict["name"])
    else:
        return ChatMessage(content=_dict["content"], role=role)


def _convert_delta_to_message_chunk(
    _dict: Mapping[str, Any], default_class: Type[BaseMessageChunk]
) -> BaseMessageChunk:
    role = _dict.get("role")
    content = _dict.get("content") or ""
    if _dict.get("function_call"):
        additional_kwargs = {"function_call": dict(_dict["function_call"])}
    else:
        additional_kwargs = {}

    if role == "user" or default_class == HumanMessageChunk:
        return HumanMessageChunk(content=content)
    elif role == "assistant" or default_class == AIMessageChunk:
        return AIMessageChunk(content=content, additional_kwargs=additional_kwargs)
    elif role == "system" or default_class == SystemMessageChunk:
        return SystemMessageChunk(content=content)
    elif role == "function" or default_class == FunctionMessageChunk:
        return FunctionMessageChunk(content=content, name=_dict["name"])
    elif role or default_class == ChatMessageChunk:
        return ChatMessageChunk(content=content, role=role)
    else:
        return default_class(content=content)


def _convert_message_to_dict(message: BaseMessage) -> dict:
    if isinstance(message, ChatMessage):
        message_dict = {"role": message.role, "content": message.content}
    elif isinstance(message, HumanMessage):
        message_dict = {"role": "user", "content": message.content}
    elif isinstance(message, AIMessage):
        message_dict = {"role": "assistant", "content": message.content}
        if "function_call" in message.additional_kwargs:
            message_dict["function_call"] = message.additional_kwargs["function_call"]
    elif isinstance(message, SystemMessage):
        message_dict = {"role": "system", "content": message.content}
    elif isinstance(message, FunctionMessage):
        message_dict = {
            "role": "function",
            "content": message.content,
            "name": message.name,
        }
    else:
        raise ValueError(f"Got unknown type {message}")
    if "name" in message.additional_kwargs:
        message_dict["name"] = message.additional_kwargs["name"]
    return message_dict


class ChatDeepInfra(BaseChatModel):
    """A chat model that uses the DeepInfra API."""

    # client: Any  #: :meta private:
    model_name: str = Field(default="meta-llama/Llama-2-70b-chat-hf", alias="model")
    """Model name to use."""
    deepinfra_api_token: Optional[str] = None
    request_timeout: Optional[float] = Field(default=None, alias="timeout")
    temperature: Optional[float] = 1
    """Run inference with this temperature. Must be in the closed
    interval [0.0, 1.0]."""
    model_kwargs: Dict[str, Any] = Field(default_factory=dict)
    top_p: Optional[float] = None
    """Decode using nucleus sampling: consider the smallest set of tokens whose
    probability sum is at least top_p. Must be in the closed interval [0.0, 1.0]."""
    top_k: Optional[int] = None
    """Decode using top-k sampling: consider the set of top_k most probable tokens.
    Must be positive."""
    n: int = 1
    """Number of chat completions to generate for each prompt. Note that the API may
    not return the full n completions if duplicates are generated."""
    max_tokens: int = 256
    streaming: bool = False
    max_retries: int = 1

    @property
    def _default_params(self) -> Dict[str, Any]:
        """Get the default parameters for calling the DeepInfra API."""
        return {
            "model": self.model_name,
            "max_tokens": self.max_tokens,
            "stream": self.streaming,
            "n": self.n,
            "temperature": self.temperature,
            "request_timeout": self.request_timeout,
            **self.model_kwargs,
        }

    @property
    def _client_params(self) -> Dict[str, Any]:
        """Get the parameters used for the DeepInfra client."""
        return {**self._default_params}

    def completion_with_retry(
        self, run_manager: Optional[CallbackManagerForLLMRun] = None, **kwargs: Any
    ) -> Any:
        """Use tenacity to retry the completion call."""
        retry_decorator = _create_retry_decorator(self, run_manager=run_manager)

        @retry_decorator
        def _completion_with_retry(**kwargs: Any) -> Any:
            try:
                request_timeout = kwargs.pop("request_timeout")
                request = Requests(headers=self._headers())
                response = request.post(
                    url=self._url(), data=self._body(kwargs), timeout=request_timeout
                )
                self._handle_status(response.status_code, response.text)
                return response
            except Exception as e:
                logger.error(f"DeepInfra completion failed: {e}")
                raise

        return _completion_with_retry(**kwargs)

    async def acompletion_with_retry(
        self,
        run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> Any:
        """Use tenacity to retry the async completion call."""
        retry_decorator = _create_retry_decorator(self, run_manager=run_manager)

        @retry_decorator
        async def _completion_with_retry(**kwargs: Any) -> Any:
            try:
                request_timeout = kwargs.pop("request_timeout")
                request = Requests(headers=self._headers())
                async with request.apost(
                    url=self._url(), data=self._body(kwargs), timeout=request_timeout
                ) as response:
                    self._handle_status(response.status, response.text)
                    return await response.json()
            except Exception as e:
                logger.error(f"DeepInfra async completion failed: {e}")
                raise

        return await _completion_with_retry(**kwargs)

    @root_validator()
    def validate_environment(cls, values: Dict) -> Dict:
        """Validate api key, python package exists, temperature, top_p, and top_k."""
        # For compatibility with LiteLLM
        api_key = get_from_dict_or_env(
            values,
            "deepinfra_api_key",
            "DEEPINFRA_API_KEY",
            default="",
        )
        values["deepinfra_api_token"] = get_from_dict_or_env(
            values,
            "deepinfra_api_token",
            "DEEPINFRA_API_TOKEN",
            default=api_key,
        )

        if values["temperature"] is not None and not 0 <= values["temperature"] <= 1:
            raise ValueError("temperature must be in the range [0.0, 1.0]")

        if values["top_p"] is not None and not 0 <= values["top_p"] <= 1:
            raise ValueError("top_p must be in the range [0.0, 1.0]")

        if values["top_k"] is not None and values["top_k"] <= 0:
            raise ValueError("top_k must be positive")

        return values

    def _generate(
        self,
        messages: List[BaseMessage],
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        stream: Optional[bool] = None,
        **kwargs: Any,
    ) -> ChatResult:
        should_stream = stream if stream is not None else self.streaming
        if should_stream:
            stream_iter = self._stream(
                messages, stop=stop, run_manager=run_manager, **kwargs
            )
            return generate_from_stream(stream_iter)

        message_dicts, params = self._create_message_dicts(messages, stop)
        params = {**params, **kwargs}
        response = self.completion_with_retry(
            messages=message_dicts, run_manager=run_manager, **params
        )
        return self._create_chat_result(response.json())

    def _create_chat_result(self, response: Mapping[str, Any]) -> ChatResult:
        generations = []
        for res in response["choices"]:
            message = _convert_dict_to_message(res["message"])
            gen = ChatGeneration(
                message=message,
                generation_info=dict(finish_reason=res.get("finish_reason")),
            )
            generations.append(gen)
        token_usage = response.get("usage", {})
        llm_output = {"token_usage": token_usage, "model": self.model_name}
        res = ChatResult(generations=generations, llm_output=llm_output)
        return res

    def _create_message_dicts(
        self, messages: List[BaseMessage], stop: Optional[List[str]]
    ) -> Tuple[List[Dict[str, Any]], Dict[str, Any]]:
        params = self._client_params
        if stop is not None:
            if "stop" in params:
                raise ValueError("`stop` found in both the input and default params.")
            params["stop"] = stop
        message_dicts = [_convert_message_to_dict(m) for m in messages]
        return message_dicts, params

    def _stream(
        self,
        messages: List[BaseMessage],
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> Iterator[ChatGenerationChunk]:
        message_dicts, params = self._create_message_dicts(messages, stop)
        params = {**params, **kwargs, "stream": True}

        response = self.completion_with_retry(
            messages=message_dicts, run_manager=run_manager, **params
        )
        for line in _parse_stream(response.iter_lines()):
            chunk = _handle_sse_line(line)
            if chunk:
                yield ChatGenerationChunk(message=chunk, generation_info=None)
                if run_manager:
                    run_manager.on_llm_new_token(chunk.content)  # type: ignore[arg-type]

    async def _astream(
        self,
        messages: List[BaseMessage],
        stop: Optional[List[str]] = None,
        run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> AsyncIterator[ChatGenerationChunk]:
        message_dicts, params = self._create_message_dicts(messages, stop)
        params = {"messages": message_dicts, "stream": True, **params, **kwargs}

        request_timeout = params.pop("request_timeout")
        request = Requests(headers=self._headers())
        async with request.apost(
            url=self._url(), data=self._body(params), timeout=request_timeout
        ) as response:
            async for line in _parse_stream_async(response.content):
                chunk = _handle_sse_line(line)
                if chunk:
                    yield ChatGenerationChunk(message=chunk, generation_info=None)
                    if run_manager:
                        await run_manager.on_llm_new_token(chunk.content)  # type: ignore[arg-type]
|
||||
|
||||
async def _agenerate(
|
||||
self,
|
||||
messages: List[BaseMessage],
|
||||
stop: Optional[List[str]] = None,
|
||||
run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
|
||||
stream: Optional[bool] = None,
|
||||
**kwargs: Any,
|
||||
) -> ChatResult:
|
||||
should_stream = stream if stream is not None else self.streaming
|
||||
if should_stream:
|
||||
stream_iter = self._astream(
|
||||
messages, stop=stop, run_manager=run_manager, **kwargs
|
||||
)
|
||||
return await agenerate_from_stream(stream_iter)
|
||||
|
||||
message_dicts, params = self._create_message_dicts(messages, stop)
|
||||
params = {"messages": message_dicts, **params, **kwargs}
|
||||
|
||||
res = await self.acompletion_with_retry(run_manager=run_manager, **params)
|
||||
return self._create_chat_result(res)
|
||||
|
||||
@property
|
||||
def _identifying_params(self) -> Dict[str, Any]:
|
||||
"""Get the identifying parameters."""
|
||||
return {
|
||||
"model": self.model_name,
|
||||
"temperature": self.temperature,
|
||||
"top_p": self.top_p,
|
||||
"top_k": self.top_k,
|
||||
"n": self.n,
|
||||
}
|
||||
|
||||
@property
|
||||
def _llm_type(self) -> str:
|
||||
return "deepinfra-chat"
|
||||
|
||||
def _handle_status(self, code: int, text: Any) -> None:
|
||||
if code >= 500:
|
||||
raise ChatDeepInfraException(f"DeepInfra Server: Error {code}")
|
||||
elif code >= 400:
|
||||
raise ValueError(f"DeepInfra received an invalid payload: {text}")
|
||||
elif code != 200:
|
||||
raise Exception(
|
||||
f"DeepInfra returned an unexpected response with status "
|
||||
f"{code}: {text}"
|
||||
)
|
||||
|
||||
def _url(self) -> str:
|
||||
return "https://stage.api.deepinfra.com/v1/openai/chat/completions"
|
||||
|
||||
def _headers(self) -> Dict:
|
||||
return {
|
||||
"Authorization": f"bearer {self.deepinfra_api_token}",
|
||||
"Content-Type": "application/json",
|
||||
}
|
||||
|
||||
def _body(self, kwargs: Any) -> Dict:
|
||||
return kwargs
|
||||
|
||||
|
||||
def _parse_stream(rbody: Iterator[bytes]) -> Iterator[str]:
|
||||
for line in rbody:
|
||||
_line = _parse_stream_helper(line)
|
||||
if _line is not None:
|
||||
yield _line
|
||||
|
||||
|
||||
async def _parse_stream_async(rbody: aiohttp.StreamReader) -> AsyncIterator[str]:
|
||||
async for line in rbody:
|
||||
_line = _parse_stream_helper(line)
|
||||
if _line is not None:
|
||||
yield _line
|
||||
|
||||
|
||||
def _parse_stream_helper(line: bytes) -> Optional[str]:
|
||||
if line and line.startswith(b"data:"):
|
||||
if line.startswith(b"data: "):
|
||||
# SSE event may be valid when it contain whitespace
|
||||
line = line[len(b"data: ") :]
|
||||
else:
|
||||
line = line[len(b"data:") :]
|
||||
if line.strip() == b"[DONE]":
|
||||
# return here will cause GeneratorExit exception in urllib3
|
||||
# and it will close http connection with TCP Reset
|
||||
return None
|
||||
else:
|
||||
return line.decode("utf-8")
|
||||
return None
|
||||
|
||||
|
||||
def _handle_sse_line(line: str) -> Optional[BaseMessageChunk]:
|
||||
try:
|
||||
obj = json.loads(line)
|
||||
default_chunk_class = AIMessageChunk
|
||||
delta = obj.get("choices", [{}])[0].get("delta", {})
|
||||
return _convert_delta_to_message_chunk(delta, default_chunk_class)
|
||||
except Exception:
|
||||
return None
|
||||
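The `_parse_stream_helper` logic above (strip an optional `data:` / `data: ` prefix, treat `[DONE]` as end-of-stream) can be exercised standalone. This is a re-implementation for illustration, not an import from the module in the diff:

```python
from typing import Optional


def parse_sse_data_line(line: bytes) -> Optional[str]:
    """Mirror of the diff's `data:` prefix handling for SSE stream lines."""
    if line and line.startswith(b"data:"):
        if line.startswith(b"data: "):
            line = line[len(b"data: "):]
        else:
            line = line[len(b"data:"):]
        if line.strip() == b"[DONE]":
            # sentinel marking end of the stream
            return None
        return line.decode("utf-8")
    # non-data lines (comments, event names) are ignored
    return None


print(parse_sse_data_line(b'data: {"choices": []}'))  # {"choices": []}
```

Lines that carry no `data:` field and the `[DONE]` sentinel both come back as `None`, which is why the calling loop in the diff checks `if _line is not None` before yielding.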
@@ -3,12 +3,12 @@ from __future__ import annotations

import logging
import os
import warnings
from typing import (
    Any,
    Dict,
    Iterator,
    List,
    Mapping,
    Optional,
    Set,
    Tuple,
@@ -19,20 +19,20 @@ import requests
from langchain_core.callbacks import (
    CallbackManagerForLLMRun,
)
from langchain_core.language_models.chat_models import (
    BaseChatModel,
    generate_from_stream,
)
from langchain_core.messages import AIMessageChunk, BaseMessage
from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResult
from langchain_core.outputs import ChatGenerationChunk, ChatResult
from langchain_core.pydantic_v1 import Field, SecretStr, root_validator
from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env

from langchain_community.adapters.openai import (
    convert_dict_to_message,
    convert_message_to_dict,
)
from langchain_community.chat_models.openai import _convert_delta_to_message_chunk
from langchain_community.chat_models.openai import (
    ChatOpenAI,
    _convert_delta_to_message_chunk,
    generate_from_stream,
)
from langchain_community.utils.openai import is_openai_v1

DEFAULT_API_BASE = "https://api.konko.ai/v1"
DEFAULT_MODEL = "meta-llama/Llama-2-13b-chat-hf"
@@ -40,7 +40,7 @@ DEFAULT_MODEL = "meta-llama/Llama-2-13b-chat-hf"
logger = logging.getLogger(__name__)


class ChatKonko(BaseChatModel):
class ChatKonko(ChatOpenAI):
    """`ChatKonko` Chat large language models API.

    To use, you should have the ``konko`` python package installed, and the
@@ -72,10 +72,8 @@ class ChatKonko(BaseChatModel):
    """What sampling temperature to use."""
    model_kwargs: Dict[str, Any] = Field(default_factory=dict)
    """Holds any model parameters valid for `create` call not explicitly specified."""
    openai_api_key: Optional[SecretStr] = None
    konko_api_key: Optional[SecretStr] = None
    request_timeout: Optional[Union[float, Tuple[float, float]]] = None
    """Timeout for requests to Konko completion API."""
    openai_api_key: Optional[str] = None
    konko_api_key: Optional[str] = None
    max_retries: int = 6
    """Maximum number of retries to make when generating."""
    streaming: bool = False
@@ -100,13 +98,23 @@ class ChatKonko(BaseChatModel):
                "Please install it with `pip install konko`."
            )
        try:
            values["client"] = konko.ChatCompletion
            if is_openai_v1():
                values["client"] = konko.chat.completions
            else:
                values["client"] = konko.ChatCompletion
        except AttributeError:
            raise ValueError(
                "`konko` has no `ChatCompletion` attribute, this is likely "
                "due to an old version of the konko package. Try upgrading it "
                "with `pip install --upgrade konko`."
            )

        if not hasattr(konko, "_is_legacy_openai"):
            warnings.warn(
                "You are using an older version of the 'konko' package. "
                "Please consider upgrading to access new features."
            )

        if values["n"] < 1:
            raise ValueError("n must be at least 1.")
        if values["n"] > 1 and values["streaming"]:
@@ -118,7 +126,6 @@ class ChatKonko(BaseChatModel):
        """Get the default parameters for calling Konko API."""
        return {
            "model": self.model,
            "request_timeout": self.request_timeout,
            "max_tokens": self.max_tokens,
            "stream": self.streaming,
            "n": self.n,
@@ -182,20 +189,6 @@ class ChatKonko(BaseChatModel):

        return _completion_with_retry(**kwargs)

    def _combine_llm_outputs(self, llm_outputs: List[Optional[dict]]) -> dict:
        overall_token_usage: dict = {}
        for output in llm_outputs:
            if output is None:
                # Happens in streaming
                continue
            token_usage = output["token_usage"]
            for k, v in token_usage.items():
                if k in overall_token_usage:
                    overall_token_usage[k] += v
                else:
                    overall_token_usage[k] = v
        return {"token_usage": overall_token_usage, "model_name": self.model}

    def _stream(
        self,
        messages: List[BaseMessage],
@@ -259,19 +252,6 @@ class ChatKonko(BaseChatModel):
        message_dicts = [convert_message_to_dict(m) for m in messages]
        return message_dicts, params

    def _create_chat_result(self, response: Mapping[str, Any]) -> ChatResult:
        generations = []
        for res in response["choices"]:
            message = convert_dict_to_message(res["message"])
            gen = ChatGeneration(
                message=message,
                generation_info=dict(finish_reason=res.get("finish_reason")),
            )
            generations.append(gen)
        token_usage = response.get("usage", {})
        llm_output = {"token_usage": token_usage, "model_name": self.model}
        return ChatResult(generations=generations, llm_output=llm_output)

    @property
    def _identifying_params(self) -> Dict[str, Any]:
        """Get the identifying parameters."""

@@ -246,8 +246,8 @@ class ChatLiteLLM(BaseChatModel):
            import litellm
        except ImportError:
            raise ChatLiteLLMException(
                "Could not import google.generativeai python package. "
                "Please install it with `pip install google-generativeai`"
                "Could not import litellm python package. "
                "Please install it with `pip install litellm`"
            )

        values["openai_api_key"] = get_from_dict_or_env(
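The `_combine_llm_outputs` helper removed from `ChatKonko` above folds the per-call `token_usage` dicts into a single total; the merge is just key-wise addition that skips the `None` entries produced by streaming. A standalone sketch of that logic (illustrative re-implementation, not an import from the diff):

```python
from typing import List, Optional


def combine_token_usage(outputs: List[Optional[dict]]) -> dict:
    """Key-wise sum of token_usage dicts, skipping None (streaming) entries."""
    overall: dict = {}
    for output in outputs:
        if output is None:
            # Happens in streaming
            continue
        for k, v in output["token_usage"].items():
            overall[k] = overall.get(k, 0) + v
    return overall


print(combine_token_usage([
    {"token_usage": {"prompt_tokens": 3, "completion_tokens": 5}},
    None,
    {"token_usage": {"prompt_tokens": 2}},
]))  # {'prompt_tokens': 5, 'completion_tokens': 5}
```

With `ChatKonko` now subclassing `ChatOpenAI`, the equivalent merging is inherited from the parent class rather than duplicated here.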
libs/community/langchain_community/chat_models/sparkllm.py (new file, 473 lines)
@@ -0,0 +1,473 @@
import base64
import hashlib
import hmac
import json
import logging
import queue
import threading
from datetime import datetime
from queue import Queue
from time import mktime
from typing import Any, Dict, Generator, Iterator, List, Mapping, Optional, Type
from urllib.parse import urlencode, urlparse, urlunparse
from wsgiref.handlers import format_date_time

from langchain_core.callbacks import (
    CallbackManagerForLLMRun,
)
from langchain_core.language_models.chat_models import (
    BaseChatModel,
    generate_from_stream,
)
from langchain_core.messages import (
    AIMessage,
    AIMessageChunk,
    BaseMessage,
    BaseMessageChunk,
    ChatMessage,
    ChatMessageChunk,
    HumanMessage,
    HumanMessageChunk,
    SystemMessage,
)
from langchain_core.outputs import (
    ChatGeneration,
    ChatGenerationChunk,
    ChatResult,
)
from langchain_core.pydantic_v1 import Field, root_validator
from langchain_core.utils import (
    get_from_dict_or_env,
    get_pydantic_field_names,
)

logger = logging.getLogger(__name__)


def _convert_message_to_dict(message: BaseMessage) -> dict:
    if isinstance(message, ChatMessage):
        message_dict = {"role": "user", "content": message.content}
    elif isinstance(message, HumanMessage):
        message_dict = {"role": "user", "content": message.content}
    elif isinstance(message, AIMessage):
        message_dict = {"role": "assistant", "content": message.content}
    elif isinstance(message, SystemMessage):
        message_dict = {"role": "system", "content": message.content}
    else:
        raise ValueError(f"Got unknown type {message}")

    return message_dict


def _convert_dict_to_message(_dict: Mapping[str, Any]) -> BaseMessage:
    msg_role = _dict["role"]
    msg_content = _dict["content"]
    if msg_role == "user":
        return HumanMessage(content=msg_content)
    elif msg_role == "assistant":
        content = msg_content or ""
        return AIMessage(content=content)
    elif msg_role == "system":
        return SystemMessage(content=msg_content)
    else:
        return ChatMessage(content=msg_content, role=msg_role)


def _convert_delta_to_message_chunk(
    _dict: Mapping[str, Any], default_class: Type[BaseMessageChunk]
) -> BaseMessageChunk:
    msg_role = _dict["role"]
    msg_content = _dict.get("content", "")
    if msg_role == "user" or default_class == HumanMessageChunk:
        return HumanMessageChunk(content=msg_content)
    elif msg_role == "assistant" or default_class == AIMessageChunk:
        return AIMessageChunk(content=msg_content)
    elif msg_role or default_class == ChatMessageChunk:
        return ChatMessageChunk(content=msg_content, role=msg_role)
    else:
        return default_class(content=msg_content)


class ChatSparkLLM(BaseChatModel):
    """Wrapper around iFlyTek's Spark large language model.

    To use, you should pass `app_id`, `api_key`, `api_secret`
    as a named parameter to the constructor OR set environment
    variables ``IFLYTEK_SPARK_APP_ID``, ``IFLYTEK_SPARK_API_KEY`` and
    ``IFLYTEK_SPARK_API_SECRET``

    Example:
        .. code-block:: python

            client = ChatSparkLLM(
                spark_app_id="<app_id>",
                spark_api_key="<api_key>",
                spark_api_secret="<api_secret>"
            )
    """

    @classmethod
    def is_lc_serializable(cls) -> bool:
        """Return whether this model can be serialized by Langchain."""
        return False

    @property
    def lc_secrets(self) -> Dict[str, str]:
        return {
            "spark_app_id": "IFLYTEK_SPARK_APP_ID",
            "spark_api_key": "IFLYTEK_SPARK_API_KEY",
            "spark_api_secret": "IFLYTEK_SPARK_API_SECRET",
            "spark_api_url": "IFLYTEK_SPARK_API_URL",
            "spark_llm_domain": "IFLYTEK_SPARK_LLM_DOMAIN",
        }

    client: Any = None  #: :meta private:
    spark_app_id: Optional[str] = None
    spark_api_key: Optional[str] = None
    spark_api_secret: Optional[str] = None
    spark_api_url: Optional[str] = None
    spark_llm_domain: Optional[str] = None
    spark_user_id: str = "lc_user"
    streaming: bool = False
    request_timeout: int = 30
    temperature: float = 0.5
    top_k: int = 4
    model_kwargs: Dict[str, Any] = Field(default_factory=dict)

    @root_validator(pre=True)
    def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:
        """Build extra kwargs from additional params that were passed in."""
        all_required_field_names = get_pydantic_field_names(cls)
        extra = values.get("model_kwargs", {})
        for field_name in list(values):
            if field_name in extra:
                raise ValueError(f"Found {field_name} supplied twice.")
            if field_name not in all_required_field_names:
                logger.warning(
                    f"""WARNING! {field_name} is not default parameter.
                    {field_name} was transferred to model_kwargs.
                    Please confirm that {field_name} is what you intended."""
                )
                extra[field_name] = values.pop(field_name)

        invalid_model_kwargs = all_required_field_names.intersection(extra.keys())
        if invalid_model_kwargs:
            raise ValueError(
                f"Parameters {invalid_model_kwargs} should be specified explicitly. "
                f"Instead they were passed in as part of `model_kwargs` parameter."
            )

        values["model_kwargs"] = extra

        return values

    @root_validator()
    def validate_environment(cls, values: Dict) -> Dict:
        values["spark_app_id"] = get_from_dict_or_env(
            values,
            "spark_app_id",
            "IFLYTEK_SPARK_APP_ID",
        )
        values["spark_api_key"] = get_from_dict_or_env(
            values,
            "spark_api_key",
            "IFLYTEK_SPARK_API_KEY",
        )
        values["spark_api_secret"] = get_from_dict_or_env(
            values,
            "spark_api_secret",
            "IFLYTEK_SPARK_API_SECRET",
        )
        values["spark_app_url"] = get_from_dict_or_env(
            values,
            "spark_app_url",
            "IFLYTEK_SPARK_APP_URL",
            "wss://spark-api.xf-yun.com/v3.1/chat",
        )
        values["spark_llm_domain"] = get_from_dict_or_env(
            values,
            "spark_llm_domain",
            "IFLYTEK_SPARK_LLM_DOMAIN",
            "generalv3",
        )
        # put extra params into model_kwargs
        values["model_kwargs"]["temperature"] = values["temperature"] or cls.temperature
        values["model_kwargs"]["top_k"] = values["top_k"] or cls.top_k

        values["client"] = _SparkLLMClient(
            app_id=values["spark_app_id"],
            api_key=values["spark_api_key"],
            api_secret=values["spark_api_secret"],
            api_url=values["spark_api_url"],
            spark_domain=values["spark_llm_domain"],
            model_kwargs=values["model_kwargs"],
        )
        return values

    def _stream(
        self,
        messages: List[BaseMessage],
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> Iterator[ChatGenerationChunk]:
        default_chunk_class = AIMessageChunk

        self.client.arun(
            [_convert_message_to_dict(m) for m in messages],
            self.spark_user_id,
            self.model_kwargs,
            self.streaming,
        )
        for content in self.client.subscribe(timeout=self.request_timeout):
            if "data" not in content:
                continue
            delta = content["data"]
            chunk = _convert_delta_to_message_chunk(delta, default_chunk_class)
            yield ChatGenerationChunk(message=chunk)
            if run_manager:
                run_manager.on_llm_new_token(str(chunk.content))

    def _generate(
        self,
        messages: List[BaseMessage],
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> ChatResult:
        if self.streaming:
            stream_iter = self._stream(
                messages=messages, stop=stop, run_manager=run_manager, **kwargs
            )
            return generate_from_stream(stream_iter)

        self.client.arun(
            [_convert_message_to_dict(m) for m in messages],
            self.spark_user_id,
            self.model_kwargs,
            False,
        )
        completion = {}
        llm_output = {}
        for content in self.client.subscribe(timeout=self.request_timeout):
            if "usage" in content:
                llm_output["token_usage"] = content["usage"]
            if "data" not in content:
                continue
            completion = content["data"]
        message = _convert_dict_to_message(completion)
        generations = [ChatGeneration(message=message)]
        return ChatResult(generations=generations, llm_output=llm_output)

    @property
    def _llm_type(self) -> str:
        return "spark-llm-chat"


class _SparkLLMClient:
    """
    Use websocket-client to call the SparkLLM interface provided by Xfyun,
    which is the iFlyTek's open platform for AI capabilities
    """

    def __init__(
        self,
        app_id: str,
        api_key: str,
        api_secret: str,
        api_url: Optional[str] = None,
        spark_domain: Optional[str] = None,
        model_kwargs: Optional[dict] = None,
    ):
        try:
            import websocket

            self.websocket_client = websocket
        except ImportError:
            raise ImportError(
                "Could not import websocket client python package. "
                "Please install it with `pip install websocket-client`."
            )

        self.api_url = (
            "wss://spark-api.xf-yun.com/v3.1/chat" if not api_url else api_url
        )
        self.app_id = app_id
        self.ws_url = _SparkLLMClient._create_url(
            self.api_url,
            api_key,
            api_secret,
        )
        self.model_kwargs = model_kwargs
        self.spark_domain = spark_domain or "generalv3"
        self.queue: Queue[Dict] = Queue()
        self.blocking_message = {"content": "", "role": "assistant"}

    @staticmethod
    def _create_url(api_url: str, api_key: str, api_secret: str) -> str:
        """
        Generate a request url with an api key and an api secret.
        """
        # generate timestamp by RFC1123
        date = format_date_time(mktime(datetime.now().timetuple()))

        # urlparse
        parsed_url = urlparse(api_url)
        host = parsed_url.netloc
        path = parsed_url.path

        signature_origin = f"host: {host}\ndate: {date}\nGET {path} HTTP/1.1"

        # encrypt using hmac-sha256
        signature_sha = hmac.new(
            api_secret.encode("utf-8"),
            signature_origin.encode("utf-8"),
            digestmod=hashlib.sha256,
        ).digest()

        signature_sha_base64 = base64.b64encode(signature_sha).decode(encoding="utf-8")

        authorization_origin = f'api_key="{api_key}", algorithm="hmac-sha256", \
headers="host date request-line", signature="{signature_sha_base64}"'
        authorization = base64.b64encode(authorization_origin.encode("utf-8")).decode(
            encoding="utf-8"
        )

        # generate url
        params_dict = {"authorization": authorization, "date": date, "host": host}
        encoded_params = urlencode(params_dict)
        url = urlunparse(
            (
                parsed_url.scheme,
                parsed_url.netloc,
                parsed_url.path,
                parsed_url.params,
                encoded_params,
                parsed_url.fragment,
            )
        )
        return url

    def run(
        self,
        messages: List[Dict],
        user_id: str,
        model_kwargs: Optional[dict] = None,
        streaming: bool = False,
    ) -> None:
        self.websocket_client.enableTrace(False)
        ws = self.websocket_client.WebSocketApp(
            self.ws_url,
            on_message=self.on_message,
            on_error=self.on_error,
            on_close=self.on_close,
            on_open=self.on_open,
        )
        ws.messages = messages
        ws.user_id = user_id
        ws.model_kwargs = self.model_kwargs if model_kwargs is None else model_kwargs
        ws.streaming = streaming
        ws.run_forever()

    def arun(
        self,
        messages: List[Dict],
        user_id: str,
        model_kwargs: Optional[dict] = None,
        streaming: bool = False,
    ) -> threading.Thread:
        ws_thread = threading.Thread(
            target=self.run,
            args=(
                messages,
                user_id,
                model_kwargs,
                streaming,
            ),
        )
        ws_thread.start()
        return ws_thread

    def on_error(self, ws: Any, error: Optional[Any]) -> None:
        self.queue.put({"error": error})
        ws.close()

    def on_close(self, ws: Any, close_status_code: int, close_reason: str) -> None:
        logger.debug(
            {
                "log": {
                    "close_status_code": close_status_code,
                    "close_reason": close_reason,
                }
            }
        )
        self.queue.put({"done": True})

    def on_open(self, ws: Any) -> None:
        self.blocking_message = {"content": "", "role": "assistant"}
        data = json.dumps(
            self.gen_params(
                messages=ws.messages, user_id=ws.user_id, model_kwargs=ws.model_kwargs
            )
        )
        ws.send(data)

    def on_message(self, ws: Any, message: str) -> None:
        data = json.loads(message)
        code = data["header"]["code"]
        if code != 0:
            self.queue.put(
                {"error": f"Code: {code}, Error: {data['header']['message']}"}
            )
            ws.close()
        else:
            choices = data["payload"]["choices"]
            status = choices["status"]
            content = choices["text"][0]["content"]
            if ws.streaming:
                self.queue.put({"data": choices["text"][0]})
            else:
                self.blocking_message["content"] += content
            if status == 2:
                if not ws.streaming:
                    self.queue.put({"data": self.blocking_message})
                usage_data = (
                    data.get("payload", {}).get("usage", {}).get("text", {})
                    if data
                    else {}
                )
                self.queue.put({"usage": usage_data})
                ws.close()

    def gen_params(
        self, messages: list, user_id: str, model_kwargs: Optional[dict] = None
    ) -> dict:
        data: Dict = {
            "header": {"app_id": self.app_id, "uid": user_id},
            "parameter": {"chat": {"domain": self.spark_domain}},
            "payload": {"message": {"text": messages}},
        }

        if model_kwargs:
            data["parameter"]["chat"].update(model_kwargs)
        logger.debug(f"Spark Request Parameters: {data}")
        return data

    def subscribe(self, timeout: Optional[int] = 30) -> Generator[Dict, None, None]:
        while True:
            try:
                content = self.queue.get(timeout=timeout)
            except queue.Empty:
                raise TimeoutError(
                    f"SparkLLMClient wait LLM api response timeout {timeout} seconds"
                )
            if "error" in content:
                raise ConnectionError(content["error"])
            if "usage" in content:
                yield content
                continue
            if "done" in content:
                break
            if "data" not in content:
                break
            yield content
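The `_create_url` method above signs each WebSocket request with an RFC 1123 date and an HMAC-SHA256 over a `host`/`date`/request-line string, then base64-wraps the whole authorization header into a query parameter. The signature derivation can be checked in isolation; this is an illustrative sketch of that step, not an import from the new file:

```python
import base64
import hashlib
import hmac
from urllib.parse import urlparse


def spark_authorization(api_url: str, api_key: str, api_secret: str, date: str) -> str:
    """Re-derive the base64 `authorization` query parameter for fixed inputs."""
    parsed = urlparse(api_url)
    # string to sign: host, date, and the HTTP request line
    origin = f"host: {parsed.netloc}\ndate: {date}\nGET {parsed.path} HTTP/1.1"
    digest = hmac.new(api_secret.encode("utf-8"), origin.encode("utf-8"),
                      hashlib.sha256).digest()
    signature = base64.b64encode(digest).decode("utf-8")
    auth = (f'api_key="{api_key}", algorithm="hmac-sha256", '
            f'headers="host date request-line", signature="{signature}"')
    return base64.b64encode(auth.encode("utf-8")).decode("utf-8")
```

Because the date is part of the signed string, each generated URL is only valid for a short window around the timestamp it embeds.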
@@ -207,6 +207,7 @@ from langchain_community.document_loaders.unstructured import (
from langchain_community.document_loaders.url import UnstructuredURLLoader
from langchain_community.document_loaders.url_playwright import PlaywrightURLLoader
from langchain_community.document_loaders.url_selenium import SeleniumURLLoader
from langchain_community.document_loaders.vsdx import VsdxLoader
from langchain_community.document_loaders.weather import WeatherDataLoader
from langchain_community.document_loaders.web_base import WebBaseLoader
from langchain_community.document_loaders.whatsapp_chat import WhatsAppChatLoader
@@ -394,6 +395,7 @@ __all__ = [
    "UnstructuredURLLoader",
    "UnstructuredWordDocumentLoader",
    "UnstructuredXMLLoader",
    "VsdxLoader",
    "WeatherDataLoader",
    "WebBaseLoader",
    "WhatsAppChatLoader",

@@ -13,6 +13,7 @@ from langchain_community.document_loaders.parsers.pdf import (
    PyPDFium2Parser,
    PyPDFParser,
)
from langchain_community.document_loaders.parsers.vsdx import VsdxParser

__all__ = [
    "AzureAIDocumentIntelligenceParser",
@@ -26,4 +27,5 @@ __all__ = [
    "PyMuPDFParser",
    "PyPDFium2Parser",
    "PyPDFParser",
    "VsdxParser",
]
@@ -59,19 +59,20 @@ class GrobidParser(BaseBlobParser):
                for i, sentence in enumerate(paragraph.find_all("s")):
                    paragraph_text.append(sentence.text)
                    sbboxes = []
                    for bbox in sentence.get("coords").split(";"):
                        box = bbox.split(",")
                        sbboxes.append(
                            {
                                "page": box[0],
                                "x": box[1],
                                "y": box[2],
                                "h": box[3],
                                "w": box[4],
                            }
                        )
                    chunk_bboxes.append(sbboxes)
                    if segment_sentences is True:
                    if sentence.get("coords") is not None:
                        for bbox in sentence.get("coords").split(";"):
                            box = bbox.split(",")
                            sbboxes.append(
                                {
                                    "page": box[0],
                                    "x": box[1],
                                    "y": box[2],
                                    "h": box[3],
                                    "w": box[4],
                                }
                            )
                        chunk_bboxes.append(sbboxes)
                    if (segment_sentences is True) and (len(sbboxes) > 0):
                        fpage, lpage = sbboxes[0]["page"], sbboxes[-1]["page"]
                        sentence_dict = {
                            "text": sentence.text,

@@ -97,6 +97,8 @@ class LanguageParser(BaseBlobParser):
            language: If None (default), it will try to infer language from source.
            parser_threshold: Minimum lines needed to activate parsing (0 by default).
        """
        if language and language not in LANGUAGE_SEGMENTERS:
            raise Exception(f"No parser available for {language}")
        self.language = language
        self.parser_threshold = parser_threshold
@@ -0,0 +1,205 @@
import json
import re
import zipfile
from abc import ABC
from pathlib import Path
from typing import Iterator, List, Set, Tuple

from langchain_community.docstore.document import Document
from langchain_community.document_loaders.base import BaseBlobParser
from langchain_community.document_loaders.blob_loaders import Blob


class VsdxParser(BaseBlobParser, ABC):
    def parse(self, blob: Blob) -> Iterator[Document]:
        """Parse a vsdx file."""
        return self.lazy_parse(blob)

    def lazy_parse(self, blob: Blob) -> Iterator[Document]:
        """Retrieve the contents of pages from a .vsdx file
        and insert them into documents, one document per page."""

        with blob.as_bytes_io() as pdf_file_obj:
            with zipfile.ZipFile(pdf_file_obj, "r") as zfile:
                pages = self.get_pages_content(zfile, blob.source)

        yield from [
            Document(
                page_content=page_content,
                metadata={
                    "source": blob.source,
                    "page": page_number,
                    "page_name": page_name,
                },
            )
            for page_number, page_name, page_content in pages
        ]

    def get_pages_content(
        self, zfile: zipfile.ZipFile, source: str
    ) -> List[Tuple[int, str, str]]:
        """Get the content of the pages of a vsdx file.

        Attributes:
            zfile (zipfile.ZipFile): The vsdx file under zip format.
            source (str): The path of the vsdx file.

        Returns:
            list[tuple[int, str, str]]: A list of tuples containing the page number,
            the name of the page and the content of the page
            for each page of the vsdx file.
        """

        try:
            import xmltodict
        except ImportError:
            raise ImportError(
                "The xmltodict library is required to parse vsdx files. "
                "Please install it with `pip install xmltodict`."
            )

        if "visio/pages/pages.xml" not in zfile.namelist():
            print("WARNING - No pages.xml file found in {}".format(source))
            return
        if "visio/pages/_rels/pages.xml.rels" not in zfile.namelist():
            print("WARNING - No pages.xml.rels file found in {}".format(source))
            return
        if "docProps/app.xml" not in zfile.namelist():
            print("WARNING - No app.xml file found in {}".format(source))
            return

        pagesxml_content: dict = xmltodict.parse(zfile.read("visio/pages/pages.xml"))
        appxml_content: dict = xmltodict.parse(zfile.read("docProps/app.xml"))
        pagesxmlrels_content: dict = xmltodict.parse(
            zfile.read("visio/pages/_rels/pages.xml.rels")
        )

        if isinstance(pagesxml_content["Pages"]["Page"], list):
            disordered_names: List[str] = [
                rel["@Name"].strip() for rel in pagesxml_content["Pages"]["Page"]
            ]
        else:
            disordered_names: List[str] = [
                pagesxml_content["Pages"]["Page"]["@Name"].strip()
            ]
        if isinstance(pagesxmlrels_content["Relationships"]["Relationship"], list):
            disordered_paths: List[str] = [
                "visio/pages/" + rel["@Target"]
                for rel in pagesxmlrels_content["Relationships"]["Relationship"]
            ]
        else:
            disordered_paths: List[str] = [
                "visio/pages/"
                + pagesxmlrels_content["Relationships"]["Relationship"]["@Target"]
            ]
        ordered_names: List[str] = appxml_content["Properties"]["TitlesOfParts"][
            "vt:vector"
        ]["vt:lpstr"][: len(disordered_names)]
        ordered_names = [name.strip() for name in ordered_names]
        ordered_paths = [
            disordered_paths[disordered_names.index(name.strip())]
            for name in ordered_names
        ]

        # Pages out of order and without content of their relationships
        disordered_pages = []
        for path in ordered_paths:
            content = zfile.read(path)
            string_content = json.dumps(xmltodict.parse(content))

            samples = re.findall(
                r'"#text"\s*:\s*"([^\\"]*(?:\\.[^\\"]*)*)"', string_content
            )
            if len(samples) > 0:
                page_content = "\n".join(samples)
                map_symboles = {
                    "\\n": "\n",
                    "\\t": "\t",
                    "\\u2013": "-",
                    "\\u2019": "'",
                    "\\u00e9r": "é",
                    "\\u00f4me": "ô",
                }
                for key, value in map_symboles.items():
                    page_content = page_content.replace(key, value)
|
||||
|
||||
disordered_pages.append({"page": path, "page_content": page_content})
|
||||
|
||||
# Direct relationships of each page in a dict format
|
||||
pagexml_rels = [
|
||||
{
|
||||
"path": page_path,
|
||||
"content": xmltodict.parse(
|
||||
zfile.read(f"visio/pages/_rels/{Path(page_path).stem}.xml.rels")
|
||||
),
|
||||
}
|
||||
for page_path in ordered_paths
|
||||
if f"visio/pages/_rels/{Path(page_path).stem}.xml.rels" in zfile.namelist()
|
||||
]
|
||||
|
||||
# Pages in order and with content of their relationships (direct and indirect)
|
||||
ordered_pages: List[Tuple[int, str, str]] = []
|
||||
for page_number, (path, page_name) in enumerate(
|
||||
zip(ordered_paths, ordered_names)
|
||||
):
|
||||
relationships = self.get_relationships(
|
||||
path, zfile, ordered_paths, pagexml_rels
|
||||
)
|
||||
page_content = "\n".join(
|
||||
[
|
||||
page_["page_content"]
|
||||
for page_ in disordered_pages
|
||||
if page_["page"] in relationships
|
||||
]
|
||||
+ [
|
||||
page_["page_content"]
|
||||
for page_ in disordered_pages
|
||||
if page_["page"] == path
|
||||
]
|
||||
)
|
||||
ordered_pages.append((page_number, page_name, page_content))
|
||||
|
||||
return ordered_pages
|
||||
|
||||
def get_relationships(
|
||||
self,
|
||||
page: str,
|
||||
zfile: zipfile.ZipFile,
|
||||
filelist: List[str],
|
||||
pagexml_rels: List[dict],
|
||||
) -> Set[str]:
|
||||
"""Get the relationships of a page and the relationships of its relationships,
|
||||
etc... recursively.
|
||||
Pages are based on other pages (ex: background page),
|
||||
so we need to get all the relationships to get all the content of a single page.
|
||||
"""
|
||||
|
||||
name_path = Path(page).name
|
||||
parent_path = Path(page).parent
|
||||
rels_path = parent_path / f"_rels/{name_path}.rels"
|
||||
|
||||
if str(rels_path) not in zfile.namelist():
|
||||
return set()
|
||||
|
||||
pagexml_rels_content = next(
|
||||
page_["content"] for page_ in pagexml_rels if page_["path"] == page
|
||||
)
|
||||
|
||||
if isinstance(pagexml_rels_content["Relationships"]["Relationship"], list):
|
||||
targets = [
|
||||
rel["@Target"]
|
||||
for rel in pagexml_rels_content["Relationships"]["Relationship"]
|
||||
]
|
||||
else:
|
||||
targets = [pagexml_rels_content["Relationships"]["Relationship"]["@Target"]]
|
||||
|
||||
relationships = set(
|
||||
[str(parent_path / target) for target in targets]
|
||||
).intersection(filelist)
|
||||
|
||||
for rel in relationships:
|
||||
relationships = relationships | self.get_relationships(
|
||||
rel, zfile, filelist, pagexml_rels
|
||||
)
|
||||
|
||||
return relationships
|
||||
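The name-index lookup that reorders page paths can be sketched in isolation; the sample names and paths below are made up for illustration, not taken from a real file:

```python
# Reorder relationship targets to match the page order taken from app.xml,
# mirroring the index lookup in VsdxParser.get_pages_content.
disordered_names = ["Background", "Page-1"]
disordered_paths = ["visio/pages/page2.xml", "visio/pages/page1.xml"]
ordered_names = ["Page-1", "Background"]

ordered_paths = [
    disordered_paths[disordered_names.index(name.strip())]
    for name in ordered_names
]
print(ordered_paths)  # ['visio/pages/page1.xml', 'visio/pages/page2.xml']
```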
@@ -46,8 +46,6 @@ class SurrealDBLoader(BaseLoader):
        self.sdb = Surreal(self.dburl)
        self.kwargs = kwargs

        asyncio.run(self.initialize())

    async def initialize(self) -> None:
        """
        Initialize connection to surrealdb database
@@ -170,7 +170,13 @@ class UnstructuredFileLoader(UnstructuredBaseLoader):
    def _get_elements(self) -> List:
        from unstructured.partition.auto import partition

        return partition(filename=self.file_path, **self.unstructured_kwargs)
        if isinstance(self.file_path, list):
            elements = []
            for file in self.file_path:
                elements.extend(partition(filename=file, **self.unstructured_kwargs))
            return elements
        else:
            return partition(filename=self.file_path, **self.unstructured_kwargs)

    def _get_metadata(self) -> dict:
        return {"source": self.file_path}
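The branching this hunk adds (single path vs. list of paths) can be sketched with a stand-in partition function; `fake_partition` below is hypothetical, standing in for `unstructured.partition.auto.partition`:

```python
from typing import List, Union

def fake_partition(filename: str) -> List[str]:
    # Hypothetical stand-in for unstructured.partition.auto.partition.
    return [f"element-from-{filename}"]

def get_elements(file_path: Union[str, List[str]]) -> List[str]:
    # Mirrors the patched _get_elements: accept one path or a list of paths
    # and flatten all partitioned elements into a single list.
    if isinstance(file_path, list):
        elements: List[str] = []
        for file in file_path:
            elements.extend(fake_partition(filename=file))
        return elements
    return fake_partition(filename=file_path)

print(get_elements(["a.txt", "b.txt"]))  # ['element-from-a.txt', 'element-from-b.txt']
```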
libs/community/langchain_community/document_loaders/vsdx.py (new file, 53 lines)
@@ -0,0 +1,53 @@
import os
import tempfile
from abc import ABC
from typing import List
from urllib.parse import urlparse

import requests

from langchain_community.docstore.document import Document
from langchain_community.document_loaders.base import BaseLoader
from langchain_community.document_loaders.blob_loaders import Blob
from langchain_community.document_loaders.parsers import VsdxParser


class VsdxLoader(BaseLoader, ABC):
    def __init__(self, file_path: str):
        """Initialize with file path."""
        self.file_path = file_path
        if "~" in self.file_path:
            self.file_path = os.path.expanduser(self.file_path)

        # If the file is a web path, download it to a temporary file, and use that
        if not os.path.isfile(self.file_path) and self._is_valid_url(self.file_path):
            r = requests.get(self.file_path)

            if r.status_code != 200:
                raise ValueError(
                    "Check the url of your file; returned status code %s"
                    % r.status_code
                )

            self.web_path = self.file_path
            self.temp_file = tempfile.NamedTemporaryFile()
            self.temp_file.write(r.content)
            self.file_path = self.temp_file.name
        elif not os.path.isfile(self.file_path):
            raise ValueError("File path %s is not a valid file or url" % self.file_path)

        self.parser = VsdxParser()

    def __del__(self) -> None:
        if hasattr(self, "temp_file"):
            self.temp_file.close()

    @staticmethod
    def _is_valid_url(url: str) -> bool:
        """Check if the url is valid."""
        parsed = urlparse(url)
        return bool(parsed.netloc) and bool(parsed.scheme)

    def load(self) -> List[Document]:
        blob = Blob.from_path(self.file_path)
        return list(self.parser.parse(blob))
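The URL check used by the loader is self-contained and can be exercised directly; the sample URLs below are illustrative:

```python
from urllib.parse import urlparse

def is_valid_url(url: str) -> bool:
    # Same check as VsdxLoader._is_valid_url: require both a scheme and a host.
    parsed = urlparse(url)
    return bool(parsed.netloc) and bool(parsed.scheme)

print(is_valid_url("https://example.com/diagram.vsdx"))  # True
print(is_valid_url("/tmp/diagram.vsdx"))                 # False
```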
@@ -57,7 +57,10 @@ from langchain_community.embeddings.llamacpp import LlamaCppEmbeddings
from langchain_community.embeddings.llm_rails import LLMRailsEmbeddings
from langchain_community.embeddings.localai import LocalAIEmbeddings
from langchain_community.embeddings.minimax import MiniMaxEmbeddings
from langchain_community.embeddings.mlflow import MlflowEmbeddings
from langchain_community.embeddings.mlflow import (
    MlflowCohereEmbeddings,
    MlflowEmbeddings,
)
from langchain_community.embeddings.mlflow_gateway import MlflowAIGatewayEmbeddings
from langchain_community.embeddings.modelscope_hub import ModelScopeEmbeddings
from langchain_community.embeddings.mosaicml import MosaicMLInstructorEmbeddings
@@ -102,6 +105,7 @@ __all__ = [
    "LLMRailsEmbeddings",
    "HuggingFaceHubEmbeddings",
    "MlflowEmbeddings",
    "MlflowCohereEmbeddings",
    "MlflowAIGatewayEmbeddings",
    "ModelScopeEmbeddings",
    "TensorflowHubEmbeddings",
@@ -3,6 +3,7 @@ import json
import os
from typing import Any, Dict, List, Optional

import numpy as np
from langchain_core.embeddings import Embeddings
from langchain_core.pydantic_v1 import BaseModel, Extra, root_validator
from langchain_core.runnables.config import run_in_executor
@@ -64,6 +65,9 @@ class BedrockEmbeddings(BaseModel, Embeddings):
    endpoint_url: Optional[str] = None
    """Needed if you don't want to default to us-east-1 endpoint"""

    normalize: bool = False
    """Whether the embeddings should be normalized to unit vectors"""

    class Config:
        """Configuration for this pydantic object."""

@@ -145,6 +149,12 @@ class BedrockEmbeddings(BaseModel, Embeddings):
        except Exception as e:
            raise ValueError(f"Error raised by inference endpoint: {e}")

    def _normalize_vector(self, embeddings: List[float]) -> List[float]:
        """Normalize the embedding to a unit vector."""
        emb = np.array(embeddings)
        norm_emb = emb / np.linalg.norm(emb)
        return norm_emb.tolist()

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        """Compute doc embeddings using a Bedrock model.

@@ -157,7 +167,12 @@ class BedrockEmbeddings(BaseModel, Embeddings):
        results = []
        for text in texts:
            response = self._embedding_func(text)

            if self.normalize:
                response = self._normalize_vector(response)

            results.append(response)

        return results

    def embed_query(self, text: str) -> List[float]:
@@ -169,7 +184,12 @@ class BedrockEmbeddings(BaseModel, Embeddings):
        Returns:
            Embeddings for the text.
        """
        return self._embedding_func(text)
        embedding = self._embedding_func(text)

        if self.normalize:
            return self._normalize_vector(embedding)

        return embedding

    async def aembed_query(self, text: str) -> List[float]:
        """Asynchronous compute query embeddings using a Bedrock model.
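The normalization added here is plain L2 scaling; a standalone sketch of the same arithmetic (input vector chosen so the result is easy to check by hand):

```python
import numpy as np

def normalize_vector(embedding):
    # Same arithmetic as BedrockEmbeddings._normalize_vector:
    # divide by the L2 norm so the result has unit length.
    emb = np.array(embedding)
    return (emb / np.linalg.norm(emb)).tolist()

print(normalize_vector([3.0, 4.0]))  # [0.6, 0.8]
```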
@@ -1,6 +1,6 @@
from __future__ import annotations

from typing import Any, Iterator, List
from typing import Any, Dict, Iterator, List
from urllib.parse import urlparse

from langchain_core.embeddings import Embeddings
@@ -34,6 +34,10 @@ class MlflowEmbeddings(Embeddings, BaseModel):
    target_uri: str
    """The target URI to use."""
    _client: Any = PrivateAttr()
    """The parameters to use for queries."""
    query_params: Dict[str, str] = {}
    """The parameters to use for documents."""
    documents_params: Dict[str, str] = {}

    def __init__(self, **kwargs: Any):
        super().__init__(**kwargs)
@@ -63,12 +67,22 @@ class MlflowEmbeddings(Embeddings, BaseModel):
            f"The scheme must be one of {allowed}."
        )

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
    def embed(self, texts: List[str], params: Dict[str, str]) -> List[List[float]]:
        embeddings: List[List[float]] = []
        for txt in _chunk(texts, 20):
            resp = self._client.predict(endpoint=self.endpoint, inputs={"input": txt})
            resp = self._client.predict(
                endpoint=self.endpoint, inputs={"input": txt, **params}
            )
            embeddings.extend(r["embedding"] for r in resp["data"])
        return embeddings

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        return self.embed(texts, params=self.documents_params)

    def embed_query(self, text: str) -> List[float]:
        return self.embed_documents([text])[0]
        return self.embed([text], params=self.query_params)[0]


class MlflowCohereEmbeddings(MlflowEmbeddings):
    query_params: Dict[str, str] = {"input_type": "search_query"}
    documents_params: Dict[str, str] = {"input_type": "search_document"}
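`embed` batches inputs 20 at a time via `_chunk`; that helper's body is not shown in this diff, so the version below is a hypothetical stand-in with the behavior the call site implies:

```python
from typing import Iterator, List

def chunk(texts: List[str], size: int) -> Iterator[List[str]]:
    # Hypothetical stand-in for the `_chunk` helper used by MlflowEmbeddings:
    # yield successive slices of at most `size` items.
    for i in range(0, len(texts), size):
        yield texts[i : i + size]

batch_sizes = [len(b) for b in chunk([f"doc-{i}" for i in range(45)], 20)]
print(batch_sizes)  # [20, 20, 5]
```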
@@ -30,6 +30,8 @@ class VertexAIEmbeddings(_VertexAICommon, Embeddings):

    # Instance context
    instance: Dict[str, Any] = {}  #: :meta private:
    show_progress_bar: bool = False
    """Whether to show a tqdm progress bar. Must have `tqdm` installed."""

    @root_validator()
    def validate_environment(cls, values: Dict) -> Dict:
@@ -302,7 +304,20 @@ class VertexAIEmbeddings(_VertexAICommon, Embeddings):
        # In such case, batches have texts that were not processed yet.
        embeddings.extend(first_batch_result)
        tasks = []
        for batch in batches:
        if self.show_progress_bar:
            try:
                from tqdm import tqdm

                iter_ = tqdm(batches, desc="VertexAIEmbeddings")
            except ImportError:
                logger.warning(
                    "Unable to show progress bar because tqdm could not be imported. "
                    "Please install with `pip install tqdm`."
                )
                iter_ = batches
        else:
            iter_ = batches
        for batch in iter_:
            tasks.append(
                self.instance["task_executor"].submit(
                    self._get_embeddings_with_retry,
@@ -5,8 +5,8 @@ import logging
from typing import Any, Callable, Dict, List

from langchain_core.embeddings import Embeddings
from langchain_core.pydantic_v1 import BaseModel, root_validator
from langchain_core.utils import get_from_dict_or_env
from langchain_core.pydantic_v1 import BaseModel, SecretStr, root_validator
from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env
from tenacity import (
    before_sleep_log,
    retry,
@@ -41,18 +41,16 @@ class YandexGPTEmbeddings(BaseModel, Embeddings):
        embeddings = YandexGPTEmbeddings(iam_token="t1.9eu...", model_uri="emb://<folder-id>/text-search-query/latest")
    """

    iam_token: str = ""
    iam_token: SecretStr = ""
    """Yandex Cloud IAM token for service account
    with the `ai.languageModels.user` role"""
    api_key: str = ""
    api_key: SecretStr = ""
    """Yandex Cloud Api Key for service account
    with the `ai.languageModels.user` role"""
    model_uri: str = ""
    """Model uri to use."""
    folder_id: str = ""
    """Yandex Cloud folder ID"""
    model_uri: str = ""
    """Model uri to use."""
    model_name: str = "text-search-query"
    """Model name to use."""
    model_version: str = "latest"
@@ -66,23 +64,27 @@ class YandexGPTEmbeddings(BaseModel, Embeddings):
    def validate_environment(cls, values: Dict) -> Dict:
        """Validate that iam token exists in environment."""

        iam_token = get_from_dict_or_env(values, "iam_token", "YC_IAM_TOKEN", "")
        iam_token = convert_to_secret_str(
            get_from_dict_or_env(values, "iam_token", "YC_IAM_TOKEN", "")
        )
        values["iam_token"] = iam_token
        api_key = get_from_dict_or_env(values, "api_key", "YC_API_KEY", "")
        api_key = convert_to_secret_str(
            get_from_dict_or_env(values, "api_key", "YC_API_KEY", "")
        )
        values["api_key"] = api_key
        folder_id = get_from_dict_or_env(values, "folder_id", "YC_FOLDER_ID", "")
        values["folder_id"] = folder_id
        if api_key == "" and iam_token == "":
        if api_key.get_secret_value() == "" and iam_token.get_secret_value() == "":
            raise ValueError("Either 'YC_API_KEY' or 'YC_IAM_TOKEN' must be provided.")
        if values["iam_token"]:
            values["_grpc_metadata"] = [
                ("authorization", f"Bearer {values['iam_token']}")
                ("authorization", f"Bearer {values['iam_token'].get_secret_value()}")
            ]
            if values["folder_id"]:
                values["_grpc_metadata"].append(("x-folder-id", values["folder_id"]))
        else:
            values["_grpc_metadata"] = (
                ("authorization", f"Api-Key {values['api_key']}"),
                ("authorization", f"Api-Key {values['api_key'].get_secret_value()}"),
            )
        if values["model_uri"] == "" and values["folder_id"] == "":
            raise ValueError("Either 'model_uri' or 'folder_id' must be provided.")
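The point of switching these fields to `SecretStr` is that printing or logging the model no longer leaks the credential, while `get_secret_value()` still recovers it for building the auth header. A minimal sketch of that masking behavior (`MaskedSecret` is a toy class written here for illustration, not the pydantic implementation):

```python
class MaskedSecret:
    # Minimal sketch of the SecretStr behavior relied on above: str() masks
    # the value, get_secret_value() returns it for building auth headers.
    def __init__(self, value: str) -> None:
        self._value = value

    def __str__(self) -> str:
        return "**********" if self._value else ""

    def get_secret_value(self) -> str:
        return self._value

token = MaskedSecret("t1.9eu-example")
print(str(token))                            # **********
print(f"Bearer {token.get_secret_value()}")  # Bearer t1.9eu-example
```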
@@ -10,6 +10,7 @@ from langchain_community.graphs.neo4j_graph import Neo4jGraph
from langchain_community.graphs.neptune_graph import NeptuneGraph
from langchain_community.graphs.networkx_graph import NetworkxEntityGraph
from langchain_community.graphs.rdf_graph import RdfGraph
from langchain_community.graphs.tigergraph_graph import TigerGraph

__all__ = [
    "MemgraphGraph",
@@ -22,4 +23,5 @@ __all__ = [
    "RdfGraph",
    "ArangoGraph",
    "FalkorDBGraph",
    "TigerGraph",
]
@@ -1,12 +1,6 @@
from langchain_community.graphs.neo4j_graph import Neo4jGraph

SCHEMA_QUERY = """
CALL llm_util.schema("prompt_ready")
YIELD *
RETURN *
"""

RAW_SCHEMA_QUERY = """
CALL llm_util.schema("raw")
YIELD *
RETURN *
@@ -39,10 +33,39 @@ class MemgraphGraph(Neo4jGraph):
        Refreshes the Memgraph graph schema information.
        """

        db_schema = self.query(SCHEMA_QUERY)[0].get("schema")
        assert db_schema is not None
        self.schema = db_schema

        db_structured_schema = self.query(RAW_SCHEMA_QUERY)[0].get("schema")
        db_structured_schema = self.query(SCHEMA_QUERY)[0].get("schema")
        assert db_structured_schema is not None
        self.structured_schema = db_structured_schema

        # Format node properties
        formatted_node_props = []

        for node_name, properties in db_structured_schema["node_props"].items():
            formatted_node_props.append(
                f"Node name: '{node_name}', Node properties: {properties}"
            )

        # Format relationship properties
        formatted_rel_props = []
        for rel_name, properties in db_structured_schema["rel_props"].items():
            formatted_rel_props.append(
                f"Relationship name: '{rel_name}', "
                f"Relationship properties: {properties}"
            )

        # Format relationships
        formatted_rels = [
            f"(:{rel['start']})-[:{rel['type']}]->(:{rel['end']})"
            for rel in db_structured_schema["relationships"]
        ]

        self.schema = "\n".join(
            [
                "Node properties are the following:",
                *formatted_node_props,
                "Relationship properties are the following:",
                *formatted_rel_props,
                "The relationships are the following:",
                *formatted_rels,
            ]
        )
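The schema formatting above is pure string assembly and can be run against sample data; the structured-schema dict below is hypothetical, shaped like what `refresh_schema` reads from the raw query result:

```python
# Hypothetical structured-schema data in the shape refresh_schema expects,
# run through the same formatting logic as MemgraphGraph.
structured = {
    "node_props": {"Person": ["name"]},
    "rel_props": {"KNOWS": ["since"]},
    "relationships": [{"start": "Person", "type": "KNOWS", "end": "Person"}],
}

schema = "\n".join(
    ["Node properties are the following:"]
    + [
        f"Node name: '{n}', Node properties: {p}"
        for n, p in structured["node_props"].items()
    ]
    + ["Relationship properties are the following:"]
    + [
        f"Relationship name: '{r}', Relationship properties: {p}"
        for r, p in structured["rel_props"].items()
    ]
    + ["The relationships are the following:"]
    + [
        f"(:{rel['start']})-[:{rel['type']}]->(:{rel['end']})"
        for rel in structured["relationships"]
    ]
)
print(schema.splitlines()[-1])  # (:Person)-[:KNOWS]->(:Person)
```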
@@ -160,7 +160,7 @@ class Neo4jGraph(GraphStore):
            data = session.run(Query(text=query, timeout=self.timeout), params)
            json_data = [r.data() for r in data]
            if self.sanitize:
                json_data = value_sanitize(json_data)
                json_data = [value_sanitize(el) for el in json_data]
            return json_data
        except CypherSyntaxError as e:
            raise ValueError(f"Generated Cypher Statement is not valid\n{e}")
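The fix here is applying the sanitizer per record instead of to the whole result list. A sketch of that shape, with `sanitize_record` as a hypothetical stand-in for `value_sanitize` (which, among other things, drops oversized list values such as embeddings):

```python
def sanitize_record(record: dict) -> dict:
    # Hypothetical stand-in for value_sanitize: drop long list values
    # (e.g. embeddings) from a single result record.
    return {
        k: v
        for k, v in record.items()
        if not (isinstance(v, list) and len(v) > 2)
    }

json_data = [{"name": "a", "embedding": [0.1, 0.2, 0.3]}, {"name": "b"}]
json_data = [sanitize_record(el) for el in json_data]
print(json_data)  # [{'name': 'a'}, {'name': 'b'}]
```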
@@ -0,0 +1,94 @@
from typing import Any, Dict, List, Optional

from langchain_community.graphs.graph_store import GraphStore


class TigerGraph(GraphStore):
    """TigerGraph wrapper for graph operations.

    *Security note*: Make sure that the database connection uses credentials
    that are narrowly-scoped to only include necessary permissions.
    Failure to do so may result in data corruption or loss, since the calling
    code may attempt commands that would result in deletion, mutation
    of data if appropriately prompted or reading sensitive data if such
    data is present in the database.
    The best way to guard against such negative outcomes is to (as appropriate)
    limit the permissions granted to the credentials used with this tool.

    See https://python.langchain.com/docs/security for more information.
    """

    def __init__(self, conn: Any) -> None:
        """Create a new TigerGraph graph wrapper instance."""
        self.set_connection(conn)
        self.set_schema()

    @property
    def conn(self) -> Any:
        return self._conn

    @property
    def schema(self) -> Dict[str, Any]:
        return self._schema

    def get_schema(self) -> str:
        if self._schema:
            return str(self._schema)
        else:
            self.set_schema()
            return str(self._schema)

    def set_connection(self, conn: Any) -> None:
        from pyTigerGraph import TigerGraphConnection

        if not isinstance(conn, TigerGraphConnection):
            msg = "**conn** parameter must inherit from TigerGraphConnection"
            raise TypeError(msg)

        if conn.ai.nlqs_host is None:
            msg = """**conn** parameter does not have nlqs_host parameter defined.
            Define hostname of NLQS service."""
            raise ConnectionError(msg)

        self._conn: TigerGraphConnection = conn
        self.set_schema()

    def set_schema(self, schema: Optional[Dict[str, Any]] = None) -> None:
        """
        Set the schema of the TigerGraph Database.
        Auto-generates Schema if **schema** is None.
        """
        self._schema = self.generate_schema() if schema is None else schema

    def generate_schema(
        self,
    ) -> Dict[str, List[Dict[str, Any]]]:
        """
        Generates the schema of the TigerGraph Database and returns it
        User can specify a **sample_ratio** (0 to 1) to determine the
        ratio of documents/edges used (in relation to the Collection size)
        to render each Collection Schema.
        """
        return self._conn.getSchema(force=True)

    def refresh_schema(self):
        self.generate_schema()

    def query(self, query: str) -> Dict[str, Any]:
        """Query the TigerGraph database."""
        answer = self._conn.ai.query(query)
        return answer

    def register_query(
        self,
        function_header: str,
        description: str,
        docstring: str,
        param_types: dict = {},
    ) -> List[str]:
        """
        Wrapper function to register a custom GSQL query to the TigerGraph NLQS.
        """
        return self._conn.ai.registerCustomQuery(
            function_header, description, docstring, param_types
        )
@@ -270,6 +270,12 @@ def _import_koboldai() -> Any:
    return KoboldApiLLM


def _import_konko() -> Any:
    from langchain_community.llms.konko import Konko

    return Konko


def _import_llamacpp() -> Any:
    from langchain_community.llms.llamacpp import LlamaCpp

@@ -639,6 +645,8 @@ def __getattr__(name: str) -> Any:
        return _import_javelin_ai_gateway()
    elif name == "KoboldApiLLM":
        return _import_koboldai()
    elif name == "Konko":
        return _import_konko()
    elif name == "LlamaCpp":
        return _import_llamacpp()
    elif name == "ManifestWrapper":
@@ -780,6 +788,7 @@ __all__ = [
    "HuggingFaceTextGenInference",
    "HumanInputLLM",
    "KoboldApiLLM",
    "Konko",
    "LlamaCpp",
    "TextGen",
    "ManifestWrapper",
@@ -868,6 +877,7 @@ def get_type_to_cls_dict() -> Dict[str, Callable[[], Type[BaseLLM]]]:
        "huggingface_textgen_inference": _import_huggingface_text_gen_inference,
        "human-input": _import_human,
        "koboldai": _import_koboldai,
        "konko": _import_konko,
        "llamacpp": _import_llamacpp,
        "textgen": _import_textgen,
        "minimax": _import_minimax,
@@ -2,12 +2,14 @@ import json
import urllib.request
import warnings
from abc import abstractmethod
from enum import Enum
from typing import Any, Dict, List, Mapping, Optional

from langchain_core.callbacks import CallbackManagerForLLMRun
from langchain_core.language_models.llms import LLM
from langchain_core.pydantic_v1 import BaseModel, validator
from langchain_core.utils import get_from_dict_or_env
from langchain_core.callbacks.manager import CallbackManagerForLLMRun
from langchain_core.language_models.llms import BaseLLM
from langchain_core.outputs import Generation, LLMResult
from langchain_core.pydantic_v1 import BaseModel, SecretStr, root_validator, validator
from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env


class AzureMLEndpointClient(object):
@@ -26,7 +28,12 @@ class AzureMLEndpointClient(object):
        self.endpoint_api_key = endpoint_api_key
        self.deployment_name = deployment_name

    def call(self, body: bytes, **kwargs: Any) -> bytes:
    def call(
        self,
        body: bytes,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> bytes:
        """call."""

        # The azureml-model-deployment header will force the request to go to a
@@ -45,6 +52,16 @@ class AzureMLEndpointClient(object):
        return result


class AzureMLEndpointApiType(str, Enum):
    """Azure ML endpoints API types. Use `realtime` for models deployed in hosted
    infrastructure, or `serverless` for models deployed as a service with a
    pay-as-you-go billing or PTU.
    """

    realtime = "realtime"
    serverless = "serverless"


class ContentFormatterBase:
    """Transform request and response of AzureML endpoint to match with
    required schema.
@@ -61,7 +78,8 @@ class ContentFormatterBase:
        def format_request_payload(
            self,
            prompt: str,
            model_kwargs: Dict
            model_kwargs: Dict,
            api_type: AzureMLEndpointApiType,
        ) -> bytes:
            input_str = json.dumps(
                {
@@ -71,7 +89,9 @@ class ContentFormatterBase:
            )
            return str.encode(input_str)

        def format_response_payload(self, output: str) -> str:
        def format_response_payload(
            self, output: str, api_type: AzureMLEndpointApiType
        ) -> str:
            response_json = json.loads(output)
            return response_json[0]["0"]
    """
@@ -81,6 +101,12 @@ class ContentFormatterBase:
    accepts: Optional[str] = "application/json"
    """The MIME type of the response data returned from the endpoint"""

    format_error_msg: Optional[str] = (
        "Error while formatting response payload for chat model of type "
        " `{api_type}`. Are you using the right formatter for the deployed "
        " model and endpoint type?"
    )

    @staticmethod
    def escape_special_characters(prompt: str) -> str:
        """Escapes any special characters in `prompt`"""
@@ -100,15 +126,32 @@ class ContentFormatterBase:

        return prompt

    @property
    def supported_api_types(self) -> List[AzureMLEndpointApiType]:
        """Supported APIs for the given formatter. Azure ML supports
        deploying models using different hosting methods. Each method may have
        a different API structure."""

        return [AzureMLEndpointApiType.realtime]

    @abstractmethod
    def format_request_payload(self, prompt: str, model_kwargs: Dict) -> bytes:
    def format_request_payload(
        self,
        prompt: str,
        model_kwargs: Dict,
        api_type: AzureMLEndpointApiType = AzureMLEndpointApiType.realtime,
    ) -> bytes:
        """Formats the request body according to the input schema of
        the model. Returns bytes or seekable file like object in the
        format specified in the content_type request header.
        """

    @abstractmethod
    def format_response_payload(self, output: bytes) -> str:
    def format_response_payload(
        self,
        output: bytes,
        api_type: AzureMLEndpointApiType = AzureMLEndpointApiType.realtime,
    ) -> Generation:
        """Formats the response body according to the output
        schema of the model. Returns the data type that is
        received from the response.
@@ -118,15 +161,27 @@ class ContentFormatterBase:

class GPT2ContentFormatter(ContentFormatterBase):
    """Content handler for GPT2"""

    def format_request_payload(self, prompt: str, model_kwargs: Dict) -> bytes:
    @property
    def supported_api_types(self) -> List[AzureMLEndpointApiType]:
        return [AzureMLEndpointApiType.realtime]

    def format_request_payload(
        self, prompt: str, model_kwargs: Dict, api_type: AzureMLEndpointApiType
    ) -> bytes:
        prompt = ContentFormatterBase.escape_special_characters(prompt)
        request_payload = json.dumps(
            {"inputs": {"input_string": [f'"{prompt}"']}, "parameters": model_kwargs}
        )
        return str.encode(request_payload)

    def format_response_payload(self, output: bytes) -> str:
        return json.loads(output)[0]["0"]
    def format_response_payload(
        self, output: bytes, api_type: AzureMLEndpointApiType
    ) -> Generation:
        try:
            choice = json.loads(output)[0]["0"]
        except (KeyError, IndexError, TypeError) as e:
            raise ValueError(self.format_error_msg.format(api_type=api_type)) from e
        return Generation(text=choice)


class OSSContentFormatter(GPT2ContentFormatter):
@@ -148,21 +203,39 @@ class OSSContentFormatter(GPT2ContentFormatter):
class HFContentFormatter(ContentFormatterBase):
    """Content handler for LLMs from the HuggingFace catalog."""

    def format_request_payload(self, prompt: str, model_kwargs: Dict) -> bytes:
    @property
    def supported_api_types(self) -> List[AzureMLEndpointApiType]:
        return [AzureMLEndpointApiType.realtime]

    def format_request_payload(
        self, prompt: str, model_kwargs: Dict, api_type: AzureMLEndpointApiType
    ) -> bytes:
        ContentFormatterBase.escape_special_characters(prompt)
        request_payload = json.dumps(
            {"inputs": [f'"{prompt}"'], "parameters": model_kwargs}
        )
        return str.encode(request_payload)

    def format_response_payload(self, output: bytes) -> str:
        return json.loads(output)[0]["generated_text"]
    def format_response_payload(
        self, output: bytes, api_type: AzureMLEndpointApiType
    ) -> Generation:
        try:
            choice = json.loads(output)[0]["0"]["generated_text"]
        except (KeyError, IndexError, TypeError) as e:
            raise ValueError(self.format_error_msg.format(api_type=api_type)) from e
        return Generation(text=choice)


class DollyContentFormatter(ContentFormatterBase):
    """Content handler for the Dolly-v2-12b model"""

    def format_request_payload(self, prompt: str, model_kwargs: Dict) -> bytes:
    @property
    def supported_api_types(self) -> List[AzureMLEndpointApiType]:
        return [AzureMLEndpointApiType.realtime]

    def format_request_payload(
        self, prompt: str, model_kwargs: Dict, api_type: AzureMLEndpointApiType
    ) -> bytes:
        prompt = ContentFormatterBase.escape_special_characters(prompt)
        request_payload = json.dumps(
            {
@@ -172,49 +245,88 @@ class DollyContentFormatter(ContentFormatterBase):
        )
        return str.encode(request_payload)

    def format_response_payload(self, output: bytes) -> str:
        return json.loads(output)[0]
    def format_response_payload(
        self, output: bytes, api_type: AzureMLEndpointApiType
    ) -> Generation:
        try:
            choice = json.loads(output)[0]
        except (KeyError, IndexError, TypeError) as e:
            raise ValueError(self.format_error_msg.format(api_type=api_type)) from e
        return Generation(text=choice)


class LlamaContentFormatter(ContentFormatterBase):
    """Content formatter for LLaMa"""

    def format_request_payload(self, prompt: str, model_kwargs: Dict) -> bytes:
    @property
    def supported_api_types(self) -> List[AzureMLEndpointApiType]:
        return [AzureMLEndpointApiType.realtime, AzureMLEndpointApiType.serverless]

    def format_request_payload(
        self, prompt: str, model_kwargs: Dict, api_type: AzureMLEndpointApiType
    ) -> bytes:
        """Formats the request according to the chosen api"""
        prompt = ContentFormatterBase.escape_special_characters(prompt)
        request_payload = json.dumps(
            {
                "input_data": {
                    "input_string": [f'"{prompt}"'],
                    "parameters": model_kwargs,
        if api_type == AzureMLEndpointApiType.realtime:
            request_payload = json.dumps(
                {
                    "input_data": {
                        "input_string": [f'"{prompt}"'],
                        "parameters": model_kwargs,
                    }
                }
                }
            )
        )
        elif api_type == AzureMLEndpointApiType.serverless:
            request_payload = json.dumps({"prompt": prompt, **model_kwargs})
        else:
            raise ValueError(
                f"`api_type` {api_type} is not supported by this formatter"
            )
return str.encode(request_payload)
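The two branches above produce differently shaped JSON bodies: a nested `input_data` envelope for real-time endpoints and a flat OpenAI-style body for serverless (pay-as-you-go) ones. A minimal standalone sketch of that dispatch, with hypothetical names, not the library's API:

```python
import json
from enum import Enum


class ApiType(str, Enum):
    # Mirrors AzureMLEndpointApiType, for illustration only
    realtime = "realtime"
    serverless = "serverless"


def build_llama_payload(prompt: str, model_kwargs: dict, api_type: ApiType) -> bytes:
    """Build the request body the formatter would send for each endpoint type."""
    if api_type == ApiType.realtime:
        body = {
            "input_data": {"input_string": [f'"{prompt}"'], "parameters": model_kwargs}
        }
    elif api_type == ApiType.serverless:
        # Serverless endpoints take a flat, OpenAI-completions-style body
        body = {"prompt": prompt, **model_kwargs}
    else:
        raise ValueError(f"`api_type` {api_type} is not supported")
    return json.dumps(body).encode()


rt = json.loads(build_llama_payload("Hi", {"temperature": 0.2}, ApiType.realtime))
sl = json.loads(build_llama_payload("Hi", {"temperature": 0.2}, ApiType.serverless))
print(sorted(rt))  # ['input_data']
print(sorted(sl))  # ['prompt', 'temperature']
```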
|
||||
|
||||
    def format_response_payload(self, output: bytes) -> str:
        """Formats response"""
        return json.loads(output)[0]["0"]
    def format_response_payload(
        self, output: bytes, api_type: AzureMLEndpointApiType
    ) -> Generation:
        """Formats response"""
        if api_type == AzureMLEndpointApiType.realtime:
            try:
                choice = json.loads(output)[0]["0"]
            except (KeyError, IndexError, TypeError) as e:
                raise ValueError(self.format_error_msg.format(api_type=api_type)) from e
            return Generation(text=choice)
        if api_type == AzureMLEndpointApiType.serverless:
            try:
                choice = json.loads(output)["choices"][0]
                if not isinstance(choice, dict):
                    raise TypeError(
                        "Endpoint response is not well formed for a chat "
                        f"model. Expected `dict` but `{type(choice)}` was "
                        "received."
                    )
            except (KeyError, IndexError, TypeError) as e:
                raise ValueError(self.format_error_msg.format(api_type=api_type)) from e
            return Generation(
                text=choice["text"].strip(),
                generation_info=dict(
                    finish_reason=choice.get("finish_reason"),
                    logprobs=choice.get("logprobs"),
                ),
            )
        raise ValueError(f"`api_type` {api_type} is not supported by this formatter")


class AzureMLOnlineEndpoint(LLM, BaseModel):
    """Azure ML Online Endpoint models.

    Example:
        .. code-block:: python

            azure_llm = AzureMLOnlineEndpoint(
                endpoint_url="https://<your-endpoint>.<your_region>.inference.ml.azure.com/score",
                endpoint_api_key="my-api-key",
                content_formatter=content_formatter,
            )
    """  # noqa: E501
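The serverless branch above unwraps an OpenAI-style `choices` array and keeps the finish reason and logprobs alongside the text. A standalone sketch of that parsing, returning plain dicts instead of `langchain_core` `Generation` objects:

```python
import json


def parse_serverless_response(output: bytes) -> dict:
    """Pull the first completion choice out of an OpenAI-style serverless reply."""
    choice = json.loads(output)["choices"][0]
    if not isinstance(choice, dict):
        raise TypeError(f"Expected `dict` but `{type(choice)}` was received.")
    return {
        "text": choice["text"].strip(),
        "finish_reason": choice.get("finish_reason"),
        "logprobs": choice.get("logprobs"),
    }


raw = json.dumps(
    {"choices": [{"text": " Hello there. ", "finish_reason": "stop"}]}
).encode()
print(parse_serverless_response(raw)["text"])  # Hello there.
```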
|
||||
|
||||
|
||||
class AzureMLBaseEndpoint(BaseModel):
    """Azure ML Online Endpoint models."""

    endpoint_url: str = ""
    """URL of pre-existing Endpoint. Should be passed to constructor or specified as
        env var `AZUREML_ENDPOINT_URL`."""

    endpoint_api_key: str = ""
    endpoint_api_type: AzureMLEndpointApiType = AzureMLEndpointApiType.realtime
    """Type of the endpoint being consumed. Possible values are `serverless` for
        pay-as-you-go and `realtime` for real-time endpoints. """

    endpoint_api_key: SecretStr = convert_to_secret_str("")
    """Authentication Key for Endpoint. Should be passed to constructor or specified as
        env var `AZUREML_ENDPOINT_API_KEY`."""

@@ -232,22 +344,106 @@ class AzureMLOnlineEndpoint(LLM, BaseModel):
    model_kwargs: Optional[dict] = None
    """Keyword arguments to pass to the model."""

    @validator("http_client", always=True, allow_reuse=True)
    @classmethod
    def validate_client(cls, field_value: Any, values: Dict) -> AzureMLEndpointClient:
        """Validate that api key and python package exists in environment."""
        endpoint_key = get_from_dict_or_env(
            values, "endpoint_api_key", "AZUREML_ENDPOINT_API_KEY"
    @root_validator(pre=True)
    def validate_environ(cls, values: Dict) -> Dict:
        values["endpoint_api_key"] = convert_to_secret_str(
            get_from_dict_or_env(values, "endpoint_api_key", "AZUREML_ENDPOINT_API_KEY")
        )
        endpoint_url = get_from_dict_or_env(
        values["endpoint_url"] = get_from_dict_or_env(
            values, "endpoint_url", "AZUREML_ENDPOINT_URL"
        )
        deployment_name = get_from_dict_or_env(
        values["deployment_name"] = get_from_dict_or_env(
            values, "deployment_name", "AZUREML_DEPLOYMENT_NAME", ""
        )
        http_client = AzureMLEndpointClient(endpoint_url, endpoint_key, deployment_name)
        values["endpoint_api_type"] = get_from_dict_or_env(
            values,
            "endpoint_api_type",
            "AZUREML_ENDPOINT_API_TYPE",
            AzureMLEndpointApiType.realtime,
        )

        return values

    @validator("content_formatter")
    def validate_content_formatter(
        cls, field_value: Any, values: Dict
    ) -> ContentFormatterBase:
        """Validate that content formatter is supported by endpoint type."""
        endpoint_api_type = values.get("endpoint_api_type")
        if endpoint_api_type not in field_value.supported_api_types:
            raise ValueError(
                f"Content formatter {type(field_value)} is not supported by this "
                f"endpoint. Supported types are {field_value.supported_api_types} "
                f"but endpoint is {endpoint_api_type}."
            )
        return field_value

    @validator("endpoint_url")
    def validate_endpoint_url(cls, field_value: Any) -> str:
        """Validate that endpoint url is complete."""
        if field_value.endswith("/"):
            field_value = field_value[:-1]
        if field_value.endswith("inference.ml.azure.com"):
            raise ValueError(
                "`endpoint_url` should contain the full invocation URL including "
                "`/score` for `endpoint_api_type='realtime'` or `/v1/completions` "
                "or `/v1/chat/completions` for `endpoint_api_type='serverless'`"
            )
        return field_value

    @validator("endpoint_api_type")
    def validate_endpoint_api_type(
        cls, field_value: Any, values: Dict
    ) -> AzureMLEndpointApiType:
        """Validate that endpoint api type is compatible with the URL format."""
        endpoint_url = values.get("endpoint_url")
        if field_value == AzureMLEndpointApiType.realtime and not endpoint_url.endswith(
            "/score"
        ):
            raise ValueError(
                "Endpoints of type `realtime` should follow the format "
                "`https://<your-endpoint>.<your_region>.inference.ml.azure.com/score`."
                " If your endpoint URL ends with `/v1/completions` or "
                "`/v1/chat/completions`, use `endpoint_api_type='serverless'` instead."
            )
        if field_value == AzureMLEndpointApiType.serverless and not (
            endpoint_url.endswith("/v1/completions")
            or endpoint_url.endswith("/v1/chat/completions")
        ):
            raise ValueError(
                "Endpoints of type `serverless` should follow the format "
                "`https://<your-endpoint>.<your_region>.inference.ml.azure.com/v1/completions`"
                " or `https://<your-endpoint>.<your_region>.inference.ml.azure.com/v1/chat/completions`"
            )

        return field_value

    @validator("http_client", always=True)
    def validate_client(cls, field_value: Any, values: Dict) -> AzureMLEndpointClient:
        """Validate that api key and python package exists in environment."""
        endpoint_url = values.get("endpoint_url")
        endpoint_key = values.get("endpoint_api_key")
        deployment_name = values.get("deployment_name")

        http_client = AzureMLEndpointClient(
            endpoint_url, endpoint_key.get_secret_value(), deployment_name
        )
        return http_client
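The URL and API-type validators above encode one rule: a real-time endpoint must end in `/score`, while a serverless one must end in `/v1/completions` or `/v1/chat/completions`. A compact sketch of that compatibility check as a plain function (hypothetical name, not the library's API):

```python
def check_endpoint_url(url: str, api_type: str) -> str:
    """Replicate the endpoint URL / API-type compatibility rules above."""
    url = url.rstrip("/")
    if url.endswith("inference.ml.azure.com"):
        raise ValueError("`endpoint_url` must include the full invocation path")
    if api_type == "realtime" and not url.endswith("/score"):
        raise ValueError("realtime endpoints must end with `/score`")
    if api_type == "serverless" and not (
        url.endswith("/v1/completions") or url.endswith("/v1/chat/completions")
    ):
        raise ValueError(
            "serverless endpoints must end with `/v1/completions` "
            "or `/v1/chat/completions`"
        )
    return url


print(check_endpoint_url("https://e.eastus.inference.ml.azure.com/score", "realtime"))
```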
|
||||
|
||||
|
||||
class AzureMLOnlineEndpoint(BaseLLM, AzureMLBaseEndpoint):
    """Azure ML Online Endpoint models.

    Example:
        .. code-block:: python

            azure_llm = AzureMLOnlineEndpoint(
                endpoint_url="https://<your-endpoint>.<your_region>.inference.ml.azure.com/score",
                endpoint_api_type=AzureMLApiType.realtime,
                endpoint_api_key="my-api-key",
                content_formatter=content_formatter,
            )
    """  # noqa: E501

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        """Get the identifying parameters."""
@@ -262,16 +458,17 @@ class AzureMLOnlineEndpoint(LLM, BaseModel):
        """Return type of llm."""
        return "azureml_endpoint"

    def _call(
    def _generate(
        self,
        prompt: str,
        prompts: List[str],
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        """Call out to an AzureML Managed Online endpoint.
    ) -> LLMResult:
        """Run the LLM on the given prompts.

        Args:
            prompt: The prompt to pass into the model.
            prompts: The prompts to pass into the model.
            stop: Optional list of stop words to use when generating.
        Returns:
            The string generated by the model.
@@ -280,12 +477,21 @@ class AzureMLOnlineEndpoint(LLM, BaseModel):
                response = azureml_model("Tell me a joke.")
        """
        _model_kwargs = self.model_kwargs or {}
        _model_kwargs.update(kwargs)
        if stop:
            _model_kwargs["stop"] = stop
        generations = []

        request_payload = self.content_formatter.format_request_payload(
            prompt, _model_kwargs
        )
        response_payload = self.http_client.call(request_payload, **kwargs)
        generated_text = self.content_formatter.format_response_payload(
            response_payload
        )
        return generated_text
        for prompt in prompts:
            request_payload = self.content_formatter.format_request_payload(
                prompt, _model_kwargs, self.endpoint_api_type
            )
            response_payload = self.http_client.call(
                body=request_payload, run_manager=run_manager
            )
            generated_text = self.content_formatter.format_response_payload(
                response_payload, self.endpoint_api_type
            )
            generations.append([generated_text])

        return LLMResult(generations=generations)
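The `_generate` loop above produces one inner list of generations per prompt, which is the nesting `LLMResult` expects. A stdlib sketch of just that shape, with a stand-in `call` instead of the formatter/client round trip:

```python
from typing import Callable, List


def generate(prompts: List[str], call: Callable[[str], str]) -> List[List[str]]:
    """Mirror the _generate loop: one inner list of generations per prompt."""
    generations = []
    for prompt in prompts:
        generations.append([call(prompt)])
    return generations


print(generate(["a", "b"], call=str.upper))  # [['A'], ['B']]
```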
|
||||
|
||||
@@ -1,11 +1,25 @@
from __future__ import annotations

import asyncio
import json
import warnings
from abc import ABC
from typing import TYPE_CHECKING, Any, Dict, Iterator, List, Mapping, Optional
from typing import (
    TYPE_CHECKING,
    Any,
    AsyncGenerator,
    AsyncIterator,
    Dict,
    Iterator,
    List,
    Mapping,
    Optional,
)

from langchain_core.callbacks import CallbackManagerForLLMRun
from langchain_core.callbacks import (
    AsyncCallbackManagerForLLMRun,
    CallbackManagerForLLMRun,
)
from langchain_core.language_models.llms import LLM
from langchain_core.outputs import GenerationChunk
from langchain_core.pydantic_v1 import BaseModel, Extra, Field, root_validator
@@ -128,26 +142,56 @@ class LLMInputOutputAdapter:
        if not stream:
            return

        if provider not in cls.provider_to_output_key_map:
        output_key = cls.provider_to_output_key_map.get(provider, None)

        if not output_key:
            raise ValueError(
                f"Unknown streaming response output key for provider: {provider}"
            )

        for event in stream:
            chunk = event.get("chunk")
            if chunk:
                chunk_obj = json.loads(chunk.get("bytes").decode())
                if provider == "cohere" and (
                    chunk_obj["is_finished"]
                    or chunk_obj[cls.provider_to_output_key_map[provider]]
                    == "<EOS_TOKEN>"
                ):
                    return
            if not chunk:
                continue

                # chunk obj format varies with provider
                yield GenerationChunk(
                    text=chunk_obj[cls.provider_to_output_key_map[provider]]
                )
            chunk_obj = json.loads(chunk.get("bytes").decode())

            if provider == "cohere" and (
                chunk_obj["is_finished"] or chunk_obj[output_key] == "<EOS_TOKEN>"
            ):
                return

            yield GenerationChunk(text=chunk_obj[output_key])

    @classmethod
    async def aprepare_output_stream(
        cls, provider: str, response: Any, stop: Optional[List[str]] = None
    ) -> AsyncIterator[GenerationChunk]:
        stream = response.get("body")

        if not stream:
            return

        output_key = cls.provider_to_output_key_map.get(provider, None)

        if not output_key:
            raise ValueError(
                f"Unknown streaming response output key for provider: {provider}"
            )

        for event in stream:
            chunk = event.get("chunk")
            if not chunk:
                continue

            chunk_obj = json.loads(chunk.get("bytes").decode())

            if provider == "cohere" and (
                chunk_obj["is_finished"] or chunk_obj[output_key] == "<EOS_TOKEN>"
            ):
                return

            yield GenerationChunk(text=chunk_obj[output_key])
||||
|
||||
|
||||
class BedrockBase(BaseModel, ABC):
@@ -272,10 +316,12 @@ class BedrockBase(BaseModel, ABC):

        try:
            response = self.client.invoke_model(
                body=body, modelId=self.model_id, accept=accept, contentType=contentType
                body=body,
                modelId=self.model_id,
                accept=accept,
                contentType=contentType,
            )
            text = LLMInputOutputAdapter.prepare_output(provider, response)

        except Exception as e:
            raise ValueError(f"Error raised by bedrock service: {e}").with_traceback(
                e.__traceback__
@@ -330,6 +376,51 @@ class BedrockBase(BaseModel, ABC):
            if run_manager is not None:
                run_manager.on_llm_new_token(chunk.text, chunk=chunk)

    async def _aprepare_input_and_invoke_stream(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> AsyncIterator[GenerationChunk]:
        _model_kwargs = self.model_kwargs or {}
        provider = self._get_provider()

        if stop:
            if provider not in self.provider_stop_sequence_key_name_map:
                raise ValueError(
                    f"Stop sequence key name for {provider} is not supported."
                )
            _model_kwargs[self.provider_stop_sequence_key_name_map.get(provider)] = stop

        if provider == "cohere":
            _model_kwargs["stream"] = True

        params = {**_model_kwargs, **kwargs}
        input_body = LLMInputOutputAdapter.prepare_input(provider, prompt, params)
        body = json.dumps(input_body)

        response = await asyncio.get_running_loop().run_in_executor(
            None,
            lambda: self.client.invoke_model_with_response_stream(
                body=body,
                modelId=self.model_id,
                accept="application/json",
                contentType="application/json",
            ),
        )

        async for chunk in LLMInputOutputAdapter.aprepare_output_stream(
            provider, response, stop
        ):
            yield chunk
            if run_manager is not None and asyncio.iscoroutinefunction(
                run_manager.on_llm_new_token
            ):
                await run_manager.on_llm_new_token(chunk.text, chunk=chunk)
            elif run_manager is not None:
                run_manager.on_llm_new_token(chunk.text, chunk=chunk)


class Bedrock(LLM, BedrockBase):
    """Bedrock models.
@@ -447,6 +538,65 @@ class Bedrock(LLM, BedrockBase):

        return self._prepare_input_and_invoke(prompt=prompt, stop=stop, **kwargs)

    async def _astream(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> AsyncGenerator[GenerationChunk, None]:
        """Call out to Bedrock service with streaming.

        Args:
            prompt (str): The prompt to pass into the model
            stop (Optional[List[str]], optional): Stop sequences. These will
                override any stop sequences in the `model_kwargs` attribute.
                Defaults to None.
            run_manager (Optional[CallbackManagerForLLMRun], optional): Callback
                run managers used to process the output. Defaults to None.

        Yields:
            AsyncGenerator[GenerationChunk, None]: Generator that asynchronously yields
            the streamed responses.
        """
        async for chunk in self._aprepare_input_and_invoke_stream(
            prompt=prompt, stop=stop, run_manager=run_manager, **kwargs
        ):
            yield chunk

    async def _acall(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        """Call out to Bedrock service model.

        Args:
            prompt: The prompt to pass into the model.
            stop: Optional list of stop words to use when generating.

        Returns:
            The string generated by the model.

        Example:
            .. code-block:: python

                response = await llm._acall("Tell me a joke.")
        """

        if not self.streaming:
            raise ValueError("Streaming must be set to True for async operations.")

        chunks = [
            chunk.text
            async for chunk in self._astream(
                prompt=prompt, stop=stop, run_manager=run_manager, **kwargs
            )
        ]
        return "".join(chunks)

    def get_num_tokens(self, text: str) -> int:
        if self._model_is_anthropic:
            return get_num_tokens_anthropic(text)
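`_acall` above builds the final string by collecting every streamed chunk with an async list comprehension and joining them. The same pattern in miniature, with a canned async generator standing in for `_astream`:

```python
import asyncio
from typing import AsyncIterator


async def fake_stream() -> AsyncIterator[str]:
    # Stand-in for _astream: yields pre-canned text chunks
    for piece in ["Tell", " me", " a", " joke."]:
        yield piece


async def acall() -> str:
    # Same pattern as _acall: gather streamed chunks, join into one string
    chunks = [chunk async for chunk in fake_stream()]
    return "".join(chunks)


print(asyncio.run(acall()))  # Tell me a joke.
```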
|
||||
|
||||
@@ -111,6 +111,7 @@ class GPT4All(LLM):
            "n_batch",
            "repeat_penalty",
            "repeat_last_n",
            "streaming",
        }

    def _default_params(self) -> Dict[str, Any]:
@@ -123,6 +124,7 @@ class GPT4All(LLM):
            "n_batch": self.n_batch,
            "repeat_penalty": self.repeat_penalty,
            "repeat_last_n": self.repeat_last_n,
            "streaming": self.streaming,
        }

    @root_validator()
|
||||
|
||||
@@ -8,7 +8,12 @@ from langchain_core.utils import get_from_dict_or_env

from langchain_community.llms.utils import enforce_stop_tokens

VALID_TASKS = ("text2text-generation", "text-generation", "summarization")
VALID_TASKS = (
    "text2text-generation",
    "text-generation",
    "summarization",
    "conversational",
)


class HuggingFaceEndpoint(LLM):
@@ -144,6 +149,8 @@ class HuggingFaceEndpoint(LLM):
            text = generated_text[0]["generated_text"]
        elif self.task == "summarization":
            text = generated_text[0]["summary_text"]
        elif self.task == "conversational":
            text = generated_text["response"][1]
        else:
            raise ValueError(
                f"Got invalid task {self.task}, "
|
||||
|
||||
@@ -1,3 +1,4 @@
import json
from typing import Any, Dict, List, Mapping, Optional

from langchain_core.callbacks import CallbackManagerForLLMRun
@@ -7,8 +8,15 @@ from langchain_core.utils import get_from_dict_or_env

from langchain_community.llms.utils import enforce_stop_tokens

DEFAULT_REPO_ID = "gpt2"
VALID_TASKS = ("text2text-generation", "text-generation", "summarization")
# key: task
# value: key in the output dictionary
VALID_TASKS_DICT = {
    "translation": "translation_text",
    "summarization": "summary_text",
    "conversational": "generated_text",
    "text-generation": "generated_text",
    "text2text-generation": "generated_text",
}


class HuggingFaceHub(LLM):
@@ -18,7 +26,8 @@ class HuggingFaceHub(LLM):
    environment variable ``HUGGINGFACEHUB_API_TOKEN`` set with your API token, or pass
    it as a named parameter to the constructor.

    Only supports `text-generation`, `text2text-generation` and `summarization` for now.
    Supports `text-generation`, `text2text-generation`, `conversational`, `translation`,
    and `summarization`.

    Example:
        .. code-block:: python
@@ -28,11 +37,13 @@ class HuggingFaceHub(LLM):
    """

    client: Any  #: :meta private:
    repo_id: str = DEFAULT_REPO_ID
    """Model name to use."""
    repo_id: Optional[str] = None
    """Model name to use.
    If not provided, the default model for the chosen task will be used."""
    task: Optional[str] = None
    """Task to call the model with.
    Should be a task that returns `generated_text` or `summary_text`."""
    Should be a task that returns `generated_text`, `summary_text`,
    or `translation_text`."""
    model_kwargs: Optional[dict] = None
    """Keyword arguments to pass to the model."""

@@ -50,18 +61,27 @@ class HuggingFaceHub(LLM):
            values, "huggingfacehub_api_token", "HUGGINGFACEHUB_API_TOKEN"
        )
        try:
            from huggingface_hub.inference_api import InferenceApi
            from huggingface_hub import HfApi, InferenceClient

            repo_id = values["repo_id"]
            client = InferenceApi(
                repo_id=repo_id,
            client = InferenceClient(
                model=repo_id,
                token=huggingfacehub_api_token,
                task=values.get("task"),
            )
            if client.task not in VALID_TASKS:
            if not values["task"]:
                if not repo_id:
                    raise ValueError(
                        "Must specify either `repo_id` or `task`, or both."
                    )
                # Use the recommended task for the chosen model
                model_info = HfApi(token=huggingfacehub_api_token).model_info(
                    repo_id=repo_id
                )
                values["task"] = model_info.pipeline_tag
            if values["task"] not in VALID_TASKS_DICT:
                raise ValueError(
                    f"Got invalid task {client.task}, "
                    f"currently only {VALID_TASKS} are supported"
                    f"Got invalid task {values['task']}, "
                    f"currently only {VALID_TASKS_DICT.keys()} are supported"
                )
            values["client"] = client
        except ImportError:
@@ -108,23 +128,20 @@ class HuggingFaceHub(LLM):
        """
        _model_kwargs = self.model_kwargs or {}
        params = {**_model_kwargs, **kwargs}
        response = self.client(inputs=prompt, params=params)

        response = self.client.post(
            json={"inputs": prompt, "params": params}, task=self.task
        )
        response = json.loads(response.decode())
        if "error" in response:
            raise ValueError(f"Error raised by inference API: {response['error']}")
        if self.client.task == "text-generation":
            # Text generation output sometimes includes the starter text.
            text = response[0]["generated_text"]
            if text.startswith(prompt):
                text = response[0]["generated_text"][len(prompt) :]
        elif self.client.task == "text2text-generation":
            text = response[0]["generated_text"]
        elif self.client.task == "summarization":
            text = response[0]["summary_text"]

        response_key = VALID_TASKS_DICT[self.task]  # type: ignore
        if isinstance(response, list):
            text = response[0][response_key]
        else:
            raise ValueError(
                f"Got invalid task {self.client.task}, "
                f"currently only {VALID_TASKS} are supported"
            )
            text = response[response_key]

        if stop is not None:
            # This is a bit hacky, but I can't figure out a better way to enforce
            # stop tokens when making calls to huggingface_hub.
|
||||
libs/community/langchain_community/llms/konko.py (new file, 200 lines)
@@ -0,0 +1,200 @@
|
||||
"""Wrapper around Konko AI's Completion API."""
import logging
import warnings
from typing import Any, Dict, List, Optional

from langchain_core.callbacks import (
    AsyncCallbackManagerForLLMRun,
    CallbackManagerForLLMRun,
)
from langchain_core.language_models.llms import LLM
from langchain_core.pydantic_v1 import Extra, SecretStr, root_validator

from langchain_community.utils.openai import is_openai_v1

logger = logging.getLogger(__name__)


class Konko(LLM):
    """Wrapper around Konko AI models.

    To use, you'll need an API key. This can be passed in as init param
    ``konko_api_key`` or set as environment variable ``KONKO_API_KEY``.

    Konko AI API reference: https://docs.konko.ai/reference/
    """

    base_url: str = "https://api.konko.ai/v1/completions"
    """Base inference API URL."""
    konko_api_key: SecretStr
    """Konko AI API key."""
    model: str
    """Model name. Available models listed here:
        https://docs.konko.ai/reference/get_models
    """
    temperature: Optional[float] = None
    """Model temperature."""
    top_p: Optional[float] = None
    """Used to dynamically adjust the number of choices for each predicted token based
        on the cumulative probabilities. A value of 1 will always yield the same
        output. A temperature less than 1 favors more correctness and is appropriate
        for question answering or summarization. A value greater than 1 introduces more
        randomness in the output.
    """
    top_k: Optional[int] = None
    """Used to limit the number of choices for the next predicted word or token. It
        specifies the maximum number of tokens to consider at each step, based on their
        probability of occurrence. This technique helps to speed up the generation
        process and can improve the quality of the generated text by focusing on the
        most likely options.
    """
    max_tokens: Optional[int] = None
    """The maximum number of tokens to generate."""
    repetition_penalty: Optional[float] = None
    """A number that controls the diversity of generated text by reducing the
        likelihood of repeated sequences. Higher values decrease repetition.
    """
    logprobs: Optional[int] = None
    """An integer that specifies how many top token log probabilities are included in
        the response for each token generation step.
    """

    class Config:
        """Configuration for this pydantic object."""

        extra = Extra.forbid

    @root_validator(pre=True)
    def validate_environment(cls, values: Dict[str, Any]) -> Dict[str, Any]:
        """Validate that python package exists in environment."""
        try:
            import konko

        except ImportError:
            raise ValueError(
                "Could not import konko python package. "
                "Please install it with `pip install konko`."
            )
        if not hasattr(konko, "_is_legacy_openai"):
            warnings.warn(
                "You are using an older version of the 'konko' package. "
                "Please consider upgrading to access new features, "
                "including the completion endpoint."
            )
        return values

    def construct_payload(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        **kwargs: Any,
    ) -> Dict[str, Any]:
        stop_to_use = stop[0] if stop and len(stop) == 1 else stop
        payload: Dict[str, Any] = {
            **self.default_params,
            "prompt": prompt,
            "stop": stop_to_use,
            **kwargs,
        }
        return {k: v for k, v in payload.items() if v is not None}

    @property
    def _llm_type(self) -> str:
        """Return type of model."""
        return "konko"

    @staticmethod
    def get_user_agent() -> str:
        from langchain_community import __version__

        return f"langchain/{__version__}"

    @property
    def default_params(self) -> Dict[str, Any]:
        return {
            "model": self.model,
            "temperature": self.temperature,
            "top_p": self.top_p,
            "top_k": self.top_k,
            "max_tokens": self.max_tokens,
            "repetition_penalty": self.repetition_penalty,
        }

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        """Call out to Konko's text generation endpoint.

        Args:
            prompt: The prompt to pass into the model.

        Returns:
            The string generated by the model.
        """
        import konko

        payload = self.construct_payload(prompt, stop, **kwargs)

        try:
            if is_openai_v1():
                response = konko.completions.create(**payload)
            else:
                response = konko.Completion.create(**payload)

        except AttributeError:
            raise ValueError(
                "`konko` has no `Completion` attribute, this is likely "
                "due to an old version of the konko package. Try upgrading it "
                "with `pip install --upgrade konko`."
            )

        if is_openai_v1():
            output = response.choices[0].text
        else:
            output = response["choices"][0]["text"]

        return output

    async def _acall(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        """Asynchronously call out to Konko's text generation endpoint.

        Args:
            prompt: The prompt to pass into the model.

        Returns:
            The string generated by the model.
        """
        import konko

        payload = self.construct_payload(prompt, stop, **kwargs)

        try:
            if is_openai_v1():
                client = konko.AsyncKonko()
                response = await client.completions.create(**payload)
            else:
                response = await konko.Completion.acreate(**payload)

        except AttributeError:
            raise ValueError(
                "`konko` has no `Completion` attribute, this is likely "
                "due to an old version of the konko package. Try upgrading it "
                "with `pip install --upgrade konko`."
            )

        if is_openai_v1():
            output = response.choices[0].text
        else:
            output = response["choices"][0]["text"]

        return output
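`construct_payload` above does two things worth noting: a single-item stop list collapses to a bare string, and any `None`-valued parameter is dropped before the request is sent, so unset defaults never reach the API. A standalone sketch of that logic, with a hypothetical `defaults` dict standing in for `self.default_params`:

```python
from typing import Any, Dict, List, Optional


def construct_payload(
    prompt: str, stop: Optional[List[str]] = None, **kwargs: Any
) -> Dict[str, Any]:
    """Mirror Konko.construct_payload: collapse single stop, drop None values."""
    defaults = {"model": "example-model", "temperature": None, "max_tokens": None}
    stop_to_use = stop[0] if stop and len(stop) == 1 else stop
    payload = {**defaults, "prompt": prompt, "stop": stop_to_use, **kwargs}
    return {k: v for k, v in payload.items() if v is not None}


print(construct_payload("Hi", stop=["###"], max_tokens=16))
```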
|
||||
@@ -9,8 +9,8 @@ from langchain_core.callbacks import (
 )
 from langchain_core.language_models.llms import LLM
 from langchain_core.load.serializable import Serializable
-from langchain_core.pydantic_v1 import root_validator
-from langchain_core.utils import get_from_dict_or_env
+from langchain_core.pydantic_v1 import SecretStr, root_validator
+from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env
 from tenacity import (
     before_sleep_log,
     retry,
@@ -25,10 +25,10 @@ logger = logging.getLogger(__name__)


 class _BaseYandexGPT(Serializable):
-    iam_token: str = ""
+    iam_token: SecretStr = ""
     """Yandex Cloud IAM token for service or user account
     with the `ai.languageModels.user` role"""
-    api_key: str = ""
+    api_key: SecretStr = ""
     """Yandex Cloud Api Key for service account
     with the `ai.languageModels.user` role"""
     folder_id: str = ""
@@ -72,24 +72,28 @@ class _BaseYandexGPT(Serializable):
     def validate_environment(cls, values: Dict) -> Dict:
         """Validate that iam token exists in environment."""

-        iam_token = get_from_dict_or_env(values, "iam_token", "YC_IAM_TOKEN", "")
+        iam_token = convert_to_secret_str(
+            get_from_dict_or_env(values, "iam_token", "YC_IAM_TOKEN", "")
+        )
         values["iam_token"] = iam_token
-        api_key = get_from_dict_or_env(values, "api_key", "YC_API_KEY", "")
+        api_key = convert_to_secret_str(
+            get_from_dict_or_env(values, "api_key", "YC_API_KEY", "")
+        )
         values["api_key"] = api_key
         folder_id = get_from_dict_or_env(values, "folder_id", "YC_FOLDER_ID", "")
         values["folder_id"] = folder_id
-        if api_key == "" and iam_token == "":
+        if api_key.get_secret_value() == "" and iam_token.get_secret_value() == "":
             raise ValueError("Either 'YC_API_KEY' or 'YC_IAM_TOKEN' must be provided.")

         if values["iam_token"]:
             values["_grpc_metadata"] = [
-                ("authorization", f"Bearer {values['iam_token']}")
+                ("authorization", f"Bearer {values['iam_token'].get_secret_value()}")
             ]
             if values["folder_id"]:
                 values["_grpc_metadata"].append(("x-folder-id", values["folder_id"]))
         else:
             values["_grpc_metadata"] = (
-                ("authorization", f"Api-Key {values['api_key']}"),
+                ("authorization", f"Api-Key {values['api_key'].get_secret_value()}"),
             )
         if values["model_uri"] == "" and values["folder_id"] == "":
             raise ValueError("Either 'model_uri' or 'folder_id' must be provided.")
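The point of the `SecretStr` migration above is that the raw token is masked in `repr()` and logs, but stays recoverable via `get_secret_value()` exactly where the gRPC metadata is built. A minimal stand-in class (not pydantic's implementation) illustrating the contract:

```python
class SecretStr:
    """Minimal stand-in for pydantic's SecretStr, for illustration only."""

    def __init__(self, value: str) -> None:
        self._value = value

    def get_secret_value(self) -> str:
        # The raw secret is only exposed through this explicit call.
        return self._value

    def __repr__(self) -> str:
        # Masked form is what ends up in logs, tracebacks, and dumps.
        return "SecretStr('**********')"


token = SecretStr("t1.super-secret-iam-token")
assert "super-secret" not in repr(token)  # masked when printed
# Real value only where the auth header is assembled:
metadata = [("authorization", f"Bearer {token.get_secret_value()}")]
assert metadata[0][1] == "Bearer t1.super-secret-iam-token"
```

This is why the validator also has to compare `get_secret_value() == ""` rather than the wrapper object itself.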
@@ -11,6 +11,10 @@ from langchain_community.storage.astradb import (
     AstraDBStore,
 )
 from langchain_community.storage.redis import RedisStore
+from langchain_community.storage.sql import (
+    SQLDocStore,
+    SQLStrStore,
+)
 from langchain_community.storage.upstash_redis import (
     UpstashRedisByteStore,
     UpstashRedisStore,
@@ -22,4 +26,6 @@ __all__ = [
     "RedisStore",
     "UpstashRedisByteStore",
     "UpstashRedisStore",
+    "SQLDocStore",
+    "SQLStrStore",
 ]
libs/community/langchain_community/storage/sql.py (new file, 345 lines)
@@ -0,0 +1,345 @@
"""SQL storage that persists data in a SQL database
and supports data isolation using collections."""
from __future__ import annotations

import uuid
from typing import Any, Generic, Iterator, List, Optional, Sequence, Tuple, TypeVar

import sqlalchemy
from sqlalchemy import JSON, UUID
from sqlalchemy.orm import Session, relationship

try:
    from sqlalchemy.orm import declarative_base
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base

from langchain_core.documents import Document
from langchain_core.load import Serializable, dumps, loads
from langchain_core.stores import BaseStore

V = TypeVar("V")

ITERATOR_WINDOW_SIZE = 1000

Base = declarative_base()  # type: Any


_LANGCHAIN_DEFAULT_COLLECTION_NAME = "langchain"


class BaseModel(Base):
    """Base model for the SQL stores."""

    __abstract__ = True
    uuid = sqlalchemy.Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)


_classes: Any = None


def _get_storage_stores() -> Any:
    global _classes
    if _classes is not None:
        return _classes

    class CollectionStore(BaseModel):
        """Collection store."""

        __tablename__ = "langchain_storage_collection"

        name = sqlalchemy.Column(sqlalchemy.String)
        cmetadata = sqlalchemy.Column(JSON)

        items = relationship(
            "ItemStore",
            back_populates="collection",
            passive_deletes=True,
        )

        @classmethod
        def get_by_name(
            cls, session: Session, name: str
        ) -> Optional["CollectionStore"]:  # type: ignore
            return session.query(cls).filter(cls.name == name).first()

        @classmethod
        def get_or_create(
            cls,
            session: Session,
            name: str,
            cmetadata: Optional[dict] = None,
        ) -> Tuple["CollectionStore", bool]:
            """
            Get or create a collection.
            Returns [Collection, bool] where the bool is True if the collection was created.
            """  # noqa: E501
            created = False
            collection = cls.get_by_name(session, name)
            if collection:
                return collection, created

            collection = cls(name=name, cmetadata=cmetadata)
            session.add(collection)
            session.commit()
            created = True
            return collection, created

    class ItemStore(BaseModel):
        """Item store."""

        __tablename__ = "langchain_storage_items"

        collection_id = sqlalchemy.Column(
            UUID(as_uuid=True),
            sqlalchemy.ForeignKey(
                f"{CollectionStore.__tablename__}.uuid",
                ondelete="CASCADE",
            ),
        )
        collection = relationship(CollectionStore, back_populates="items")

        content = sqlalchemy.Column(sqlalchemy.String, nullable=True)

        # custom_id: any user-defined id
        custom_id = sqlalchemy.Column(sqlalchemy.String, nullable=True)

    _classes = (ItemStore, CollectionStore)

    return _classes


class SQLBaseStore(BaseStore[str, V], Generic[V]):
    """SQL storage

    Args:
        connection_string: SQL connection string that will be passed to SQLAlchemy.
        collection_name: The name of the collection to use. (default: langchain)
            NOTE: Collections are useful to isolate your data in a given database.
            This is not the name of the table, but the name of the collection.
            The tables will be created when initializing the store (if they do not
            exist), so make sure the user has the right permissions to create tables.
        pre_delete_collection: If True, will delete the collection if it exists.
            (default: False). Useful for testing.
        engine_args: SQLAlchemy's create engine arguments.

    Example:
        .. code-block:: python

            from langchain_community.storage import SQLDocStore, SQLStrStore
            from langchain_community.embeddings.openai import OpenAIEmbeddings

            # example using an SQLDocStore to store Document objects for
            # a ParentDocumentRetriever
            CONNECTION_STRING = "postgresql+psycopg2://user:pass@localhost:5432/db"
            COLLECTION_NAME = "state_of_the_union_test"
            docstore = SQLDocStore(
                collection_name=COLLECTION_NAME,
                connection_string=CONNECTION_STRING,
            )
            child_splitter = RecursiveCharacterTextSplitter(chunk_size=400)
            vectorstore = ...

            retriever = ParentDocumentRetriever(
                vectorstore=vectorstore,
                docstore=docstore,
                child_splitter=child_splitter,
            )

            # example using an SQLStrStore to store strings
            # same example as in "InMemoryStore" but using SQL persistence
            store = SQLStrStore(
                collection_name=COLLECTION_NAME,
                connection_string=CONNECTION_STRING,
            )
            store.mset([("key1", "value1"), ("key2", "value2")])
            store.mget(["key1", "key2"])
            # ['value1', 'value2']
            store.mdelete(["key1"])
            list(store.yield_keys())
            # ['key2']
            list(store.yield_keys(prefix="k"))
            # ['key2']

            # delete the COLLECTION_NAME collection
            docstore.delete_collection()
    """

    def __init__(
        self,
        connection_string: str,
        collection_name: str = _LANGCHAIN_DEFAULT_COLLECTION_NAME,
        collection_metadata: Optional[dict] = None,
        pre_delete_collection: bool = False,
        connection: Optional[sqlalchemy.engine.Connection] = None,
        engine_args: Optional[dict[str, Any]] = None,
    ) -> None:
        self.connection_string = connection_string
        self.collection_name = collection_name
        self.collection_metadata = collection_metadata
        self.pre_delete_collection = pre_delete_collection
        self.engine_args = engine_args or {}
        # Create a connection if not provided, otherwise use the provided connection
        self._conn = connection if connection else self.__connect()
        self.__post_init__()

    def __post_init__(
        self,
    ) -> None:
        """Initialize the store."""
        ItemStore, CollectionStore = _get_storage_stores()
        self.CollectionStore = CollectionStore
        self.ItemStore = ItemStore
        self.__create_tables_if_not_exists()
        self.__create_collection()

    def __connect(self) -> sqlalchemy.engine.Connection:
        engine = sqlalchemy.create_engine(self.connection_string, **self.engine_args)
        conn = engine.connect()
        return conn

    def __create_tables_if_not_exists(self) -> None:
        with self._conn.begin():
            Base.metadata.create_all(self._conn)

    def __create_collection(self) -> None:
        if self.pre_delete_collection:
            self.delete_collection()
        with Session(self._conn) as session:
            self.CollectionStore.get_or_create(
                session, self.collection_name, cmetadata=self.collection_metadata
            )

    def delete_collection(self) -> None:
        with Session(self._conn) as session:
            collection = self.__get_collection(session)
            if not collection:
                return
            session.delete(collection)
            session.commit()

    def __get_collection(self, session: Session) -> Any:
        return self.CollectionStore.get_by_name(session, self.collection_name)

    def __del__(self) -> None:
        if self._conn:
            self._conn.close()

    def __serialize_value(self, obj: V) -> str:
        if isinstance(obj, Serializable):
            return dumps(obj)
        return obj

    def __deserialize_value(self, obj: V) -> str:
        try:
            return loads(obj)
        except Exception:
            return obj

    def mget(self, keys: Sequence[str]) -> List[Optional[V]]:
        """Get the values associated with the given keys.

        Args:
            keys (Sequence[str]): A sequence of keys.

        Returns:
            A sequence of optional values associated with the keys.
            If a key is not found, the corresponding value will be None.
        """
        with Session(self._conn) as session:
            collection = self.__get_collection(session)

            items = (
                session.query(self.ItemStore.content, self.ItemStore.custom_id)
                .where(
                    sqlalchemy.and_(
                        self.ItemStore.custom_id.in_(keys),
                        self.ItemStore.collection_id == (collection.uuid),
                    )
                )
                .all()
            )

            ordered_values = {key: None for key in keys}
            for item in items:
                v = item[0]
                val = self.__deserialize_value(v) if v is not None else v
                k = item[1]
                ordered_values[k] = val

            return [ordered_values[key] for key in keys]

    def mset(self, key_value_pairs: Sequence[Tuple[str, V]]) -> None:
        """Set the values for the given keys.

        Args:
            key_value_pairs (Sequence[Tuple[str, V]]): A sequence of key-value pairs.

        Returns:
            None
        """
        with Session(self._conn) as session:
            collection = self.__get_collection(session)
            if not collection:
                raise ValueError("Collection not found")
            for id, item in key_value_pairs:
                content = self.__serialize_value(item)
                item_store = self.ItemStore(
                    content=content,
                    custom_id=id,
                    collection_id=collection.uuid,
                )
                session.add(item_store)
            session.commit()

    def mdelete(self, keys: Sequence[str]) -> None:
        """Delete the given keys and their associated values.

        Args:
            keys (Sequence[str]): A sequence of keys to delete.
        """
        with Session(self._conn) as session:
            collection = self.__get_collection(session)
            if not collection:
                raise ValueError("Collection not found")
            if keys is not None:
                stmt = sqlalchemy.delete(self.ItemStore).where(
                    sqlalchemy.and_(
                        self.ItemStore.custom_id.in_(keys),
                        self.ItemStore.collection_id == (collection.uuid),
                    )
                )
                session.execute(stmt)
            session.commit()

    def yield_keys(self, prefix: Optional[str] = None) -> Iterator[str]:
        """Get an iterator over keys that match the given prefix.

        Args:
            prefix (str, optional): The prefix to match. Defaults to None.

        Returns:
            Iterator[str]: An iterator over keys that match the given prefix.
        """
        with Session(self._conn) as session:
            collection = self.__get_collection(session)
            start = 0
            while True:
                stop = start + ITERATOR_WINDOW_SIZE
                query = session.query(self.ItemStore.custom_id).where(
                    self.ItemStore.collection_id == (collection.uuid)
                )
                if prefix is not None:
                    query = query.filter(self.ItemStore.custom_id.startswith(prefix))
                items = query.slice(start, stop).all()

                if len(items) == 0:
                    break
                for item in items:
                    yield item[0]
                start += ITERATOR_WINDOW_SIZE


SQLDocStore = SQLBaseStore[Document]
SQLStrStore = SQLBaseStore[str]
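`yield_keys` above pages through matching ids in fixed windows of `ITERATOR_WINDOW_SIZE` so that very large collections are never materialized at once. A pure-Python sketch of the same windowed-iteration pattern over an in-memory list (window size shrunk for illustration; no database involved):

```python
from typing import Iterator, List, Optional

WINDOW = 2  # stands in for ITERATOR_WINDOW_SIZE


def yield_keys(all_ids: List[str], prefix: Optional[str] = None) -> Iterator[str]:
    """Yield ids matching `prefix`, fetched one fixed-size window at a time."""
    matching = [i for i in all_ids if prefix is None or i.startswith(prefix)]
    start = 0
    while True:
        # Analogous to query.slice(start, start + WINDOW).all()
        window = matching[start:start + WINDOW]
        if not window:
            break  # empty window means we are past the last row
        yield from window
        start += WINDOW


assert list(yield_keys(["k1", "k2", "x3", "k4"], prefix="k")) == ["k1", "k2", "k4"]
```

The empty-window check doubles as the loop's termination condition, which is why the SQL version never needs a separate COUNT query.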
@@ -31,7 +31,7 @@ class BingSearchRun(BaseTool):
 class BingSearchResults(BaseTool):
     """Tool that queries the Bing Search API and gets back json."""

-    name: str = "Bing Search Results JSON"
+    name: str = "bing_search_results_json"
     description: str = (
         "A wrapper around Bing Search. "
         "Useful for when you need to answer questions about current events. "
@@ -210,6 +210,12 @@ def _import_hologres() -> Any:
     return Hologres


+def _import_kdbai() -> Any:
+    from langchain_community.vectorstores.kdbai import KDBAI
+
+    return KDBAI
+
+
 def _import_lancedb() -> Any:
     from langchain_community.vectorstores.lancedb import LanceDB

@@ -523,6 +529,8 @@ def __getattr__(name: str) -> Any:
         return _import_faiss()
     elif name == "Hologres":
         return _import_hologres()
+    elif name == "KDBAI":
+        return _import_kdbai()
     elif name == "LanceDB":
         return _import_lancedb()
     elif name == "LLMRails":
@@ -638,6 +646,7 @@ __all__ = [
     "Epsilla",
     "FAISS",
     "Hologres",
+    "KDBAI",
    "LanceDB",
     "LLMRails",
     "Marqo",
@@ -214,6 +214,8 @@ class ApproxRetrievalStrategy(BaseRetrievalStrategy):
             similarityAlgo = "l2_norm"
         elif similarity is DistanceStrategy.DOT_PRODUCT:
             similarityAlgo = "dot_product"
+        elif similarity is DistanceStrategy.MAX_INNER_PRODUCT:
+            similarityAlgo = "max_inner_product"
         else:
             raise ValueError(f"Similarity {similarity} not supported.")

@@ -388,7 +390,6 @@ class ElasticsearchStore(VectorStore):
             from langchain_community.vectorstores import ElasticsearchStore
             from langchain_community.embeddings.openai import OpenAIEmbeddings

-            embeddings = OpenAIEmbeddings()
             vectorstore = ElasticsearchStore(
                 embedding=OpenAIEmbeddings(),
                 index_name="langchain-demo",
@@ -413,7 +414,7 @@ class ElasticsearchStore(VectorStore):
         distance_strategy: Optional. Distance strategy to use when
             searching the index.
             Defaults to COSINE. Can be one of COSINE,
-            EUCLIDEAN_DISTANCE, or DOT_PRODUCT.
+            EUCLIDEAN_DISTANCE, MAX_INNER_PRODUCT or DOT_PRODUCT.

     If you want to use a cloud hosted Elasticsearch instance, you can pass in the
     cloud_id argument instead of the es_url argument.
@@ -509,6 +510,7 @@ class ElasticsearchStore(VectorStore):
                 DistanceStrategy.COSINE,
                 DistanceStrategy.DOT_PRODUCT,
                 DistanceStrategy.EUCLIDEAN_DISTANCE,
+                DistanceStrategy.MAX_INNER_PRODUCT,
             ]
         ] = None,
         strategy: BaseRetrievalStrategy = ApproxRetrievalStrategy(),
@@ -693,6 +695,25 @@ class ElasticsearchStore(VectorStore):

         return selected_docs

+    @staticmethod
+    def _identity_fn(score: float) -> float:
+        return score
+
+    def _select_relevance_score_fn(self) -> Callable[[float], float]:
+        """
+        The 'correct' relevance function
+        may differ depending on a few things, including:
+        - the distance / similarity metric used by the VectorStore
+        - the scale of your embeddings (OpenAI's are unit normed. Many others are not!)
+        - embedding dimensionality
+        - etc.
+
+        Vectorstores should define their own selection-based method of relevance.
+        """
+        # All scores from Elasticsearch are already normalized similarities:
+        # https://www.elastic.co/guide/en/elasticsearch/reference/current/dense-vector.html#dense-vector-params
+        return self._identity_fn
+
     def similarity_search_with_score(
         self, query: str, k: int = 4, filter: Optional[List[dict]] = None, **kwargs: Any
     ) -> List[Tuple[Document, float]]:
@@ -706,6 +727,9 @@ class ElasticsearchStore(VectorStore):
         Returns:
             List of Documents most similar to the query and score for each
         """
+        if isinstance(self.strategy, ApproxRetrievalStrategy) and self.strategy.hybrid:
+            raise ValueError("scores are currently not supported in hybrid mode")
+
         return self._search(query=query, k=k, filter=filter, **kwargs)

     def similarity_search_by_vector_with_relevance_scores(
@@ -725,6 +749,9 @@ class ElasticsearchStore(VectorStore):
         Returns:
             List of Documents most similar to the embedding and score for each
         """
+        if isinstance(self.strategy, ApproxRetrievalStrategy) and self.strategy.hybrid:
+            raise ValueError("scores are currently not supported in hybrid mode")
+
         return self._search(query_vector=embedding, k=k, filter=filter, **kwargs)

     def _search(
@@ -1104,7 +1131,8 @@ class ElasticsearchStore(VectorStore):
             distance_strategy: Optional. Name of the distance
                 strategy to use. Defaults to "COSINE".
                 can be one of "COSINE",
-                "EUCLIDEAN_DISTANCE", "DOT_PRODUCT".
+                "EUCLIDEAN_DISTANCE", "DOT_PRODUCT",
+                "MAX_INNER_PRODUCT".
             bulk_kwargs: Optional. Additional arguments to pass to
                 Elasticsearch bulk.
         """
libs/community/langchain_community/vectorstores/kdbai.py (new file, 267 lines)
@@ -0,0 +1,267 @@
from __future__ import annotations

import logging
import uuid
from typing import Any, Iterable, List, Optional, Tuple

from langchain_core.documents import Document
from langchain_core.embeddings import Embeddings
from langchain_core.vectorstores import VectorStore

from langchain_community.vectorstores.utils import DistanceStrategy

logger = logging.getLogger(__name__)


class KDBAI(VectorStore):
    """`KDB.AI` vector store [https://kdb.ai](https://kdb.ai)

    To use, you should have the `kdbai_client` python package installed.

    Args:
        table: kdbai_client.Table object to use as storage,
        embedding: Any embedding function implementing
            `langchain.embeddings.base.Embeddings` interface,
        distance_strategy: One option from DistanceStrategy.EUCLIDEAN_DISTANCE,
            DistanceStrategy.DOT_PRODUCT or DistanceStrategy.COSINE.

    See the example [notebook](https://github.com/KxSystems/langchain/blob/KDB.AI/docs/docs/integrations/vectorstores/kdbai.ipynb).
    """

    def __init__(
        self,
        table: Any,
        embedding: Embeddings,
        distance_strategy: Optional[
            DistanceStrategy
        ] = DistanceStrategy.EUCLIDEAN_DISTANCE,
    ):
        try:
            import kdbai_client  # noqa
        except ImportError:
            raise ImportError(
                "Could not import kdbai_client python package. "
                "Please install it with `pip install kdbai_client`."
            )
        self._table = table
        self._embedding = embedding
        self.distance_strategy = distance_strategy

    @property
    def embeddings(self) -> Optional[Embeddings]:
        if isinstance(self._embedding, Embeddings):
            return self._embedding
        return None

    def _embed_documents(self, texts: Iterable[str]) -> List[List[float]]:
        if isinstance(self._embedding, Embeddings):
            return self._embedding.embed_documents(list(texts))
        return [self._embedding(t) for t in texts]

    def _embed_query(self, text: str) -> List[float]:
        if isinstance(self._embedding, Embeddings):
            return self._embedding.embed_query(text)
        return self._embedding(text)

    def _insert(
        self,
        texts: List[str],
        ids: Optional[List[str]],
        metadata: Optional[Any] = None,
    ) -> None:
        try:
            import numpy as np
        except ImportError:
            raise ImportError(
                "Could not import numpy python package. "
                "Please install it with `pip install numpy`."
            )

        try:
            import pandas as pd
        except ImportError:
            raise ImportError(
                "Could not import pandas python package. "
                "Please install it with `pip install pandas`."
            )

        embeds = self._embedding.embed_documents(texts)
        df = pd.DataFrame()
        df["id"] = ids
        df["text"] = [t.encode("utf-8") for t in texts]
        df["embeddings"] = [np.array(e, dtype="float32") for e in embeds]
        if metadata is not None:
            df = pd.concat([df, metadata], axis=1)
        self._table.insert(df, warn=False)

    def add_texts(
        self,
        texts: Iterable[str],
        metadatas: Optional[List[dict]] = None,
        ids: Optional[List[str]] = None,
        batch_size: int = 32,
        **kwargs: Any,
    ) -> List[str]:
        """Run more texts through the embeddings and add to the vectorstore.

        Args:
            texts (Iterable[str]): Texts to add to the vectorstore.
            metadatas (Optional[List[dict]]): List of metadata corresponding to each
                chunk of text.
            ids (Optional[List[str]]): List of IDs corresponding to each chunk of text.
            batch_size (Optional[int]): Size of batch of chunks of text to insert at
                once.

        Returns:
            List[str]: List of IDs of the added texts.
        """
        try:
            import pandas as pd
        except ImportError:
            raise ImportError(
                "Could not import pandas python package. "
                "Please install it with `pip install pandas`."
            )

        texts = list(texts)
        metadf: pd.DataFrame = None
        if metadatas is not None:
            if isinstance(metadatas, pd.DataFrame):
                metadf = metadatas
            else:
                metadf = pd.DataFrame(metadatas)
        out_ids: List[str] = []
        nbatches = (len(texts) - 1) // batch_size + 1
        for i in range(nbatches):
            istart = i * batch_size
            iend = (i + 1) * batch_size
            batch = texts[istart:iend]
            if ids:
                batch_ids = ids[istart:iend]
            else:
                batch_ids = [str(uuid.uuid4()) for _ in range(len(batch))]
            if metadf is not None:
                batch_meta = metadf.iloc[istart:iend].reset_index(drop=True)
            else:
                batch_meta = None
            self._insert(batch, batch_ids, batch_meta)
            out_ids = out_ids + batch_ids
        return out_ids

    def add_documents(
        self, documents: List[Document], batch_size: int = 32, **kwargs: Any
    ) -> List[str]:
        """Run more documents through the embeddings and add to the vectorstore.

        Args:
            documents (List[Document]): Documents to add to the vectorstore.
            batch_size (Optional[int]): Size of batch of documents to insert at once.

        Returns:
            List[str]: List of IDs of the added texts.
        """
        try:
            import pandas as pd
        except ImportError:
            raise ImportError(
                "Could not import pandas python package. "
                "Please install it with `pip install pandas`."
            )

        texts = [x.page_content for x in documents]
        metadatas = pd.DataFrame([x.metadata for x in documents])
        return self.add_texts(texts, metadatas=metadatas, batch_size=batch_size)

    def similarity_search_with_score(
        self,
        query: str,
        k: int = 1,
        filter: Optional[List] = [],
        **kwargs: Any,
    ) -> List[Tuple[Document, float]]:
        """Run similarity search with distance from a query string.

        Args:
            query (str): Query string.
            k (Optional[int]): number of neighbors to retrieve.
            filter (Optional[List]): KDB.AI metadata filter clause: https://code.kx.com/kdbai/use/filter.html

        Returns:
            List[Tuple[Document, float]]: List of (document, score) tuples.
        """
        return self.similarity_search_by_vector_with_score(
            self._embed_query(query), k=k, filter=filter, **kwargs
        )

    def similarity_search_by_vector_with_score(
        self,
        embedding: List[float],
        *,
        k: int = 1,
        filter: Optional[List] = [],
        **kwargs: Any,
    ) -> List[Tuple[Document, float]]:
        """Return KDB.AI documents most similar to embedding, along with scores.

        Args:
            embedding (List[float]): query vector.
            k (Optional[int]): number of neighbors to retrieve.
            filter (Optional[List]): KDB.AI metadata filter clause: https://code.kx.com/kdbai/use/filter.html

        Returns:
            List[Tuple[Document, float]]: List of (document, score) tuples.
        """
        if "n" in kwargs:
            k = kwargs.pop("n")
        matches = self._table.search(vectors=[embedding], n=k, filter=filter, **kwargs)[
            0
        ]
        docs = []
        for row in matches.to_dict(orient="records"):
            text = row.pop("text")
            score = row.pop("__nn_distance")
            docs.append(
                (
                    Document(
                        page_content=text,
                        metadata={k: v for k, v in row.items() if k != "text"},
                    ),
                    score,
                )
            )
        return docs

    def similarity_search(
        self,
        query: str,
        k: int = 1,
        filter: Optional[List] = [],
        **kwargs: Any,
    ) -> List[Document]:
        """Run similarity search from a query string.

        Args:
            query (str): Query string.
            k (Optional[int]): number of neighbors to retrieve.
            filter (Optional[List]): KDB.AI metadata filter clause: https://code.kx.com/kdbai/use/filter.html

        Returns:
            List[Document]: List of similar documents.
        """
        docs_and_scores = self.similarity_search_with_score(
            query, k=k, filter=filter, **kwargs
        )
        return [doc for doc, _ in docs_and_scores]

    @classmethod
    def from_texts(
        cls: Any,
        texts: List[str],
        embedding: Embeddings,
        metadatas: Optional[List[dict]] = None,
        **kwargs: Any,
    ) -> Any:
        """Not implemented."""
        raise Exception("Not implemented.")
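`add_texts` above computes the number of batches as `(len(texts) - 1) // batch_size + 1` (ceiling division) and inserts each slice separately. A standalone sketch of that batch arithmetic, independent of KDB.AI:

```python
from typing import List


def batches(items: List[str], batch_size: int) -> List[List[str]]:
    """Split items into consecutive batches of at most batch_size elements."""
    # Ceiling division: e.g. 5 items with batch_size=2 -> 3 batches.
    nbatches = (len(items) - 1) // batch_size + 1
    return [items[i * batch_size:(i + 1) * batch_size] for i in range(nbatches)]


assert batches(["a", "b", "c", "d", "e"], 2) == [["a", "b"], ["c", "d"], ["e"]]
```

The final batch is allowed to be shorter, which is why the per-batch ids in `add_texts` are generated with `len(batch)` rather than `batch_size`.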
@@ -209,6 +209,7 @@ class MongoDBAtlasVectorSearch(VectorStore):
         for res in cursor:
             text = res.pop(self._text_key)
             score = res.pop("score")
+            del res["embedding"]
             docs.append((Document(page_content=text, metadata=res), score))
         return docs

@@ -221,11 +222,8 @@ class MongoDBAtlasVectorSearch(VectorStore):
     ) -> List[Tuple[Document, float]]:
         """Return MongoDB documents most similar to the given query and their scores.

-        Uses the $vectorSearch stage
-        performs aNN search on a vector in the specified field.
-        Index the field as "vector" using Atlas Vector Search "vectorSearch" index type
-
-        For more info : https://www.mongodb.com/docs/atlas/atlas-vector-search/vector-search-stage/
+        Uses the vectorSearch operator available in MongoDB Atlas Search.
+        For more: https://www.mongodb.com/docs/atlas/atlas-vector-search/vector-search-stage/

         Args:
             query: Text to look up documents similar to.
@@ -233,7 +231,7 @@ class MongoDBAtlasVectorSearch(VectorStore):
             pre_filter: (Optional) dictionary of argument(s) to prefilter document
                 fields on.
             post_filter_pipeline: (Optional) Pipeline of MongoDB aggregation stages
-                following the vector Search.
+                following the vectorSearch stage.

         Returns:
             List of documents most similar to the query and their scores.
@@ -257,11 +255,8 @@ class MongoDBAtlasVectorSearch(VectorStore):
     ) -> List[Document]:
         """Return MongoDB documents most similar to the given query.

-        Uses the $vectorSearch stage
-        performs aNN search on a vector in the specified field.
-        Index the field as "vector" using Atlas Vector Search "vectorSearch" index type
-
-        For more info : https://www.mongodb.com/docs/atlas/atlas-vector-search/vector-search-stage/
+        Uses the vectorSearch operator available in MongoDB Atlas Search.
+        For more: https://www.mongodb.com/docs/atlas/atlas-vector-search/vector-search-stage/

         Args:
             query: Text to look up documents similar to.
@@ -269,17 +264,22 @@ class MongoDBAtlasVectorSearch(VectorStore):
             pre_filter: (Optional) dictionary of argument(s) to prefilter document
                 fields on.
             post_filter_pipeline: (Optional) Pipeline of MongoDB aggregation stages
-                following the vector search.
+                following the vectorSearch stage.

         Returns:
             List of documents most similar to the query and their scores.
         """
+        additional = kwargs.get("additional")
         docs_and_scores = self.similarity_search_with_score(
             query,
             k=k,
             pre_filter=pre_filter,
             post_filter_pipeline=post_filter_pipeline,
         )
+
+        if additional and "similarity_score" in additional:
+            for doc, score in docs_and_scores:
+                doc.metadata["score"] = score
         return [doc for doc, _ in docs_and_scores]

     def max_marginal_relevance_search(
@@ -309,7 +309,7 @@ class MongoDBAtlasVectorSearch(VectorStore):
             pre_filter: (Optional) dictionary of argument(s) to prefilter on document
                 fields.
             post_filter_pipeline: (Optional) pipeline of MongoDB aggregation stages
-                following the vector search.
+                following the vectorSearch stage.
         Returns:
             List of documents selected by maximal marginal relevance.
         """
@@ -38,7 +38,7 @@ class PGVecto_rs(VectorStore):
         except ImportError as e:
             raise ImportError(
                 "Unable to import pgvector_rs.sdk, please install with "
-                '`pip install "pgvector_rs[sdk]"`.'
+                '`pip install "pgvecto_rs[sdk]"`.'
             ) from e
         self._store = PGVectoRs(
             db_url=db_url,
Some files were not shown because too many files have changed in this diff.