mirror of
https://github.com/hwchase17/langchain.git
synced 2025-09-25 13:07:58 +00:00
docs: titles fix (#17206)
Several notebooks have Title != file name, which results in corrupted sorting in the Navbar (ToC).
- Fixed titles and file names.
- Changed text formats to a consistent form.
- Redirected renamed files in `Vercel.json`.
@@ -4,8 +4,9 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "# AliCloud PAI EAS\n",
-    "Machine Learning Platform for AI of Alibaba Cloud is a machine learning or deep learning engineering platform intended for enterprises and developers. It provides easy-to-use, cost-effective, high-performance, and easy-to-scale plug-ins that can be applied to various industry scenarios. With over 140 built-in optimization algorithms, Machine Learning Platform for AI provides whole-process AI engineering capabilities including data labeling (PAI-iTAG), model building (PAI-Designer and PAI-DSW), model training (PAI-DLC), compilation optimization, and inference deployment (PAI-EAS). PAI-EAS supports different types of hardware resources, including CPUs and GPUs, and features high throughput and low latency. It allows you to deploy large-scale complex models with a few clicks and perform elastic scale-ins and scale-outs in real time. It also provides a comprehensive O&M and monitoring system."
+    "# Alibaba Cloud PAI EAS\n",
+    "\n",
+    ">[Machine Learning Platform for AI of Alibaba Cloud](https://www.alibabacloud.com/help/en/pai) is a machine learning or deep learning engineering platform intended for enterprises and developers. It provides easy-to-use, cost-effective, high-performance, and easy-to-scale plug-ins that can be applied to various industry scenarios. With over 140 built-in optimization algorithms, `Machine Learning Platform for AI` provides whole-process AI engineering capabilities including data labeling (`PAI-iTAG`), model building (`PAI-Designer` and `PAI-DSW`), model training (`PAI-DLC`), compilation optimization, and inference deployment (`PAI-EAS`). `PAI-EAS` supports different types of hardware resources, including CPUs and GPUs, and features high throughput and low latency. It allows you to deploy large-scale complex models with a few clicks and perform elastic scale-ins and scale-outs in real time. It also provides a comprehensive O&M and monitoring system."
    ]
   },
   {
@@ -29,7 +30,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "One who want to use eas llms must set up eas service first. When the eas service is launched, eas_service_rul and eas_service token can be got. Users can refer to https://www.alibabacloud.com/help/en/pai/user-guide/service-deployment/ for more information,"
+    "One who wants to use EAS LLMs must set up EAS service first. When the EAS service is launched, `EAS_SERVICE_URL` and `EAS_SERVICE_TOKEN` can be obtained. Users can refer to https://www.alibabacloud.com/help/en/pai/user-guide/service-deployment/ for more information,"
    ]
   },
   {
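As an aside, the `EAS_SERVICE_URL` and `EAS_SERVICE_TOKEN` values the corrected cell refers to are typically exported as environment variables and read back before constructing the LLM wrapper. A minimal sketch (the `eas_settings` helper and the placeholder values are illustrative, not part of the notebook):

```python
import os

# Placeholder values for illustration only; real values come from your EAS deployment.
os.environ.setdefault("EAS_SERVICE_URL", "https://your-eas-instance.example.com/api/predict/service")
os.environ.setdefault("EAS_SERVICE_TOKEN", "your-eas-token")

def eas_settings() -> dict:
    """Hypothetical helper: collect the settings the EAS LLM wrapper needs."""
    url = os.environ.get("EAS_SERVICE_URL")
    token = os.environ.get("EAS_SERVICE_TOKEN")
    if not url or not token:
        raise RuntimeError("Set EAS_SERVICE_URL and EAS_SERVICE_TOKEN before use")
    return {"eas_service_url": url, "eas_service_token": token}

print(sorted(eas_settings()))
```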
@@ -74,7 +75,7 @@
   ],
   "metadata": {
    "kernelspec": {
-    "display_name": "Python 3",
+    "display_name": "Python 3 (ipykernel)",
     "language": "python",
     "name": "python3"
    },
@@ -88,10 +89,9 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.10.11"
-  },
-  "orig_nbformat": 4
+   "version": "3.10.12"
+  }
  },
  "nbformat": 4,
- "nbformat_minor": 2
+ "nbformat_minor": 4
 }
@@ -7,8 +7,9 @@
    "source": [
     "# IBM watsonx.ai\n",
     "\n",
-    "[WatsonxLLM](https://ibm.github.io/watsonx-ai-python-sdk/fm_extensions.html#langchain) is a wrapper for IBM [watsonx.ai](https://www.ibm.com/products/watsonx-ai) foundation models.\n",
-    "This example shows how to communicate with watsonx.ai models using LangChain."
+    ">[WatsonxLLM](https://ibm.github.io/watsonx-ai-python-sdk/fm_extensions.html#langchain) is a wrapper for IBM [watsonx.ai](https://www.ibm.com/products/watsonx-ai) foundation models.\n",
+    "\n",
+    "This example shows how to communicate with `watsonx.ai` models using `LangChain`."
    ]
   },
   {
@@ -16,6 +17,8 @@
    "id": "ea35b2b7",
    "metadata": {},
    "source": [
     "## Setting up\n",
+    "\n",
+    "Install the package [`ibm-watsonx-ai`](https://ibm.github.io/watsonx-ai-python-sdk/install.html)."
    ]
   },
@@ -60,6 +63,7 @@
    "metadata": {},
    "source": [
     "## Load the model\n",
+    "\n",
     "You might need to adjust model `parameters` for different models or tasks. For details, refer to [documentation](https://ibm.github.io/watsonx-ai-python-sdk/fm_model.html#metanames.GenTextParamsMetaNames)."
    ]
   },
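For orientation, the model `parameters` this cell points at are passed as a plain dictionary; the key names follow `GenTextParamsMetaNames` in the `ibm-watsonx-ai` SDK, but the values below are placeholders to adjust per model and task:

```python
# Illustrative watsonx.ai text-generation parameters (placeholder values).
parameters = {
    "decoding_method": "sample",   # or "greedy"
    "max_new_tokens": 100,
    "min_new_tokens": 1,
    "temperature": 0.5,
    "top_k": 50,
    "top_p": 1,
}

print(parameters["decoding_method"])
```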
@@ -328,7 +332,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.10.13"
+   "version": "3.10.12"
   }
  },
  "nbformat": 4,
@@ -6,13 +6,15 @@
    "source": [
     "# SAP HANA Cloud Vector Engine\n",
-    ">SAP HANA Cloud Vector Engine is a vector store fully integrated into the SAP HANA Cloud database."
+    "\n",
+    ">[SAP HANA Cloud Vector Engine](https://www.sap.com/events/teched/news-guide/ai.html#article8) is a vector store fully integrated into the `SAP HANA Cloud` database."
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "## Setting up\n",
+    "\n",
     "Installation of the HANA database driver."
    ]
   },
@@ -32,7 +34,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "To use `OpenAIEmbeddings` so we use the OpenAI API Key."
+    "To use `OpenAIEmbeddings` we use the OpenAI API Key."
    ]
   },
   {
@@ -51,6 +53,44 @@
     "# os.environ[\"OPENAI_API_KEY\"] = \"Your OpenAI API key\""
    ]
   },
   {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Create a database connection to a HANA Cloud instance"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 30,
+   "metadata": {
+    "ExecuteTime": {
+     "end_time": "2023-09-09T08:02:28.174088Z",
+     "start_time": "2023-09-09T08:02:28.162698Z"
+    }
+   },
+   "outputs": [],
+   "source": [
+    "from hdbcli import dbapi\n",
+    "\n",
+    "# Use connection settings from the environment\n",
+    "connection = dbapi.connect(\n",
+    "    address=os.environ.get(\"HANA_DB_ADDRESS\"),\n",
+    "    port=os.environ.get(\"HANA_DB_PORT\"),\n",
+    "    user=os.environ.get(\"HANA_DB_USER\"),\n",
+    "    password=os.environ.get(\"HANA_DB_PASSWORD\"),\n",
+    "    autocommit=True,\n",
+    "    sslValidateCertificate=False,\n",
+    ")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Example"
+   ]
+  },
+  {
    "cell_type": "markdown",
    "metadata": {},
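One thing worth noting about the connection cell added here: `os.environ.get()` silently returns `None` for unset variables, which the driver would only reject at connect time. A small pre-flight check (the `missing_settings` helper is illustrative, not part of the notebook) can fail earlier with a clearer message:

```python
# Hypothetical pre-flight check for the HANA connection settings above.
REQUIRED = ["HANA_DB_ADDRESS", "HANA_DB_PORT", "HANA_DB_USER", "HANA_DB_PASSWORD"]

def missing_settings(env: dict) -> list:
    """Return the names of required connection settings that are unset or empty."""
    return [name for name in REQUIRED if not env.get(name)]

# Example with a partially configured environment:
print(missing_settings({"HANA_DB_ADDRESS": "localhost", "HANA_DB_PORT": "443"}))
```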
@@ -83,37 +123,6 @@
     "embeddings = OpenAIEmbeddings()"
    ]
   },
   {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "Create a database connection to a HANA Cloud instance"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": 30,
-   "metadata": {
-    "ExecuteTime": {
-     "end_time": "2023-09-09T08:02:28.174088Z",
-     "start_time": "2023-09-09T08:02:28.162698Z"
-    }
-   },
-   "outputs": [],
-   "source": [
-    "from hdbcli import dbapi\n",
-    "\n",
-    "# Use connection settings from the environment\n",
-    "connection = dbapi.connect(\n",
-    "    address=os.environ.get(\"HANA_DB_ADDRESS\"),\n",
-    "    port=os.environ.get(\"HANA_DB_PORT\"),\n",
-    "    user=os.environ.get(\"HANA_DB_USER\"),\n",
-    "    password=os.environ.get(\"HANA_DB_PASSWORD\"),\n",
-    "    autocommit=True,\n",
-    "    sslValidateCertificate=False,\n",
-    ")"
-   ]
-  },
-  {
    "cell_type": "markdown",
    "metadata": {},
@@ -161,7 +170,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Perform a query to get the two best matching document chunks from the ones that we added in the previous step.\n",
+    "Perform a query to get the two best-matching document chunks from the ones that we added in the previous step.\n",
     "By default \"Cosine Similarity\" is used for the search."
    ]
   },
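The "Cosine Similarity" this cell mentions can be sketched in plain Python (a toy illustration; the actual computation happens inside the vector engine):

```python
import math

def cosine_similarity(a, b):
    # cos(a, b) = (a . b) / (|a| * |b|); 1.0 means identical direction, 0.0 orthogonal.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

print(round(cosine_similarity([1.0, 2.0], [2.0, 4.0]), 3))  # parallel vectors -> 1.0
```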
@@ -211,12 +220,15 @@
   {
    "cell_type": "markdown",
    "metadata": {
-    "collapsed": false
+    "collapsed": false,
+    "jupyter": {
+     "outputs_hidden": false
+    }
    },
    "source": [
-    "Maximal Marginal Relevance Search (MMR)\n",
+    "## Maximal Marginal Relevance Search (MMR)\n",
     "\n",
-    "Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. First 20 (fetch_k) items will be retrieved from the DB. The MMR algorithm will then find the best 2 (k) matches."
+    "`Maximal marginal relevance` optimizes for similarity to query AND diversity among selected documents. The first 20 (fetch_k) items will be retrieved from the DB. The MMR algorithm will then find the best 2 (k) matches."
    ]
   },
   {
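The fetch_k-then-k reranking this cell describes can be sketched over plain Python lists (a toy illustration; `mmr_select` and `lambda_mult` are names chosen here, not the library API):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def mmr_select(query, candidates, k=2, lambda_mult=0.5):
    """Greedily pick k candidates, trading off similarity to the query
    against similarity to the items already selected."""
    selected, remaining = [], list(range(len(candidates)))
    while remaining and len(selected) < k:
        def score(i):
            relevance = cosine(query, candidates[i])
            redundancy = max((cosine(candidates[i], candidates[j]) for j in selected), default=0.0)
            return lambda_mult * relevance - (1 - lambda_mult) * redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

# Two near-duplicate documents and one different one; a low lambda_mult
# favors diversity, so the near-duplicate of the first pick is skipped.
docs = [[0.9, 0.1], [0.89, 0.12], [0.1, 0.9]]
print(mmr_select([1.0, 0.0], docs, k=2, lambda_mult=0.3))  # -> [0, 2]
```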
@@ -227,7 +239,10 @@
     "end_time": "2023-09-09T08:05:23.276819Z",
     "start_time": "2023-09-09T08:05:21.972256Z"
    },
-   "collapsed": false
+   "collapsed": false,
+   "jupyter": {
+    "outputs_hidden": false
+   }
   },
   "outputs": [],
   "source": [
@@ -346,7 +361,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "### Using a VectorStore as a retriever in chains for retrieval augmented generation (RAG)\n"
+    "## Using a VectorStore as a retriever in chains for retrieval augmented generation (RAG)"
    ]
   },
   {
@@ -505,7 +520,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "### Standard tables vs. \"custom\" tables with vector data"
+    "## Standard tables vs. \"custom\" tables with vector data"
    ]
   },
   {
@@ -513,9 +528,9 @@
    "metadata": {},
    "source": [
     "As default behaviour, the table for the embeddings is created with 3 columns\n",
-    "* A column \"VEC_TEXT\", which contains the text of the Document\n",
-    "* A column \"VEC_METADATA\", which contains the metadata of the Document\n",
-    "* A column \"VEC_VECTOR\", which contains the embeddings-vector of the document's text"
+    "* A column `VEC_TEXT`, which contains the text of the Document\n",
+    "* A column `VEC_METADATA`, which contains the metadata of the Document\n",
+    "* A column `VEC_VECTOR`, which contains the embeddings-vector of the document's text"
    ]
   },
   {
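As a toy picture of the three-column layout this cell describes, a row could be assembled like this before insertion (column names as in the doc; the JSON serialization of the metadata shown here is illustrative, the integration handles storage internally):

```python
import json

def make_row(text: str, metadata: dict, vector: list) -> dict:
    # One row of the default table: document text, serialized metadata, embedding vector.
    return {
        "VEC_TEXT": text,
        "VEC_METADATA": json.dumps(metadata),
        "VEC_VECTOR": vector,
    }

row = make_row("Some document chunk", {"source": "demo.txt"}, [0.1, 0.2, 0.3])
print(sorted(row))
```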
@@ -594,11 +609,11 @@
    "metadata": {},
    "source": [
     "Custom tables must have at least three columns that match the semantics of a standard table\n",
-    "* A column with type \"NCLOB\" or \"NVARCHAR\" for the text/context of the embeddings\n",
-    "* A column with type \"NCLOB\" or \"NVARCHAR\" for the metadata \n",
+    "* A column with type `NCLOB` or `NVARCHAR` for the text/context of the embeddings\n",
+    "* A column with type `NCLOB` or `NVARCHAR` for the metadata \n",
     "* A column with type REAL_VECTOR for the embedding vector\n",
     "\n",
-    "The table can contain additional columns. When new Documents are inserted to the table, these addtional columns must allow NULL values."
+    "The table can contain additional columns. When new Documents are inserted into the table, these additional columns must allow NULL values."
    ]
   },
   {
@@ -654,7 +669,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Add another document and perform a similarity search on the custom table"
+    "Add another document and perform a similarity search on the custom table."
    ]
   },
   {
@@ -4,16 +4,18 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "# **NeuralDB**\n",
-    "NeuralDB is a CPU-friendly and fine-tunable vector store developed by ThirdAI.\n",
+    "# ThirdAI NeuralDB\n",
+    "\n",
+    ">[NeuralDB](https://www.thirdai.com/neuraldb-enterprise/) is a CPU-friendly and fine-tunable vector store developed by [ThirdAI](https://www.thirdai.com/).\n",
     "\n",
-    "### **Initialization**\n",
+    "## Initialization\n",
+    "\n",
     "There are three initialization methods:\n",
     "- From Scratch: Basic model\n",
     "- From Bazaar: Download a pretrained base model from our model bazaar for better performance\n",
     "- From Checkpoint: Load a model that was previously saved\n",
     "\n",
-    "For all of the following initialization methods, the `thirdai_key` parameter can be ommitted if the `THIRDAI_KEY` environment variable is set.\n",
+    "For all of the following initialization methods, the `thirdai_key` parameter can be omitted if the `THIRDAI_KEY` environment variable is set.\n",
     "\n",
     "ThirdAI API keys can be obtained at https://www.thirdai.com/try-bolt/"
    ]
@@ -55,7 +57,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "### **Inserting document sources**"
+    "## Inserting document sources"
    ]
   },
   {
@@ -96,7 +98,8 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "### **Similarity search**\n",
+    "## Similarity search\n",
+    "\n",
     "To query the vectorstore, you can use the standard LangChain vectorstore method `similarity_search`, which returns a list of LangChain Document objects. Each document object represents a chunk of text from the indexed files. For example, it may contain a paragraph from one of the indexed PDF files. In addition to the text, the document's metadata field contains information such as the document's ID, the source of this document (which file it came from), and the score of the document."
    ]
   },
@@ -114,7 +117,8 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "### **Fine tuning**\n",
+    "## Fine tuning\n",
+    "\n",
     "NeuralDBVectorStore can be fine-tuned to user behavior and domain-specific knowledge. It can be fine-tuned in two ways:\n",
     "1. Association: the vectorstore associates a source phrase with a target phrase. When the vectorstore sees the source phrase, it will also consider results that are relevant to the target phrase.\n",
     "2. Upvoting: the vectorstore upweights the score of a document for a specific query. This is useful when you want to fine-tune the vectorstore to user behavior. For example, if a user searches \"how is a car manufactured\" and likes the returned document with id 52, then we can upvote the document with id 52 for the query \"how is a car manufactured\"."
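The two fine-tuning signals described in that cell can be pictured with a toy score table (purely conceptual; `ToyIndex` is an illustration, not the NeuralDB API):

```python
class ToyIndex:
    """Conceptual model of association and upvoting as described above."""

    def __init__(self):
        self.boosts = {}        # (query, doc_id) -> extra score from upvotes
        self.associations = {}  # source phrase -> target phrase

    def associate(self, source: str, target: str):
        # Queries matching `source` will also consider results for `target`.
        self.associations[source] = target

    def upvote(self, query: str, doc_id: int, amount: float = 1.0):
        # Upweight this document for this specific query.
        key = (query, doc_id)
        self.boosts[key] = self.boosts.get(key, 0.0) + amount

    def score(self, query: str, doc_id: int, base: float) -> float:
        return base + self.boosts.get((query, doc_id), 0.0)

idx = ToyIndex()
idx.upvote("how is a car manufactured", 52)
print(idx.score("how is a car manufactured", 52, base=0.4))  # boosted to 1.4
```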
@@ -146,15 +150,23 @@
   ],
   "metadata": {
    "kernelspec": {
-    "display_name": "langchain",
+    "display_name": "Python 3 (ipykernel)",
     "language": "python",
     "name": "python3"
    },
    "language_info": {
+    "codemirror_mode": {
+     "name": "ipython",
+     "version": 3
+    },
+    "file_extension": ".py",
+    "mimetype": "text/x-python",
     "name": "python",
-    "version": "3.10.0"
+    "nbconvert_exporter": "python",
+    "pygments_lexer": "ipython3",
+    "version": "3.10.12"
    }
   },
   "nbformat": 4,
-  "nbformat_minor": 2
+  "nbformat_minor": 4
  }
@@ -1,5 +1,17 @@
 {
  "redirects": [
+  {
+   "source": "/docs/integrations/llms/watsonxllm",
+   "destination": "/docs/integrations/llms/ibm_watsonx"
+  },
+  {
+   "source": "/docs/integrations/llms/pai_eas_endpoint",
+   "destination": "/docs/integrations/llms/alibabacloud_pai_eas_endpoint"
+  },
+  {
+   "source": "/docs/integrations/vectorstores/hanavector",
+   "destination": "/docs/integrations/vectorstores/sap_hanavector"
+  },
   {
    "source": "/docs/use_cases/qa_structured/sql",
    "destination": "/docs/use_cases/sql/"