From 11accf8366da81855766b87c98ed731f026a31d2 Mon Sep 17 00:00:00 2001
From: Ofer Mendelevitch
Date: Sat, 30 Dec 2023 14:04:04 -0800
Subject: [PATCH] Community: Newlines before bullets in IPYNB files (Vectara) (#15330)

- **Description:** updated all Vectara IPYNB files so that bullets look okay in docs (added newline)
- **Twitter handle:** @ofermend
---
 .../integrations/providers/vectara/vectara_summary.ipynb | 7 +++++++
 .../retrievers/self_query/vectara_self_query.ipynb       | 4 ++++
 docs/docs/integrations/vectorstores/vectara.ipynb        | 7 +++++++
 3 files changed, 18 insertions(+)

diff --git a/docs/docs/integrations/providers/vectara/vectara_summary.ipynb b/docs/docs/integrations/providers/vectara/vectara_summary.ipynb
index 0147eb57bad..b41e7e5280e 100644
--- a/docs/docs/integrations/providers/vectara/vectara_summary.ipynb
+++ b/docs/docs/integrations/providers/vectara/vectara_summary.ipynb
@@ -10,9 +10,13 @@
     ">[Vectara](https://vectara.com/) is the trusted GenAI platform that provides an easy-to-use API for document indexing and querying. \n",
     "\n",
     "Vectara provides an end-to-end managed service for Retrieval Augmented Generation or [RAG](https://vectara.com/grounded-generation/), which includes:\n",
+    "\n",
     "1. A way to extract text from document files and chunk them into sentences.\n",
+    "\n",
     "2. The state-of-the-art [Boomerang](https://vectara.com/how-boomerang-takes-retrieval-augmented-generation-to-the-next-level-via-grounded-generation/) embeddings model. Each text chunk is encoded into a vector embedding using Boomerang, and stored in the Vectara internal knowledge (vector+text) store\n",
+    "\n",
     "3. A query service that automatically encodes the query into embedding, and retrieves the most relevant text segments (including support for [Hybrid Search](https://docs.vectara.com/docs/api-reference/search-apis/lexical-matching) and [MMR](https://vectara.com/get-diverse-results-and-comprehensive-summaries-with-vectaras-mmr-reranker/))\n",
+    "\n",
     "4. An option to create [generative summary](https://docs.vectara.com/docs/learn/grounded-generation/grounded-generation-overview), based on the retrieved documents, including citations.\n",
     "\n",
     "See the [Vectara API documentation](https://docs.vectara.com/docs/) for more information on how to use the API.\n",
@@ -29,8 +33,11 @@
     "# Setup\n",
     "\n",
     "You will need a Vectara account to use Vectara with LangChain. To get started, use the following steps:\n",
+    "\n",
     "1. [Sign up](https://www.vectara.com/integrations/langchain) for a Vectara account if you don't already have one. Once you have completed your sign up you will have a Vectara customer ID. You can find your customer ID by clicking on your name, on the top-right of the Vectara console window.\n",
+    "\n",
     "2. Within your account you can create one or more corpora. Each corpus represents an area that stores text data upon ingest from input documents. To create a corpus, use the **\"Create Corpus\"** button. You then provide a name to your corpus as well as a description. Optionally you can define filtering attributes and apply some advanced options. If you click on your created corpus, you can see its name and corpus ID right on the top.\n",
+    "\n",
     "3. Next you'll need to create API keys to access the corpus. Click on the **\"Authorization\"** tab in the corpus view and then the **\"Create API Key\"** button. Give your key a name, and choose whether you want query only or query+index for your key. Click \"Create\" and you now have an active API key. Keep this key confidential. \n",
     "\n",
     "To use LangChain with Vectara, you'll need to have these three values: customer ID, corpus ID and api_key.\n",
diff --git a/docs/docs/integrations/retrievers/self_query/vectara_self_query.ipynb b/docs/docs/integrations/retrievers/self_query/vectara_self_query.ipynb
index a4b095149b1..1bc910351a6 100644
--- a/docs/docs/integrations/retrievers/self_query/vectara_self_query.ipynb
+++ b/docs/docs/integrations/retrievers/self_query/vectara_self_query.ipynb
@@ -10,9 +10,13 @@
     ">[Vectara](https://vectara.com/) is the trusted GenAI platform that provides an easy-to-use API for document indexing and querying. \n",
     "\n",
     "Vectara provides an end-to-end managed service for Retrieval Augmented Generation or [RAG](https://vectara.com/grounded-generation/), which includes:\n",
+    "\n",
     "1. A way to extract text from document files and chunk them into sentences.\n",
+    "\n",
     "2. The state-of-the-art [Boomerang](https://vectara.com/how-boomerang-takes-retrieval-augmented-generation-to-the-next-level-via-grounded-generation/) embeddings model. Each text chunk is encoded into a vector embedding using Boomerang, and stored in the Vectara internal knowledge (vector+text) store\n",
+    "\n",
     "3. A query service that automatically encodes the query into embedding, and retrieves the most relevant text segments (including support for [Hybrid Search](https://docs.vectara.com/docs/api-reference/search-apis/lexical-matching) and [MMR](https://vectara.com/get-diverse-results-and-comprehensive-summaries-with-vectaras-mmr-reranker/))\n",
+    "\n",
     "4. An option to create [generative summary](https://docs.vectara.com/docs/learn/grounded-generation/grounded-generation-overview), based on the retrieved documents, including citations.\n",
     "\n",
     "See the [Vectara API documentation](https://docs.vectara.com/docs/) for more information on how to use the API.\n",
diff --git a/docs/docs/integrations/vectorstores/vectara.ipynb b/docs/docs/integrations/vectorstores/vectara.ipynb
index 3294b21019c..24fe802191d 100644
--- a/docs/docs/integrations/vectorstores/vectara.ipynb
+++ b/docs/docs/integrations/vectorstores/vectara.ipynb
@@ -10,9 +10,13 @@
     ">[Vectara](https://vectara.com/) is the trusted GenAI platform that provides an easy-to-use API for document indexing and querying. \n",
     "\n",
     "Vectara provides an end-to-end managed service for Retrieval Augmented Generation or [RAG](https://vectara.com/grounded-generation/), which includes:\n",
+    "\n",
     "1. A way to extract text from document files and chunk them into sentences.\n",
+    "\n",
     "2. The state-of-the-art [Boomerang](https://vectara.com/how-boomerang-takes-retrieval-augmented-generation-to-the-next-level-via-grounded-generation/) embeddings model. Each text chunk is encoded into a vector embedding using Boomerang, and stored in the Vectara internal knowledge (vector+text) store\n",
+    "\n",
     "3. A query service that automatically encodes the query into embedding, and retrieves the most relevant text segments (including support for [Hybrid Search](https://docs.vectara.com/docs/api-reference/search-apis/lexical-matching) and [MMR](https://vectara.com/get-diverse-results-and-comprehensive-summaries-with-vectaras-mmr-reranker/))\n",
+    "\n",
     "4. An option to create [generative summary](https://docs.vectara.com/docs/learn/grounded-generation/grounded-generation-overview), based on the retrieved documents, including citations.\n",
     "\n",
     "See the [Vectara API documentation](https://docs.vectara.com/docs/) for more information on how to use the API.\n",
@@ -28,8 +32,11 @@
     "# Setup\n",
     "\n",
     "You will need a Vectara account to use Vectara with LangChain. To get started, use the following steps:\n",
+    "\n",
     "1. [Sign up](https://www.vectara.com/integrations/langchain) for a Vectara account if you don't already have one. Once you have completed your sign up you will have a Vectara customer ID. You can find your customer ID by clicking on your name, on the top-right of the Vectara console window.\n",
+    "\n",
     "2. Within your account you can create one or more corpora. Each corpus represents an area that stores text data upon ingest from input documents. To create a corpus, use the **\"Create Corpus\"** button. You then provide a name to your corpus as well as a description. Optionally you can define filtering attributes and apply some advanced options. If you click on your created corpus, you can see its name and corpus ID right on the top.\n",
+    "\n",
     "3. Next you'll need to create API keys to access the corpus. Click on the **\"Authorization\"** tab in the corpus view and then the **\"Create API Key\"** button. Give your key a name, and choose whether you want query only or query+index for your key. Click \"Create\" and you now have an active API key. Keep this key confidential. \n",
     "\n",
     "To use LangChain with Vectara, you'll need to have these three values: customer ID, corpus ID and api_key.\n",
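For reference, the setup steps reformatted above end with three values: customer ID, corpus ID, and API key. The sketch below shows one way those values are typically supplied to the LangChain Vectara vectorstore. It assumes the langchain_community import path and the VECTARA_CUSTOMER_ID / VECTARA_CORPUS_ID / VECTARA_API_KEY environment variable names used by that integration around the time of this patch; adjust to your installed version.

# Minimal sketch (not part of the patch): wiring the credentials from the
# notebooks' Setup section into LangChain's Vectara vectorstore.
import os

from langchain_community.vectorstores import Vectara

# Option 1: environment variables, read by the Vectara constructor.
os.environ["VECTARA_CUSTOMER_ID"] = "<YOUR_CUSTOMER_ID>"
os.environ["VECTARA_CORPUS_ID"] = "<YOUR_CORPUS_ID>"
os.environ["VECTARA_API_KEY"] = "<YOUR_API_KEY>"
vectara = Vectara()

# Option 2: pass the values explicitly as keyword arguments.
vectara = Vectara(
    vectara_customer_id="<YOUR_CUSTOMER_ID>",
    vectara_corpus_id="<YOUR_CORPUS_ID>",
    vectara_api_key="<YOUR_API_KEY>",
)

# Chunking, embedding (Boomerang), and retrieval run on Vectara's side,
# so no local embeddings model is configured here.
docs = vectara.similarity_search("What does Vectara's managed RAG service include?")
print(docs[0].page_content)

The environment-variable route keeps the API key out of the notebook itself, which matches the docs' advice to keep the key confidential.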