Johnny Deuss 2023-10-12 16:44:03 +01:00 committed by GitHub
parent 361f8e1bc6
commit bb2ed4615c
136 changed files with 238 additions and 231 deletions

View File

@@ -6,7 +6,7 @@
 "source": [
 "# Elasticsearch\n",
 "\n",
-"[![Open In Collab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/use_cases/qa_structured/integrations/elasticsearch.ipynb)\n",
+"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/use_cases/qa_structured/integrations/elasticsearch.ipynb)\n",
 "\n",
 "We can use LLMs to interact with Elasticsearch analytics databases in natural language.\n",
 "\n",

View File

@@ -66,7 +66,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"# install aditional dependencies\n",
+"# install additional dependencies\n",
 "# ! pip install chromadb openai tiktoken"
 ]
 },

View File

@@ -17,7 +17,7 @@
 "\n",
 "Note that SmartLLMChains\n",
 "- use more LLM passes (ie n+2 instead of just 1)\n",
-"- only work then the underlying LLM has the capability for reflection, whicher smaller models often don't\n",
+"- only work then the underlying LLM has the capability for reflection, which smaller models often don't\n",
 "- only work with underlying models that return exactly 1 output, not multiple\n",
 "\n",
 "This notebook demonstrates how to use a SmartLLMChain."
@@ -241,7 +241,7 @@
 " ideation_llm=ChatOpenAI(temperature=0.9, model_name=\"gpt-4\"),\n",
 " llm=ChatOpenAI(\n",
 " temperature=0, model_name=\"gpt-4\"\n",
-" ), # will be used for critqiue and resolution as no specific llms are given\n",
+" ), # will be used for critique and resolution as no specific llms are given\n",
 " prompt=prompt,\n",
 " n_ideas=3,\n",
 " verbose=True,\n",

View File

@@ -42,7 +42,7 @@ If you are using GitHub pages for hosting, this command is a convenient way to b
 ### Continuous Integration
-Some common defaults for linting/formatting have been set for you. If you integrate your project with an open source Continuous Integration system (e.g. Travis CI, CircleCI), you may check for issues using the following command.
+Some common defaults for linting/formatting have been set for you. If you integrate your project with an open-source Continuous Integration system (e.g. Travis CI, CircleCI), you may check for issues using the following command.
 ```
 $ yarn ci

View File

@@ -91,7 +91,7 @@
 - [Chat with a `CSV` | `LangChain Agents` Tutorial (Beginners)](https://youtu.be/tjeti5vXWOU) by [Alejandro AO - Software & Ai](https://www.youtube.com/@alejandro_ao)
 - [Create Your Own ChatGPT with `PDF` Data in 5 Minutes (LangChain Tutorial)](https://youtu.be/au2WVVGUvc8) by [Liam Ottley](https://www.youtube.com/@LiamOttley)
 - [Build a Custom Chatbot with OpenAI: `GPT-Index` & LangChain | Step-by-Step Tutorial](https://youtu.be/FIDv6nc4CgU) by [Fabrikod](https://www.youtube.com/@fabrikod)
-- [`Flowise` is an open source no-code UI visual tool to build 🦜🔗LangChain applications](https://youtu.be/CovAPtQPU0k) by [Cobus Greyling](https://www.youtube.com/@CobusGreylingZA)
+- [`Flowise` is an open-source no-code UI visual tool to build 🦜🔗LangChain applications](https://youtu.be/CovAPtQPU0k) by [Cobus Greyling](https://www.youtube.com/@CobusGreylingZA)
 - [LangChain & GPT 4 For Data Analysis: The `Pandas` Dataframe Agent](https://youtu.be/rFQ5Kmkd4jc) by [Rabbitmetrics](https://www.youtube.com/@rabbitmetrics)
 - [`GirlfriendGPT` - AI girlfriend with LangChain](https://youtu.be/LiN3D1QZGQw) by [Toolfinder AI](https://www.youtube.com/@toolfinderai)
 - [How to build with Langchain 10x easier | ⛓️ LangFlow & `Flowise`](https://youtu.be/Ya1oGL7ZTvU) by [AI Jason](https://www.youtube.com/@AIJasonZ)

View File

@@ -101,7 +101,7 @@
 "source": [
 "Here the input to prompt is expected to be a map with keys \"context\" and \"question\". The user input is just the question. So we need to get the context using our retriever and passthrough the user input under the \"question\" key.\n",
 "\n",
-"Note that when composing a RunnableMap when another Runnable we don't even need to wrap our dictuionary in the RunnableMap class — the type conversion is handled for us."
+"Note that when composing a RunnableMap when another Runnable we don't even need to wrap our dictionary in the RunnableMap class — the type conversion is handled for us."
 ]
 },
 {

View File

@@ -6,7 +6,7 @@
 "metadata": {},
 "source": [
 "# Custom Pairwise Evaluator\n",
-"[![Open In Collab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/comparison/custom.ipynb)\n",
+"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/comparison/custom.ipynb)\n",
 "\n",
 "You can make your own pairwise string evaluators by inheriting from `PairwiseStringEvaluator` class and overwriting the `_evaluate_string_pairs` method (and the `_aevaluate_string_pairs` method if you want to use the evaluator asynchronously).\n",
 "\n",
@@ -28,7 +28,7 @@
 "from langchain.evaluation import PairwiseStringEvaluator\n",
 "\n",
 "\n",
-"class LengthComparisonPairwiseEvalutor(PairwiseStringEvaluator):\n",
+"class LengthComparisonPairwiseEvaluator(PairwiseStringEvaluator):\n",
 " \"\"\"\n",
 " Custom evaluator to compare two strings.\n",
 " \"\"\"\n",
@@ -66,7 +66,7 @@
 }
 ],
 "source": [
-"evaluator = LengthComparisonPairwiseEvalutor()\n",
+"evaluator = LengthComparisonPairwiseEvaluator()\n",
 "\n",
 "evaluator.evaluate_string_pairs(\n",
 " prediction=\"The quick brown fox jumped over the lazy dog.\",\n",
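The renamed class above wraps a simple scoring rule; it can be sketched without the langchain dependency. The stand-in class below is hypothetical (the real one inherits from `PairwiseStringEvaluator` and implements `_evaluate_string_pairs`); only the comparison logic is reproduced.

```python
# Dependency-free sketch of the length-comparison rule from the notebook
# edited above. Hypothetical stand-in class, not the real
# PairwiseStringEvaluator subclass.
class LengthComparisonPairwiseEvaluator:
    """Prefer the prediction with more whitespace-separated tokens."""

    def evaluate_string_pairs(self, *, prediction: str, prediction_b: str) -> dict:
        score = int(len(prediction.split()) > len(prediction_b.split()))
        return {"score": score}


evaluator = LengthComparisonPairwiseEvaluator()
result = evaluator.evaluate_string_pairs(
    prediction="The quick brown fox jumped over the lazy dog.",
    prediction_b="The quick brown fox jumped over the dog.",
)
# prediction has 9 tokens vs. 8, so result is {"score": 1}
```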

View File

@@ -8,7 +8,7 @@
 },
 "source": [
 "# Pairwise Embedding Distance \n",
-"[![Open In Collab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/comparison/pairwise_embedding_distance.ipynb)\n",
+"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/comparison/pairwise_embedding_distance.ipynb)\n",
 "\n",
 "One way to measure the similarity (or dissimilarity) between two predictions on a shared or similar input is to embed the predictions and compute a vector distance between the two embeddings.<a name=\"cite_ref-1\"></a>[<sup>[1]</sup>](#cite_note-1)\n",
 "\n",
@@ -86,7 +86,7 @@
 "source": [
 "## Select the Distance Metric\n",
 "\n",
-"By default, the evalutor uses cosine distance. You can choose a different distance metric if you'd like. "
+"By default, the evaluator uses cosine distance. You can choose a different distance metric if you'd like. "
 ]
 },
 {
@@ -230,4 +230,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 4
-}
\ No newline at end of file
+}
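Cosine distance, the default metric named in the notebook above, is one minus the cosine similarity of the two embedding vectors; a dependency-free sketch, with toy vectors standing in for real embeddings:

```python
import math

# Cosine distance = 1 - cosine similarity. The short vectors below are
# toy stand-ins for real embedding vectors.
def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

d_parallel = cosine_distance([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])  # parallel -> ≈ 0.0
d_orthogonal = cosine_distance([1.0, 0.0], [0.0, 1.0])          # orthogonal -> ≈ 1.0
```

Near-identical predictions embed in nearly the same direction and score close to 0; unrelated ones approach 1 (opposed directions approach 2).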

View File

@@ -6,13 +6,13 @@
 "metadata": {},
 "source": [
 "# Pairwise String Comparison\n",
-"[![Open In Collab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/comparison/pairwise_string.ipynb)\n",
+"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/comparison/pairwise_string.ipynb)\n",
 "\n",
 "Often you will want to compare predictions of an LLM, Chain, or Agent for a given input. The `StringComparison` evaluators facilitate this so you can answer questions like:\n",
 "\n",
 "- Which LLM or prompt produces a preferred output for a given question?\n",
 "- Which examples should I include for few-shot example selection?\n",
-"- Which output is better to include for fintetuning?\n",
+"- Which output is better to include for fine-tuning?\n",
 "\n",
 "The simplest and often most reliable automated way to choose a preferred prediction for a given input is to use the `pairwise_string` evaluator.\n",
 "\n",
@@ -379,4 +379,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 5
-}
\ No newline at end of file
+}

View File

@@ -5,7 +5,7 @@
 "metadata": {},
 "source": [
 "# Comparing Chain Outputs\n",
-"[![Open In Collab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/examples/comparisons.ipynb)\n",
+"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/examples/comparisons.ipynb)\n",
 "\n",
 "Suppose you have two different prompts (or LLMs). How do you know which will generate \"better\" results?\n",
 "\n",
@@ -16,7 +16,7 @@
 "2. A dataset of inputs\n",
 "3. 2 (or more) LLMs, Chains, or Agents to compare\n",
 "\n",
-"Then we will aggregate the restults to determine the preferred model.\n",
+"Then we will aggregate the results to determine the preferred model.\n",
 "\n",
 "### Step 1. Create the Evaluator\n",
 "\n",
@@ -445,4 +445,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 4
-}
\ No newline at end of file
+}

View File

@@ -6,7 +6,7 @@
 "metadata": {},
 "source": [
 "# Criteria Evaluation\n",
-"[![Open In Collab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/string/criteria_eval_chain.ipynb)\n",
+"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/string/criteria_eval_chain.ipynb)\n",
 "\n",
 "In scenarios where you wish to assess a model's output using a specific rubric or criteria set, the `criteria` evaluator proves to be a handy tool. It allows you to verify if an LLM or Chain's output complies with a defined set of criteria.\n",
 "\n",
@@ -73,7 +73,7 @@
 "- prediction (str) The predicted response.\n",
 "\n",
 "The criteria evaluators return a dictionary with the following values:\n",
-"- score: Binary integeer 0 to 1, where 1 would mean that the output is compliant with the criteria, and 0 otherwise\n",
+"- score: Binary integer 0 to 1, where 1 would mean that the output is compliant with the criteria, and 0 otherwise\n",
 "- value: A \"Y\" or \"N\" corresponding to the score\n",
 "- reasoning: String \"chain of thought reasoning\" from the LLM generated prior to creating the score"
 ]
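The return values listed in the hunk above can be illustrated with a sample dictionary; the reasoning string below is a made-up placeholder, since a real one is generated by the LLM:

```python
# Sample of the dictionary shape a criteria evaluator returns.
# The reasoning text is a placeholder, not real LLM output.
result = {
    "score": 1,    # binary integer: 1 = output complies with the criteria
    "value": "Y",  # "Y"/"N" mirror of the score
    "reasoning": "The submission directly and concisely answers the question.",
}

# score and value correspond to each other by construction
score_from_value = int(result["value"] == "Y")
```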

View File

@@ -6,7 +6,7 @@
 "metadata": {},
 "source": [
 "# Custom String Evaluator\n",
-"[![Open In Collab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/string/custom.ipynb)\n",
+"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/string/custom.ipynb)\n",
 "\n",
 "You can make your own custom string evaluators by inheriting from the `StringEvaluator` class and implementing the `_evaluate_strings` (and `_aevaluate_strings` for async support) methods.\n",
 "\n",

View File

@@ -7,7 +7,7 @@
 },
 "source": [
 "# Embedding Distance\n",
-"[![Open In Collab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/string/embedding_distance.ipynb)\n",
+"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/string/embedding_distance.ipynb)\n",
 "\n",
 "To measure semantic similarity (or dissimilarity) between a prediction and a reference label string, you could use a vector vector distance metric the two embedded representations using the `embedding_distance` evaluator.<a name=\"cite_ref-1\"></a>[<sup>[1]</sup>](#cite_note-1)\n",
 "\n",
@@ -80,7 +80,7 @@
 "source": [
 "## Select the Distance Metric\n",
 "\n",
-"By default, the evalutor uses cosine distance. You can choose a different distance metric if you'd like. "
+"By default, the evaluator uses cosine distance. You can choose a different distance metric if you'd like. "
 ]
 },
 {
@@ -221,4 +221,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 4
-}
\ No newline at end of file
+}

View File

@@ -6,7 +6,7 @@
 "metadata": {},
 "source": [
 "# Exact Match\n",
-"[![Open In Collab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/string/exact_match.ipynb)\n",
+"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/string/exact_match.ipynb)\n",
 "\n",
 "Probably the simplest ways to evaluate an LLM or runnable's string output against a reference label is by a simple string equivalence.\n",
 "\n",

View File

@@ -6,7 +6,7 @@
 "metadata": {},
 "source": [
 "# Regex Match\n",
-"[![Open In Collab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/string/regex_match.ipynb)\n",
+"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/string/regex_match.ipynb)\n",
 "\n",
 "To evaluate chain or runnable string predictions against a custom regex, you can use the `regex_match` evaluator."
 ]
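The idea behind a regex-based check can be sketched with the standard library; the helper below is illustrative, and the real evaluator's matching semantics (e.g. anchoring) may differ:

```python
import re

# Score 1 when the prediction contains a match for the pattern, else 0.
# Illustrative helper, not the evaluator's actual API.
def regex_match(prediction: str, pattern: str) -> dict:
    return {"score": int(bool(re.search(pattern, prediction)))}

result = regex_match("The delivery will arrive on 2024-01-05.", r"\d{4}-\d{2}-\d{2}")
# the ISO-style date matches, so result is {"score": 1}
```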

View File

@@ -6,7 +6,7 @@
 "metadata": {},
 "source": [
 "# String Distance\n",
-"[![Open In Collab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/string/string_distance.ipynb)\n",
+"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/string/string_distance.ipynb)\n",
 "\n",
 "One of the simplest ways to compare an LLM or chain's string output against a reference label is by using string distance measurements such as Levenshtein or postfix distance. This can be used alongside approximate/fuzzy matching criteria for very basic unit testing.\n",
 "\n",

View File

@@ -6,7 +6,7 @@
 "metadata": {},
 "source": [
 "# Custom Trajectory Evaluator\n",
-"[![Open In Collab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/trajectory/custom.ipynb)\n",
+"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/trajectory/custom.ipynb)\n",
 "\n",
 "You can make your own custom trajectory evaluators by inheriting from the [AgentTrajectoryEvaluator](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.schema.AgentTrajectoryEvaluator.html#langchain.evaluation.schema.AgentTrajectoryEvaluator) class and overwriting the `_evaluate_agent_trajectory` (and `_aevaluate_agent_action`) method.\n",
 "\n",

View File

@@ -8,7 +8,7 @@
 },
 "source": [
 "# Agent Trajectory\n",
-"[![Open In Collab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/trajectory/trajectory_eval.ipynb)\n",
+"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/trajectory/trajectory_eval.ipynb)\n",
 "\n",
 "Agents can be difficult to holistically evaluate due to the breadth of actions and generation they can make. We recommend using multiple evaluation techniques appropriate to your use case. One way to evaluate an agent is to look at the whole trajectory of actions taken along with their responses.\n",
 "\n",

View File

@@ -8,7 +8,7 @@
 },
 "source": [
 "# LangSmith Walkthrough\n",
-"[![Open In Collab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/langsmith/walkthrough.ipynb)\n",
+"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/langsmith/walkthrough.ipynb)\n",
 "\n",
 "LangChain makes it easy to prototype LLM applications and Agents. However, delivering LLM applications to production can be deceptively difficult. You will likely have to heavily customize and iterate on your prompts, chains, and other components to create a high-quality product.\n",
 "\n",
@@ -402,7 +402,7 @@
 " # You can select a default one such as \"helpfulness\" or provide your own.\n",
 " RunEvalConfig.LabeledCriteria(\"helpfulness\"),\n",
 " # The LabeledScoreString evaluator outputs a score on a scale from 1-10.\n",
-" # You can use defalut criteria or write our own rubric\n",
+" # You can use default criteria or write our own rubric\n",
 " RunEvalConfig.LabeledScoreString(\n",
 " {\n",
 " \"accuracy\": \"\"\"\n",
@@ -433,7 +433,7 @@
 "Use the [run_on_dataset](https://api.python.langchain.com/en/latest/smith/langchain.smith.evaluation.runner_utils.run_on_dataset.html#langchain.smith.evaluation.runner_utils.run_on_dataset) (or asynchronous [arun_on_dataset](https://api.python.langchain.com/en/latest/smith/langchain.smith.evaluation.runner_utils.arun_on_dataset.html#langchain.smith.evaluation.runner_utils.arun_on_dataset)) function to evaluate your model. This will:\n",
 "1. Fetch example rows from the specified dataset.\n",
 "2. Run your agent (or any custom function) on each example.\n",
-"3. Apply evalutors to the resulting run traces and corresponding reference examples to generate automated feedback.\n",
+"3. Apply evaluators to the resulting run traces and corresponding reference examples to generate automated feedback.\n",
 "\n",
 "The results will be visible in the LangSmith app."
 ]
@@ -756,7 +756,7 @@
 "source": [
 "## Conclusion\n",
 "\n",
-"Congratulations! You have succesfully traced and evaluated an agent using LangSmith!\n",
+"Congratulations! You have successfully traced and evaluated an agent using LangSmith!\n",
 "\n",
 "This was a quick guide to get started, but there are many more ways to use LangSmith to speed up your developer flow and produce better results.\n",
 "\n",

View File

@@ -20,14 +20,14 @@
 "\n",
 "Running an LLM locally requires a few things:\n",
 "\n",
-"1. `Open source LLM`: An open source LLM that can be freely modified and shared \n",
+"1. `Open-source LLM`: An open-source LLM that can be freely modified and shared \n",
 "2. `Inference`: Ability to run this LLM on your device w/ acceptable latency\n",
 "\n",
-"### Open Source LLMs\n",
+"### Open-source LLMs\n",
 "\n",
-"Users can now gain access to a rapidly growing set of [open source LLMs](https://cameronrwolfe.substack.com/p/the-history-of-open-source-llms-better). \n",
+"Users can now gain access to a rapidly growing set of [open-source LLMs](https://cameronrwolfe.substack.com/p/the-history-of-open-source-llms-better). \n",
 "\n",
-"These LLMs can be assessed across at least two dimentions (see figure):\n",
+"These LLMs can be assessed across at least two dimensions (see figure):\n",
 " \n",
 "1. `Base model`: What is the base-model and how was it trained?\n",
 "2. `Fine-tuning approach`: Was the base-model fine-tuned and, if so, what [set of instructions](https://cameronrwolfe.substack.com/p/beyond-llama-the-power-of-open-llms#%C2%A7alpaca-an-instruction-following-llama-model) was used?\n",
@@ -42,7 +42,7 @@
 "\n",
 "### Inference\n",
 "\n",
-"A few frameworks for this have emerged to support inference of open source LLMs on various devices:\n",
+"A few frameworks for this have emerged to support inference of open-source LLMs on various devices:\n",
 "\n",
 "1. [`llama.cpp`](https://github.com/ggerganov/llama.cpp): C++ implementation of llama inference code with [weight optimization / quantization](https://finbarr.ca/how-is-llama-cpp-possible/)\n",
 "2. [`gpt4all`](https://docs.gpt4all.io/index.html): Optimized C backend for inference\n",
@@ -164,7 +164,7 @@
 "\n",
 "See the [`llama.cpp`](docs/integrations/llms/llamacpp) setup [here](https://github.com/abetlen/llama-cpp-python/blob/main/docs/install/macos.md) to enable this.\n",
 "\n",
-"In particular, ensure that conda is using the correct virtual enviorment that you created (`miniforge3`).\n",
+"In particular, ensure that conda is using the correct virtual environment that you created (`miniforge3`).\n",
 "\n",
 "E.g., for me:\n",
 "\n",
@@ -574,7 +574,7 @@
 "* `Privacy`: private data (e.g., journals, etc) that a user does not want to share \n",
 "* `Cost`: text preprocessing (extraction/tagging), summarization, and agent simulations are token-use-intensive tasks\n",
 "\n",
-"In addition, [here](https://blog.langchain.dev/using-langsmith-to-support-fine-tuning-of-open-source-llms/) is an overview on fine-tuning, which can utilize open source LLMs."
+"In addition, [here](https://blog.langchain.dev/using-langsmith-to-support-fine-tuning-of-open-source-llms/) is an overview on fine-tuning, which can utilize open-source LLMs."
 ]
 }
 ],

View File

@@ -6,7 +6,7 @@
 "source": [
 "# Data anonymization with Microsoft Presidio\n",
 "\n",
-"[![Open In Collab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/privacy/presidio_data_anonymization/index.ipynb)\n",
+"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/privacy/presidio_data_anonymization/index.ipynb)\n",
 "\n",
 "## Use case\n",
 "\n",

View File

@@ -14,9 +14,9 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"# Mutli-language data anonymization with Microsoft Presidio\n",
+"# Multi-language data anonymization with Microsoft Presidio\n",
 "\n",
-"[![Open In Collab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/privacy/presidio_data_anonymization/multi_language.ipynb)\n",
+"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/privacy/presidio_data_anonymization/multi_language.ipynb)\n",
 "\n",
 "\n",
 "## Use case\n",

View File

@@ -16,7 +16,7 @@
"source": [ "source": [
"# Reversible data anonymization with Microsoft Presidio\n", "# Reversible data anonymization with Microsoft Presidio\n",
"\n", "\n",
"[![Open In Collab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/privacy/presidio_data_anonymization/reversible.ipynb)\n", "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/privacy/presidio_data_anonymization/reversible.ipynb)\n",
"\n", "\n",
"\n", "\n",
"## Use case\n", "## Use case\n",

View File

@@ -95,7 +95,8 @@
}, },
"outputs": [], "outputs": [],
"source": [ "source": [
"from langchain.prompts import PromptTemplate\nfrom langchain.chains import LLMChain\n", "from langchain.prompts import PromptTemplate\n",
"from langchain.chains import LLMChain\n",
"from langchain.llms.fake import FakeListLLM\n", "from langchain.llms.fake import FakeListLLM\n",
"from langchain_experimental.comprehend_moderation.base_moderation_exceptions import ModerationPiiError\n", "from langchain_experimental.comprehend_moderation.base_moderation_exceptions import ModerationPiiError\n",
"\n", "\n",
@@ -399,7 +400,8 @@
}, },
"outputs": [], "outputs": [],
"source": [ "source": [
"from langchain.prompts import PromptTemplate\nfrom langchain.chains import LLMChain\n", "from langchain.prompts import PromptTemplate\n",
"from langchain.chains import LLMChain\n",
"from langchain.llms.fake import FakeListLLM\n", "from langchain.llms.fake import FakeListLLM\n",
"\n", "\n",
"template = \"\"\"Question: {question}\n", "template = \"\"\"Question: {question}\n",
@@ -565,7 +567,8 @@
"outputs": [], "outputs": [],
"source": [ "source": [
"from langchain.llms import HuggingFaceHub\n", "from langchain.llms import HuggingFaceHub\n",
"from langchain.prompts import PromptTemplate\nfrom langchain.chains import LLMChain\n", "from langchain.prompts import PromptTemplate\n",
"from langchain.chains import LLMChain\n",
"\n", "\n",
"template = \"\"\"Question: {question}\n", "template = \"\"\"Question: {question}\n",
"\n", "\n",
@@ -659,7 +662,7 @@
"---\n", "---\n",
"## With Amazon SageMaker Jumpstart\n", "## With Amazon SageMaker Jumpstart\n",
"\n", "\n",
"The exmaple below shows how to use Amazon Comprehend Moderation chain with an Amazon SageMaker Jumpstart hosted LLM. You should have an Amazon SageMaker Jumpstart hosted LLM endpoint within your AWS Account. " "The example below shows how to use the Amazon Comprehend Moderation chain with an Amazon SageMaker Jumpstart-hosted LLM. You should have an Amazon SageMaker Jumpstart-hosted LLM endpoint within your AWS account. "
] ]
}, },
{ {

View File

@@ -130,7 +130,7 @@
"\n", "\n",
"In this example we unlock more of the power of PromptLayer.\n", "In this example we unlock more of the power of PromptLayer.\n",
"\n", "\n",
"PromptLayer allows you to visually create, version, and track prompt templates. Using the [Prompt Registry](https://docs.promptlayer.com/features/prompt-registry), we can programatically fetch the prompt template called `example`.\n", "PromptLayer allows you to visually create, version, and track prompt templates. Using the [Prompt Registry](https://docs.promptlayer.com/features/prompt-registry), we can programmatically fetch the prompt template called `example`.\n",
"\n", "\n",
"We also define a `pl_id_callback` function which takes in the `promptlayer_request_id` and logs a score, metadata and links the prompt template used. Read more about tracking on [our docs](https://docs.promptlayer.com/features/prompt-history/request-id)." "We also define a `pl_id_callback` function which takes in the `promptlayer_request_id` and logs a score, metadata and links the prompt template used. Read more about tracking on [our docs](https://docs.promptlayer.com/features/prompt-history/request-id)."
] ]

View File

@@ -81,7 +81,7 @@
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Model Version\n", "## Model Version\n",
"Azure OpenAI responses contain `model` property, which is name of the model used to generate the response. However unlike native OpenAI responses, it does not contain the version of the model, which is set on the deplyoment in Azure. This makes it tricky to know which version of the model was used to generate the response, which as result can lead to e.g. wrong total cost calculation with `OpenAICallbackHandler`.\n", "Azure OpenAI responses contain a `model` property, which is the name of the model used to generate the response. However, unlike native OpenAI responses, it does not contain the version of the model, which is set on the deployment in Azure. This makes it tricky to know which version of the model was used to generate the response, which in turn can lead to, e.g., a wrong total cost calculation with `OpenAICallbackHandler`.\n",
"\n", "\n",
"To solve this problem, you can pass `model_version` parameter to `AzureChatOpenAI` class, which will be added to the model name in the llm output. This way you can easily distinguish between different versions of the model." "To solve this problem, you can pass the `model_version` parameter to the `AzureChatOpenAI` class, which will be added to the model name in the LLM output. This way you can easily distinguish between different versions of the model."
] ]
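The version-in-the-name idea described above can be sketched as follows. This is an illustrative helper, not the library's internal code; the exact name/version concatenation shown here is an assumption:

```python
def full_model_name(model, model_version=None):
    # Append the Azure deployment's model version to the base model name,
    # so a cost callback can tell e.g. "gpt-35-turbo-0301" apart from
    # "gpt-35-turbo-0613" and price them differently.
    return f"{model}-{model_version}" if model_version else model
```

With this, a cost table keyed on the full name resolves unambiguously even though Azure's raw response only reports the base model.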

View File

@@ -7,7 +7,7 @@
"source": [ "source": [
"# Baidu Qianfan\n", "# Baidu Qianfan\n",
"\n", "\n",
"Baidu AI Cloud Qianfan Platform is a one-stop large model development and service operation platform for enterprise developers. Qianfan not only provides including the model of Wenxin Yiyan (ERNIE-Bot) and the third-party open source models, but also provides various AI development tools and the whole set of development environment, which facilitates customers to use and develop large model applications easily.\n", "Baidu AI Cloud Qianfan Platform is a one-stop large model development and service operation platform for enterprise developers. Qianfan provides not only models including Wenxin Yiyan (ERNIE-Bot) and third-party open-source models, but also various AI development tools and a whole development environment, which make it easy for customers to use and develop large model applications.\n",
"\n", "\n",
"Basically, those model are split into the following type:\n", "Basically, those models are split into the following types:\n",
"\n", "\n",
@@ -144,10 +144,10 @@
"source": [ "source": [
"## Use different models in Qianfan\n", "## Use different models in Qianfan\n",
"\n", "\n",
"In the case you want to deploy your own model based on Ernie Bot or third-party open sources model, you could follow these steps:\n", "If you want to deploy your own model based on Ernie Bot or a third-party open-source model, you can follow these steps:\n",
"\n", "\n",
"- 1. Optional, if the model are included in the default models, skip itDeploy your model in Qianfan Console, get your own customized deploy endpoint.\n", "- 1. (Optional: if the model is included in the default models, skip this.) Deploy your model in Qianfan Console and get your own customized deploy endpoint.\n",
"- 2. Set up the field called `endpoint` in the initlization:" "- 2. Set up the field called `endpoint` in the initialization:"
] ]
}, },
{ {

View File

@@ -131,7 +131,7 @@
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Using PromptLayer Track\n", "## Using PromptLayer Track\n",
"If you would like to use any of the [PromptLayer tracking features](https://magniv.notion.site/Track-4deee1b1f7a34c1680d085f82567dab9), you need to pass the argument `return_pl_id` when instantializing the PromptLayer LLM to get the request id. " "If you would like to use any of the [PromptLayer tracking features](https://magniv.notion.site/Track-4deee1b1f7a34c1680d085f82567dab9), you need to pass the argument `return_pl_id` when instantiating the PromptLayer LLM to get the request id. "
] ]
}, },
{ {

View File

@@ -15,7 +15,7 @@
"3. Initialize the `DiscordChatLoader` with the file path pointed to the text file.\n", "3. Initialize the `DiscordChatLoader` with the file path pointed to the text file.\n",
"4. Call `loader.load()` (or `loader.lazy_load()`) to perform the conversion.\n", "4. Call `loader.load()` (or `loader.lazy_load()`) to perform the conversion.\n",
"\n", "\n",
"## 1. Creat message dump\n", "## 1. Create message dump\n",
"\n", "\n",
"Currently (2023/08/23) this loader only supports .txt files in the format generated by copying messages in the app to your clipboard and pasting in a file. Below is an example." "Currently (2023/08/23) this loader only supports .txt files in the format generated by copying messages in the app to your clipboard and pasting in a file. Below is an example."
] ]
@@ -266,7 +266,7 @@
"source": [ "source": [
"### Next Steps\n", "### Next Steps\n",
"\n", "\n",
"You can then use these messages how you see fit, such as finetuning a model, few-shot example selection, or directly make predictions for the next message " "You can then use these messages how you see fit, such as fine-tuning a model, few-shot example selection, or directly making predictions for the next message. "
] ]
}, },
{ {

View File

@@ -7,7 +7,7 @@
"source": [ "source": [
"# Facebook Messenger\n", "# Facebook Messenger\n",
"\n", "\n",
"This notebook shows how to load data from Facebook in a format you can finetune on. The overall steps are:\n", "This notebook shows how to load data from Facebook in a format you can fine-tune on. The overall steps are:\n",
"\n", "\n",
"1. Download your messenger data to disk.\n", "1. Download your messenger data to disk.\n",
"2. Create the Chat Loader and call `loader.load()` (or `loader.lazy_load()`) to perform the conversion.\n", "2. Create the Chat Loader and call `loader.load()` (or `loader.lazy_load()`) to perform the conversion.\n",

View File

@@ -7,7 +7,7 @@
"source": [ "source": [
"# GMail\n", "# GMail\n",
"\n", "\n",
"This loader goes over how to load data from GMail. There are many ways you could want to load data from GMail. This loader is currently fairly opionated in how to do so. The way it does it is it first looks for all messages that you have sent. It then looks for messages where you are responding to a previous email. It then fetches that previous email, and creates a training example of that email, followed by your email.\n", "This loader goes over how to load data from GMail. There are many ways you could want to load data from GMail. This loader is currently fairly opinionated in how to do so. It first looks for all messages that you have sent, then looks for messages where you are responding to a previous email. It then fetches that previous email and creates a training example consisting of that email followed by your email.\n",
"\n", "\n",
"Note that there are clear limitations here. For example, all examples created are only looking at the previous email for context.\n", "Note that there are clear limitations here. For example, all examples created are only looking at the previous email for context.\n",
"\n", "\n",
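The pairing logic described above (reply plus the email it answers becomes one training example) can be sketched like this. The dict keys (`id`, `in_reply_to`, `from_me`, `text`) are illustrative assumptions, not Gmail's actual payload format or the loader's API:

```python
def build_training_pairs(messages):
    # Index every message by id, then for each email you sent that replies
    # to a known message, emit a (previous_email, your_reply) example.
    by_id = {m["id"]: m for m in messages}
    pairs = []
    for m in messages:
        parent = by_id.get(m.get("in_reply_to"))
        if m["from_me"] and parent is not None:
            pairs.append((parent["text"], m["text"]))
    return pairs
```

As the note above says, each example only sees the immediately preceding email as context.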

View File

@@ -17,7 +17,7 @@
"\n", "\n",
"## 1. Access Chat DB\n", "## 1. Access Chat DB\n",
"\n", "\n",
"It's likely that your terminal is denied access to `~/Library/Messages`. To use this class, you can copy the DB to an accessible directory (e.g., Documents) and load from there. Alternatively (and not recommended), you can grant full disk access for your terminal emulator in System Settings > Securityand Privacy > Full Disk Access.\n", "It's likely that your terminal is denied access to `~/Library/Messages`. To use this class, you can copy the DB to an accessible directory (e.g., Documents) and load from there. Alternatively (and not recommended), you can grant full disk access for your terminal emulator in System Settings > Security and Privacy > Full Disk Access.\n",
"\n", "\n",
"We have created an example database you can use at [this linked drive file](https://drive.google.com/file/d/1NebNKqTA2NXApCmeH6mu0unJD2tANZzo/view?usp=sharing)." "We have created an example database you can use at [this linked drive file](https://drive.google.com/file/d/1NebNKqTA2NXApCmeH6mu0unJD2tANZzo/view?usp=sharing)."
] ]
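Once the DB is copied somewhere accessible, reading it is plain `sqlite3`. The snippet below uses an in-memory toy stand-in with a drastically simplified schema; the real `chat.db` `message` table has many more columns, so treat this purely as a sketch:

```python
import sqlite3

# Toy stand-in for a copied chat.db: a minimal `message` table with just
# the text and a sent/received flag.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE message (ROWID INTEGER PRIMARY KEY, text TEXT, is_from_me INTEGER)"
)
conn.executemany(
    "INSERT INTO message (text, is_from_me) VALUES (?, ?)",
    [("hey, how are you?", 0), ("doing well, you?", 1)],
)
rows = conn.execute(
    "SELECT text, is_from_me FROM message ORDER BY ROWID"
).fetchall()
```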

View File

@@ -14,9 +14,9 @@
"2. Create the `SlackChatLoader` with the file path pointed to the json file or directory of JSON files\n", "2. Create the `SlackChatLoader` with the file path pointed to the json file or directory of JSON files\n",
"3. Call `loader.load()` (or `loader.lazy_load()`) to perform the conversion. Optionally use `merge_chat_runs` to combine message from the same sender in sequence, and/or `map_ai_messages` to convert messages from the specified sender to the \"AIMessage\" class.\n", "3. Call `loader.load()` (or `loader.lazy_load()`) to perform the conversion. Optionally use `merge_chat_runs` to combine messages from the same sender in sequence, and/or `map_ai_messages` to convert messages from the specified sender to the \"AIMessage\" class.\n",
"\n", "\n",
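The merge step named above can be sketched on plain `(sender, text)` tuples. This is only an illustration of the behaviour described; the real `merge_chat_runs` utility operates on LangChain message objects:

```python
def merge_chat_runs(messages):
    # Combine consecutive messages from the same sender into a single
    # message, joining the texts with newlines.
    merged = []
    for sender, text in messages:
        if merged and merged[-1][0] == sender:
            merged[-1] = (sender, merged[-1][1] + "\n" + text)
        else:
            merged.append((sender, text))
    return merged
```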
"## 1. Creat message dump\n", "## 1. Create message dump\n",
"\n", "\n",
"Currently (2023/08/23) this loader best supports a zip directory of files in the format generated by exporting your a direct message converstion from Slack. Follow up-to-date instructions from slack on how to do so.\n", "Currently (2023/08/23) this loader best supports a zip directory of files in the format generated by exporting a direct message conversation from Slack. Follow up-to-date instructions from Slack on how to do so.\n",
"\n", "\n",
"We have an example in the LangChain repo." "We have an example in the LangChain repo."
] ]
@@ -106,7 +106,7 @@
"source": [ "source": [
"### Next Steps\n", "### Next Steps\n",
"\n", "\n",
"You can then use these messages how you see fit, such as finetuning a model, few-shot example selection, or directly make predictions for the next message. " "You can then use these messages how you see fit, such as fine-tuning a model, few-shot example selection, or directly making predictions for the next message. "
] ]
}, },
{ {

View File

@@ -5,7 +5,7 @@
"id": "735455a6-f82e-4252-b545-27385ef883f4", "id": "735455a6-f82e-4252-b545-27385ef883f4",
"metadata": {}, "metadata": {},
"source": [ "source": [
"# Telegram\n", "# Telegram\n",
"\n", "\n",
"This notebook shows how to use the Telegram chat loader. This class helps map exported Telegram conversations to LangChain chat messages.\n", "This notebook shows how to use the Telegram chat loader. This class helps map exported Telegram conversations to LangChain chat messages.\n",
"\n", "\n",
@@ -14,7 +14,7 @@
"2. Create the `TelegramChatLoader` with the file path pointed to the json file or directory of JSON files\n", "2. Create the `TelegramChatLoader` with the file path pointed to the json file or directory of JSON files\n",
"3. Call `loader.load()` (or `loader.lazy_load()`) to perform the conversion. Optionally use `merge_chat_runs` to combine message from the same sender in sequence, and/or `map_ai_messages` to convert messages from the specified sender to the \"AIMessage\" class.\n", "3. Call `loader.load()` (or `loader.lazy_load()`) to perform the conversion. Optionally use `merge_chat_runs` to combine messages from the same sender in sequence, and/or `map_ai_messages` to convert messages from the specified sender to the \"AIMessage\" class.\n",
"\n", "\n",
"## 1. Creat message dump\n", "## 1. Create message dump\n",
"\n", "\n",
"Currently (2023/08/23) this loader best supports json files in the format generated by exporting your chat history from the [Telegram Desktop App](https://desktop.telegram.org/).\n", "Currently (2023/08/23) this loader best supports json files in the format generated by exporting your chat history from the [Telegram Desktop App](https://desktop.telegram.org/).\n",
"\n", "\n",
@@ -155,7 +155,7 @@
"source": [ "source": [
"### Next Steps\n", "### Next Steps\n",
"\n", "\n",
"You can then use these messages how you see fit, such as finetuning a model, few-shot example selection, or directly make predictions for the next message " "You can then use these messages how you see fit, such as fine-tuning a model, few-shot example selection, or directly making predictions for the next message. "
] ]
}, },
{ {

View File

@@ -7,7 +7,7 @@
"source": [ "source": [
"# Twitter (via Apify)\n", "# Twitter (via Apify)\n",
"\n", "\n",
"This notebook shows how to load chat messages from Twitter to finetune on. We do this by utilizing Apify. \n", "This notebook shows how to load chat messages from Twitter to fine-tune on. We do this by utilizing Apify. \n",
"\n", "\n",
"First, use Apify to export tweets. An example" "First, use Apify to export tweets. An example"
] ]

View File

@@ -7,7 +7,7 @@
"source": [ "source": [
"# WeChat\n", "# WeChat\n",
"\n", "\n",
"There is not yet a straightforward way to export personal WeChat messages. However if you just need no more than few hundrudes of messages for model fine-tuning or few-shot examples, this notebook shows how to create your own chat loader that works on copy-pasted WeChat messages to a list of LangChain messages.\n", "There is not yet a straightforward way to export personal WeChat messages. However, if you just need no more than a few hundred messages for model fine-tuning or few-shot examples, this notebook shows how to create your own chat loader that converts copy-pasted WeChat messages to a list of LangChain messages.\n",
"\n", "\n",
"> Highly inspired by https://python.langchain.com/docs/integrations/chat_loaders/discord\n", "> Highly inspired by https://python.langchain.com/docs/integrations/chat_loaders/discord\n",
"\n", "\n",
@@ -19,7 +19,7 @@
"4. Initialize the `WeChatChatLoader` with the file path pointed to the text file.\n", "4. Initialize the `WeChatChatLoader` with the file path pointed to the text file.\n",
"5. Call `loader.load()` (or `loader.lazy_load()`) to perform the conversion.\n", "5. Call `loader.load()` (or `loader.lazy_load()`) to perform the conversion.\n",
"\n", "\n",
"## 1. Creat message dump\n", "## 1. Create message dump\n",
"\n", "\n",
"This loader only supports .txt files in the format generated by copying messages in the app to your clipboard and pasting in a file. Below is an example." "This loader only supports .txt files in the format generated by copying messages in the app to your clipboard and pasting in a file. Below is an example."
] ]
@@ -249,7 +249,7 @@
"source": [ "source": [
"### Next Steps\n", "### Next Steps\n",
"\n", "\n",
"You can then use these messages how you see fit, such as finetuning a model, few-shot example selection, or directly make predictions for the next message " "You can then use these messages how you see fit, such as fine-tuning a model, few-shot example selection, or directly making predictions for the next message. "
] ]
}, },
{ {

View File

@@ -14,7 +14,7 @@
"2. Create the `WhatsAppChatLoader` with the file path pointed to the json file or directory of JSON files\n", "2. Create the `WhatsAppChatLoader` with the file path pointed to the json file or directory of JSON files\n",
"3. Call `loader.load()` (or `loader.lazy_load()`) to perform the conversion.\n", "3. Call `loader.load()` (or `loader.lazy_load()`) to perform the conversion.\n",
"\n", "\n",
"## 1. Creat message dump\n", "## 1. Create message dump\n",
"\n", "\n",
"To make the export of your WhatsApp conversation(s), complete the following steps:\n", "To make the export of your WhatsApp conversation(s), complete the following steps:\n",
"\n", "\n",
@@ -22,7 +22,7 @@
"2. Click the three dots in the top right corner and select \"More\".\n", "2. Click the three dots in the top right corner and select \"More\".\n",
"3. Then select \"Export chat\" and choose \"Without media\".\n", "3. Then select \"Export chat\" and choose \"Without media\".\n",
"\n", "\n",
"An example of the data format for each converation is below: " "An example of the data format for each conversation is below: "
] ]
}, },
{ {
@@ -64,7 +64,7 @@
"\n", "\n",
"The WhatsAppChatLoader accepts the resulting zip file, unzipped directory, or the path to any of the chat `.txt` files therein.\n", "The WhatsAppChatLoader accepts the resulting zip file, unzipped directory, or the path to any of the chat `.txt` files therein.\n",
"\n", "\n",
"Provide that as well as the user name you want to take on the role of \"AI\" when finetuning." "Provide that as well as the user name you want to take on the role of \"AI\" when fine-tuning."
] ]
}, },
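Choosing which participant plays the "AI" role, as described above, amounts to a simple relabelling. A sketch on plain `(sender, text)` tuples; the real loader emits `AIMessage`/`HumanMessage` objects rather than strings:

```python
def map_ai_messages(messages, ai_sender):
    # Tag each message with the role it takes when fine-tuning: the chosen
    # sender becomes "ai", everyone else "human".
    return [("ai" if sender == ai_sender else "human", text)
            for sender, text in messages]
```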
{ {
@@ -145,7 +145,7 @@
"source": [ "source": [
"### Next Steps\n", "### Next Steps\n",
"\n", "\n",
"You can then use these messages how you see fit, such as finetuning a model, few-shot example selection, or directly make predictions for the next message." "You can then use these messages how you see fit, such as fine-tuning a model, few-shot example selection, or directly making predictions for the next message."
] ]
}, },
{ {

View File

@@ -6,7 +6,7 @@
"source": [ "source": [
"# Apify Dataset\n", "# Apify Dataset\n",
"\n", "\n",
">[Apify Dataset](https://docs.apify.com/platform/storage/dataset) is a scaleable append-only storage with sequential access built for storing structured web scraping results, such as a list of products or Google SERPs, and then export them to various formats like JSON, CSV, or Excel. Datasets are mainly used to save results of [Apify Actors](https://apify.com/store)—serverless cloud programs for varius web scraping, crawling, and data extraction use cases.\n", ">[Apify Dataset](https://docs.apify.com/platform/storage/dataset) is a scalable append-only storage with sequential access, built for storing structured web scraping results, such as a list of products or Google SERPs, and then exporting them to various formats like JSON, CSV, or Excel. Datasets are mainly used to save results of [Apify Actors](https://apify.com/store)—serverless cloud programs for various web scraping, crawling, and data extraction use cases.\n",
"\n", "\n",
"This notebook shows how to load Apify datasets to LangChain.\n", "This notebook shows how to load Apify datasets to LangChain.\n",
"\n", "\n",

View File

@@ -6,7 +6,7 @@
"source": [ "source": [
"# Dropbox\n", "# Dropbox\n",
"\n", "\n",
"[Drobpox](https://en.wikipedia.org/wiki/Dropbox) is a file hosting service that brings everything-traditional files, cloud content, and web shortcuts together in one place.\n", "[Dropbox](https://en.wikipedia.org/wiki/Dropbox) is a file hosting service that brings everything together in one place: traditional files, cloud content, and web shortcuts.\n",
"\n", "\n",
"This notebook covers how to load documents from *Dropbox*. In addition to common files such as text and PDF files, it also supports *Dropbox Paper* files.\n", "This notebook covers how to load documents from *Dropbox*. In addition to common files such as text and PDF files, it also supports *Dropbox Paper* files.\n",
"\n", "\n",
@@ -17,7 +17,7 @@
"3. Generate access token: https://www.dropbox.com/developers/apps/create.\n", "3. Generate access token: https://www.dropbox.com/developers/apps/create.\n",
"4. `pip install dropbox` (requires `pip install unstructured` for PDF filetype).\n", "4. `pip install dropbox` (requires `pip install unstructured` for PDF filetype).\n",
"\n", "\n",
"## Intructions\n", "## Instructions\n",
"\n", "\n",
"`DropboxLoader`` requires you to create a Dropbox App and generate an access token. This can be done from https://www.dropbox.com/developers/apps/create. You also need to have the Dropbox Python SDK installed (pip install dropbox).\n", "`DropboxLoader` requires you to create a Dropbox App and generate an access token. This can be done from https://www.dropbox.com/developers/apps/create. You also need to have the Dropbox Python SDK installed (`pip install dropbox`).\n",
"\n", "\n",

View File

@@ -13,11 +13,11 @@
"\n", "\n",
"## Overview\n", "## Overview\n",
"\n", "\n",
"The `Etherscan` loader use `Etherscan API` to load transacactions histories under specific account on `Ethereum Mainnet`.\n", "The `Etherscan` loader uses the `Etherscan API` to load transaction histories under a specific account on `Ethereum Mainnet`.\n",
"\n", "\n",
"You will need a `Etherscan api key` to proceed. The free api key has 5 calls per seconds quota.\n", "You will need an `Etherscan API key` to proceed. The free API key has a quota of 5 calls per second.\n",
"\n", "\n",
"The loader supports the following six functinalities:\n", "The loader supports the following six functionalities:\n",
"* Retrieve normal transactions under specific account on Ethereum Mainet\n", "* Retrieve normal transactions under a specific account on Ethereum Mainnet\n",
"* Retrieve internal transactions under specific account on Ethereum Mainet\n", "* Retrieve internal transactions under a specific account on Ethereum Mainnet\n",
"* Retrieve erc20 transactions under specific account on Ethereum Mainet\n", "* Retrieve ERC-20 transactions under a specific account on Ethereum Mainnet\n",
@@ -28,7 +28,7 @@
"\n", "\n",
"If the account does not have corresponding transactions, the loader will a list with one document. The content of document is ''.\n", "If the account does not have corresponding transactions, the loader will return a list with one document. The content of the document is ''.\n",
"\n", "\n",
"You can pass differnt filters to loader to access different functionalities we mentioned above:\n", "You can pass different filters to loader to access different functionalities we mentioned above:\n",
"* \"normal_transaction\"\n", "* \"normal_transaction\"\n",
"* \"internal_transaction\"\n", "* \"internal_transaction\"\n",
"* \"erc20_transaction\"\n", "* \"erc20_transaction\"\n",
@@ -41,7 +41,7 @@
"\n", "\n",
"All functions related to transactions histories are restricted 1000 histories maximum because of Etherscan limit. You can use the following parameters to find the transaction histories you need:\n", "All functions related to transaction histories are restricted to 1000 histories maximum because of the Etherscan limit. You can use the following parameters to find the transaction histories you need:\n",
"* offset: default to 20. Shows 20 transactions for one time\n", "* offset: default to 20. Shows 20 transactions at a time.\n",
"* page: default to 1. This controls pagenation.\n", "* page: default to 1. This controls pagination.\n",
"* start_block: Default to 0. The transaction histories starts from 0 block.\n", "* start_block: Default to 0. The transaction histories start from block 0.\n",
"* end_block: Default to 99999999. The transaction histories starts from 99999999 block\n", "* end_block: Default to 99999999. The transaction histories end at block 99999999.\n",
"* sort: \"desc\" or \"asc\". Set default to \"desc\" to get latest transactions." "* sort: \"desc\" or \"asc\". Defaults to \"desc\" to get the latest transactions."
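A sketch of how the options above could map onto Etherscan's HTTP API query parameters. The filter-to-action mapping and defaults mirror the description above, not the loader's actual source, so treat every name here as an assumption:

```python
# Assumed mapping from the loader's filter names to Etherscan API actions.
ACTIONS = {
    "normal_transaction": "txlist",
    "internal_transaction": "txlistinternal",
    "erc20_transaction": "tokentx",
}

def build_params(address, tx_filter="normal_transaction", offset=20, page=1,
                 start_block=0, end_block=99999999, sort="desc",
                 api_key="YOUR_API_KEY"):
    # Defaults follow the parameter list above: 20 transactions per page,
    # page 1, full block range, latest first.
    return {
        "module": "account",
        "action": ACTIONS[tx_filter],
        "address": address,
        "page": page,               # controls pagination
        "offset": offset,           # transactions per page
        "startblock": start_block,
        "endblock": end_block,
        "sort": sort,               # "desc" for latest transactions first
        "apikey": api_key,
    }
```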

View File

@@ -89,7 +89,7 @@
"def generate_code(human_input):\n", "def generate_code(human_input):\n",
" # I have no idea if the Jon Carmack thing makes for better code. YMMV.\n", " # I have no idea if the Jon Carmack thing makes for better code. YMMV.\n",
" # See https://python.langchain.com/en/latest/modules/models/chat/getting_started.html for chat info\n", " # See https://python.langchain.com/en/latest/modules/models/chat/getting_started.html for chat info\n",
" system_prompt_template = \"\"\"You are expert coder Jon Carmack. Use the provided design context to create idomatic HTML/CSS code as possible based on the user request.\n", " system_prompt_template = \"\"\"You are expert coder Jon Carmack. Use the provided design context to create idiomatic HTML/CSS code based on the user request.\n",
" Everything must be inline in one file and your response must be directly renderable by the browser.\n", " Everything must be inline in one file and your response must be directly renderable by the browser.\n",
" Figma file nodes and metadata: {context}\"\"\"\n", " Figma file nodes and metadata: {context}\"\"\"\n",
"\n", "\n",

View File

@@ -7,7 +7,7 @@
"source": [ "source": [
"# Geopandas\n", "# Geopandas\n",
"\n", "\n",
"[Geopandas](https://geopandas.org/en/stable/index.html) is an open source project to make working with geospatial data in python easier. \n", "[Geopandas](https://geopandas.org/en/stable/index.html) is an open-source project to make working with geospatial data in Python easier. \n",
"\n", "\n",
"GeoPandas extends the datatypes used by pandas to allow spatial operations on geometric types. \n", "GeoPandas extends the datatypes used by pandas to allow spatial operations on geometric types. \n",
"\n", "\n",
@@ -95,7 +95,7 @@
"id": "030a535c", "id": "030a535c",
"metadata": {}, "metadata": {},
"source": [ "source": [
"Visiualization of the sample of SF crimne data. " "Visualization of the sample of SF crime data. "
] ]
}, },
{ {

View File

@@ -20,7 +20,7 @@
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"To access the GitHub API, you need a personal access token - you can set up yours here: https://github.com/settings/tokens?type=beta. You can either set this token as the environment variable ``GITHUB_PERSONAL_ACCESS_TOKEN`` and it will be automatically pulled in, or you can pass it in directly at initializaiton as the ``access_token`` named parameter." "To access the GitHub API, you need a personal access token - you can set up yours here: https://github.com/settings/tokens?type=beta. You can either set this token as the environment variable ``GITHUB_PERSONAL_ACCESS_TOKEN`` and it will be automatically pulled in, or you can pass it in directly at initialization as the ``access_token`` named parameter."
] ]
}, },
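The token lookup order described above (explicit argument first, then environment variable) can be sketched as a small helper. `resolve_token` is an illustrative name, not the loader's API:

```python
import os

def resolve_token(access_token=None):
    # Prefer an explicitly passed token; otherwise fall back to the
    # GITHUB_PERSONAL_ACCESS_TOKEN environment variable.
    return access_token or os.environ.get("GITHUB_PERSONAL_ACCESS_TOKEN")
```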
{ {

View File

@@ -8,7 +8,7 @@
"source": [ "source": [
"# Joplin\n", "# Joplin\n",
"\n", "\n",
">[Joplin](https://joplinapp.org/) is an open source note-taking app. Capture your thoughts and securely access them from any device.\n", ">[Joplin](https://joplinapp.org/) is an open-source note-taking app. Capture your thoughts and securely access them from any device.\n",
"\n", "\n",
"This notebook covers how to load documents from a `Joplin` database.\n", "This notebook covers how to load documents from a `Joplin` database.\n",
"\n", "\n",

View File

@@ -54,7 +54,7 @@
"\n", "\n",
"# Or set up access information to use a Mastodon app.\n", "# Or set up access information to use a Mastodon app.\n",
"# Note that the access token can either be passed into\n", "# Note that the access token can either be passed into\n",
"# constructor or you can set the envirovnment \"MASTODON_ACCESS_TOKEN\".\n", "# constructor or you can set the environment \"MASTODON_ACCESS_TOKEN\".\n",
"# loader = MastodonTootsLoader(\n", "# loader = MastodonTootsLoader(\n",
"# access_token=\"<ACCESS TOKEN OF MASTODON APP>\",\n", "# access_token=\"<ACCESS TOKEN OF MASTODON APP>\",\n",
"# api_base_url=\"<API BASE URL OF MASTODON APP INSTANCE>\",\n", "# api_base_url=\"<API BASE URL OF MASTODON APP INSTANCE>\",\n",

View File

@@ -34,7 +34,7 @@
"os.environ['O365_CLIENT_SECRET'] = \"YOUR CLIENT SECRET\"\n", "os.environ['O365_CLIENT_SECRET'] = \"YOUR CLIENT SECRET\"\n",
"```\n", "```\n",
"\n", "\n",
"This loader uses an authentication called [*on behalf of a user*](https://learn.microsoft.com/en-us/graph/auth-v2-user?context=graph%2Fapi%2F1.0&view=graph-rest-1.0). It is a 2 step authentication with user consent. When you instantiate the loader, it will call will print a url that the user must visit to give consent to the app on the required permissions. The user must then visit this url and give consent to the application. Then the user must copy the resulting page url and paste it back on the console. The method will then return True if the login attempt was succesful.\n", "This loader uses an authentication flow called [*on behalf of a user*](https://learn.microsoft.com/en-us/graph/auth-v2-user?context=graph%2Fapi%2F1.0&view=graph-rest-1.0). It is a two-step authentication with user consent. When you instantiate the loader, it will print a URL that the user must visit to give consent to the app for the required permissions. The user must then visit this URL and give consent to the application. Then the user must copy the resulting page URL and paste it back into the console. The method will then return True if the login attempt was successful.\n",
"\n", "\n",
"\n", "\n",
"```python\n", "```python\n",


@ -7,7 +7,7 @@
"source": [ "source": [
"# Source Code\n", "# Source Code\n",
"\n", "\n",
"This notebook covers how to load source code files using a special approach with language parsing: each top-level function and class in the code is loaded into separate documents. Any remaining code top-level code outside the already loaded functions and classes will be loaded into a seperate document.\n", "This notebook covers how to load source code files using a special approach with language parsing: each top-level function and class in the code is loaded into separate documents. Any remaining code top-level code outside the already loaded functions and classes will be loaded into a separate document.\n",
"\n", "\n",
"This approach can potentially improve the accuracy of QA models over source code. Currently, the supported languages for code parsing are Python and JavaScript. The language used for parsing can be configured, along with the minimum number of lines required to activate the splitting based on syntax." "This approach can potentially improve the accuracy of QA models over source code. Currently, the supported languages for code parsing are Python and JavaScript. The language used for parsing can be configured, along with the minimum number of lines required to activate the splitting based on syntax."
] ]
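The splitting strategy described above can be approximated for Python with the standard-library `ast` module; this is an illustrative re-implementation, not the loader's actual code:

```python
import ast

def split_top_level(source: str) -> list:
    """Split source into one chunk per top-level function/class, plus one chunk for the rest."""
    tree = ast.parse(source)
    lines = source.splitlines()
    chunks, covered = [], set()
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            start, end = node.lineno - 1, node.end_lineno
            chunks.append("\n".join(lines[start:end]))
            covered.update(range(start, end))
    # everything not claimed by a function or class goes into one remainder chunk
    remainder = "\n".join(l for i, l in enumerate(lines) if i not in covered and l.strip())
    if remainder:
        chunks.append(remainder)
    return chunks
```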


@ -7,7 +7,7 @@
"source": [ "source": [
"# Weather\n", "# Weather\n",
"\n", "\n",
">[OpenWeatherMap](https://openweathermap.org/) is an open source weather service provider\n", ">[OpenWeatherMap](https://openweathermap.org/) is an open-source weather service provider\n",
"\n", "\n",
"This loader fetches the weather data from the OpenWeatherMap's OneCall API, using the pyowm Python package. You must initialize the loader with your OpenWeatherMap API token and the names of the cities you want the weather data for." "This loader fetches the weather data from the OpenWeatherMap's OneCall API, using the pyowm Python package. You must initialize the loader with your OpenWeatherMap API token and the names of the cities you want the weather data for."
] ]


@ -46,7 +46,7 @@
"* `HFContentFormatter`: Formats request and response data for text-generation Hugging Face models\n", "* `HFContentFormatter`: Formats request and response data for text-generation Hugging Face models\n",
"* `LLamaContentFormatter`: Formats request and response data for LLaMa2\n", "* `LLamaContentFormatter`: Formats request and response data for LLaMa2\n",
"\n", "\n",
"*Note: `OSSContentFormatter` is being deprecated and replaced with `GPT2ContentFormatter`. The logic is the same but `GPT2ContentFormatter` is a more suitable name. You can still continue to use `OSSContentFormatter` as the changes are backwards compatibile.*\n", "*Note: `OSSContentFormatter` is being deprecated and replaced with `GPT2ContentFormatter`. The logic is the same but `GPT2ContentFormatter` is a more suitable name. You can still continue to use `OSSContentFormatter` as the changes are backwards compatible.*\n",
"\n", "\n",
"Below is an example using a summarization model from Hugging Face." "Below is an example using a summarization model from Hugging Face."
] ]
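A content formatter is essentially a pair of request/response translators for one model family; a toy sketch of the shape such a class takes (the class name and JSON field names here are assumptions, not the langchain API):

```python
import json

class SummarizationContentFormatter:
    """Toy formatter: prompt in, JSON request body out; JSON response in, summary text out."""

    def format_request_payload(self, prompt: str, model_kwargs: dict) -> bytes:
        # field names are assumptions for illustration
        return json.dumps({"inputs": [prompt], "parameters": model_kwargs}).encode("utf-8")

    def format_response_payload(self, output: bytes) -> str:
        return json.loads(output)[0]["summary_text"]
```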


@ -7,13 +7,13 @@
"source": [ "source": [
"# Baidu Qianfan\n", "# Baidu Qianfan\n",
"\n", "\n",
"Baidu AI Cloud Qianfan Platform is a one-stop large model development and service operation platform for enterprise developers. Qianfan not only provides including the model of Wenxin Yiyan (ERNIE-Bot) and the third-party open source models, but also provides various AI development tools and the whole set of development environment, which facilitates customers to use and develop large model applications easily.\n", "Baidu AI Cloud Qianfan Platform is a one-stop large model development and service operation platform for enterprise developers. Qianfan not only provides including the model of Wenxin Yiyan (ERNIE-Bot) and the third-party open-source models, but also provides various AI development tools and the whole set of development environment, which facilitates customers to use and develop large model applications easily.\n",
"\n", "\n",
"Basically, those model are split into the following type:\n", "Basically, those model are split into the following type:\n",
"\n", "\n",
"- Embedding\n", "- Embedding\n",
"- Chat\n", "- Chat\n",
"- Coompletion\n", "- Completion\n",
"\n", "\n",
"In this notebook, we will introduce how to use langchain with [Qianfan](https://cloud.baidu.com/doc/WENXINWORKSHOP/index.html) mainly in `Completion` corresponding\n", "In this notebook, we will introduce how to use langchain with [Qianfan](https://cloud.baidu.com/doc/WENXINWORKSHOP/index.html) mainly in `Completion` corresponding\n",
" to the package `langchain/llms` in langchain:\n", " to the package `langchain/llms` in langchain:\n",
@ -24,7 +24,7 @@
"\n", "\n",
"To use the LLM services based on Baidu Qianfan, you have to initialize these parameters:\n", "To use the LLM services based on Baidu Qianfan, you have to initialize these parameters:\n",
"\n", "\n",
"You could either choose to init the AK,SK in enviroment variables or init params:\n", "You could either choose to init the AK,SK in environment variables or init params:\n",
"\n", "\n",
"```base\n", "```base\n",
"export QIANFAN_AK=XXX\n", "export QIANFAN_AK=XXX\n",
@ -158,7 +158,7 @@
"In the case you want to deploy your own model based on EB or serval open sources model, you could follow these steps:\n", "In the case you want to deploy your own model based on EB or serval open sources model, you could follow these steps:\n",
"\n", "\n",
"- 1. Optional, if the model are included in the default models, skip itDeploy your model in Qianfan Console, get your own customized deploy endpoint.\n", "- 1. Optional, if the model are included in the default models, skip itDeploy your model in Qianfan Console, get your own customized deploy endpoint.\n",
"- 2. Set up the field called `endpoint` in the initlization:" "- 2. Set up the field called `endpoint` in the initialization:"
] ]
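Putting the two steps above together, the init kwargs for a custom deployment can be sketched as follows; apart from `endpoint`, which the docs name, the kwarg names here are assumptions for illustration:

```python
import os

def qianfan_init_kwargs(endpoint=None):
    """Assemble illustrative init kwargs: AK/SK from the environment, plus an optional endpoint."""
    kwargs = {
        "qianfan_ak": os.environ.get("QIANFAN_AK"),  # kwarg names are assumptions
        "qianfan_sk": os.environ.get("QIANFAN_SK"),
    }
    if endpoint:  # step 2 above: your customized deployment endpoint
        kwargs["endpoint"] = endpoint
    return kwargs
```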
}, },
{ {


@ -50,7 +50,7 @@
} }
], ],
"source": [ "source": [
"# converstion can take several minutes\n", "# conversation can take several minutes\n",
"!ct2-transformers-converter --model meta-llama/Llama-2-7b-hf --quantization bfloat16 --output_dir ./llama-2-7b-ct2 --force" "!ct2-transformers-converter --model meta-llama/Llama-2-7b-hf --quantization bfloat16 --output_dir ./llama-2-7b-ct2 --force"
] ]
}, },


@ -28,7 +28,8 @@
"source": [ "source": [
"import os\n", "import os\n",
"from langchain.llms import DeepInfra\n", "from langchain.llms import DeepInfra\n",
"from langchain.prompts import PromptTemplate\nfrom langchain.chains import LLMChain" "from langchain.prompts import PromptTemplate\n",
"from langchain.chains import LLMChain"
] ]
}, },
{ {
@ -50,7 +51,7 @@
}, },
"outputs": [ "outputs": [
{ {
"name": "stdin", "name": "stdout",
"output_type": "stream", "output_type": "stream",
"text": [ "text": [
" ········\n" " ········\n"
@ -81,7 +82,7 @@
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Create the DeepInfra instance\n", "## Create the DeepInfra instance\n",
"You can also use our open source [deepctl tool](https://github.com/deepinfra/deepctl#deepctl) to manage your model deployments. You can view a list of available parameters [here](https://deepinfra.com/databricks/dolly-v2-12b#API)." "You can also use our open-source [deepctl tool](https://github.com/deepinfra/deepctl#deepctl) to manage your model deployments. You can view a list of available parameters [here](https://deepinfra.com/databricks/dolly-v2-12b#API)."
] ]
}, },
{ {


@ -7,7 +7,7 @@
"# ForefrontAI\n", "# ForefrontAI\n",
"\n", "\n",
"\n", "\n",
"The `Forefront` platform gives you the ability to fine-tune and use [open source large language models](https://docs.forefront.ai/forefront/master/models).\n", "The `Forefront` platform gives you the ability to fine-tune and use [open-source large language models](https://docs.forefront.ai/forefront/master/models).\n",
"\n", "\n",
"This notebook goes over how to use Langchain with [ForefrontAI](https://www.forefront.ai/).\n" "This notebook goes over how to use Langchain with [ForefrontAI](https://www.forefront.ai/).\n"
] ]
@ -27,7 +27,8 @@
"source": [ "source": [
"import os\n", "import os\n",
"from langchain.llms import ForefrontAI\n", "from langchain.llms import ForefrontAI\n",
"from langchain.prompts import PromptTemplate\nfrom langchain.chains import LLMChain" "from langchain.prompts import PromptTemplate\n",
"from langchain.chains import LLMChain"
] ]
}, },
{ {


@ -61,7 +61,7 @@
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"Optional: Validate your Enviroment variables ```GRADIENT_ACCESS_TOKEN``` and ```GRADIENT_WORKSPACE_ID``` to get currently deployed models." "Optional: Validate your Environment variables ```GRADIENT_ACCESS_TOKEN``` and ```GRADIENT_WORKSPACE_ID``` to get currently deployed models."
] ]
}, },
{ {


@ -15,7 +15,7 @@
"id": "59fcaebc", "id": "59fcaebc",
"metadata": {}, "metadata": {},
"source": [ "source": [
"For more detailed information on `manifest`, and how to use it with local hugginface models like in this example, see https://github.com/HazyResearch/manifest\n", "For more detailed information on `manifest`, and how to use it with local huggingface models like in this example, see https://github.com/HazyResearch/manifest\n",
"\n", "\n",
"Another example of [using Manifest with Langchain](https://github.com/HazyResearch/manifest/blob/main/examples/langchain_chatgpt.html)." "Another example of [using Manifest with Langchain](https://github.com/HazyResearch/manifest/blob/main/examples/langchain_chatgpt.html)."
] ]


@ -7,7 +7,7 @@
"source": [ "source": [
"# MosaicML\n", "# MosaicML\n",
"\n", "\n",
"[MosaicML](https://docs.mosaicml.com/en/latest/inference.html) offers a managed inference service. You can either use a variety of open source models, or deploy your own.\n", "[MosaicML](https://docs.mosaicml.com/en/latest/inference.html) offers a managed inference service. You can either use a variety of open-source models, or deploy your own.\n",
"\n", "\n",
"This example goes over how to use LangChain to interact with MosaicML Inference for text completion." "This example goes over how to use LangChain to interact with MosaicML Inference for text completion."
] ]


@ -334,7 +334,7 @@
"source": [ "source": [
"## Using the Hub for prompt management\n", "## Using the Hub for prompt management\n",
" \n", " \n",
"Open source models often benefit from specific prompts. \n", "Open-source models often benefit from specific prompts. \n",
"\n", "\n",
"For example, [Mistral 7b](https://mistral.ai/news/announcing-mistral-7b/) was fine-tuned for chat using the prompt format shown [here](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1).\n", "For example, [Mistral 7b](https://mistral.ai/news/announcing-mistral-7b/) was fine-tuned for chat using the prompt format shown [here](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1).\n",
"\n", "\n",


@ -6,7 +6,7 @@
"source": [ "source": [
"# Predibase\n", "# Predibase\n",
"\n", "\n",
"[Predibase](https://predibase.com/) allows you to train, finetune, and deploy any ML model—from linear regression to large language model. \n", "[Predibase](https://predibase.com/) allows you to train, fine-tune, and deploy any ML model—from linear regression to large language model. \n",
"\n", "\n",
"This example demonstrates using Langchain with models deployed on Predibase" "This example demonstrates using Langchain with models deployed on Predibase"
] ]


@ -180,7 +180,7 @@
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Using PromptLayer Track\n", "## Using PromptLayer Track\n",
"If you would like to use any of the [PromptLayer tracking features](https://magniv.notion.site/Track-4deee1b1f7a34c1680d085f82567dab9), you need to pass the argument `return_pl_id` when instantializing the PromptLayer LLM to get the request id. " "If you would like to use any of the [PromptLayer tracking features](https://magniv.notion.site/Track-4deee1b1f7a34c1680d085f82567dab9), you need to pass the argument `return_pl_id` when instantiating the PromptLayer LLM to get the request id. "
] ]
}, },
{ {


@ -44,10 +44,10 @@
"## Choose a Model\n", "## Choose a Model\n",
"Takeoff supports many of the most powerful generative text models, such as Falcon, MPT, and Llama. See the [supported models](https://docs.titanml.co/docs/titan-takeoff/supported-models) for more information. For information about using your own models, see the [custom models](https://docs.titanml.co/docs/titan-takeoff/Advanced/custom-models).\n", "Takeoff supports many of the most powerful generative text models, such as Falcon, MPT, and Llama. See the [supported models](https://docs.titanml.co/docs/titan-takeoff/supported-models) for more information. For information about using your own models, see the [custom models](https://docs.titanml.co/docs/titan-takeoff/Advanced/custom-models).\n",
"\n", "\n",
"Going forward in this demo we will be using the falcon 7B instruct model. This is a good open source model that is trained to follow instructions, and is small enough to easily inference even on CPUs.\n", "Going forward in this demo we will be using the falcon 7B instruct model. This is a good open-source model that is trained to follow instructions, and is small enough to easily inference even on CPUs.\n",
"\n", "\n",
"## Taking off\n", "## Taking off\n",
"Models are referred to by their model id on HuggingFace. Takeoff uses port 8000 by default, but can be configured to use another port. There is also support to use a Nvidia GPU by specifing cuda for the device flag.\n", "Models are referred to by their model id on HuggingFace. Takeoff uses port 8000 by default, but can be configured to use another port. There is also support to use a Nvidia GPU by specifying cuda for the device flag.\n",
"\n", "\n",
"To start the takeoff server, run:" "To start the takeoff server, run:"
] ]


@ -1,6 +1,6 @@
# Chaindesk # Chaindesk
>[Chaindesk](https://chaindesk.ai) is an [open source](https://github.com/gmpetrov/databerry) document retrieval platform that helps to connect your personal data with Large Language Models. >[Chaindesk](https://chaindesk.ai) is an [open-source](https://github.com/gmpetrov/databerry) document retrieval platform that helps to connect your personal data with Large Language Models.
## Installation and Setup ## Installation and Setup


@ -570,7 +570,7 @@
"\n", "\n",
"- If you close the ClearML Callback using `clearml_callback.flush_tracker(..., finish=True)` the Callback cannot be used anymore. Make a new one if you want to keep logging.\n", "- If you close the ClearML Callback using `clearml_callback.flush_tracker(..., finish=True)` the Callback cannot be used anymore. Make a new one if you want to keep logging.\n",
"\n", "\n",
"- Check out the rest of the open source ClearML ecosystem, there is a data version manager, a remote execution agent, automated pipelines and much more!\n" "- Check out the rest of the open-source ClearML ecosystem, there is a data version manager, a remote execution agent, automated pipelines and much more!\n"
] ]
}, },
{ {


@ -1,5 +1,5 @@
# CnosDB # CnosDB
> [CnosDB](https://github.com/cnosdb/cnosdb) is an open source distributed time series database with high performance, high compression rate and high ease of use. > [CnosDB](https://github.com/cnosdb/cnosdb) is an open-source distributed time series database with high performance, high compression rate and high ease of use.
## Installation and Setup ## Installation and Setup


@ -19,7 +19,7 @@ See the notebook [Connect to Databricks](/docs/use_cases/qa_structured/integrati
Databricks MLflow integrates with LangChain Databricks MLflow integrates with LangChain
------------------------------------------- -------------------------------------------
MLflow is an open source platform to manage the ML lifecycle, including experimentation, reproducibility, deployment, and a central model registry. See the notebook [MLflow Callback Handler](/docs/integrations/providers/mlflow_tracking) for details about MLflow's integration with LangChain. MLflow is an open-source platform to manage the ML lifecycle, including experimentation, reproducibility, deployment, and a central model registry. See the notebook [MLflow Callback Handler](/docs/integrations/providers/mlflow_tracking) for details about MLflow's integration with LangChain.
Databricks provides a fully managed and hosted version of MLflow integrated with enterprise security features, high availability, and other Databricks workspace features such as experiment and run management and notebook revision capture. MLflow on Databricks offers an integrated experience for tracking and securing machine learning model training runs and running machine learning projects. See [MLflow guide](https://docs.databricks.com/mlflow/index.html) for more details. Databricks provides a fully managed and hosted version of MLflow integrated with enterprise security features, high availability, and other Databricks workspace features such as experiment and run management and notebook revision capture. MLflow on Databricks offers an integrated experience for tracking and securing machine learning model training runs and running machine learning projects. See [MLflow guide](https://docs.databricks.com/mlflow/index.html) for more details.


@ -1,6 +1,6 @@
# Doctran # Doctran
>[Doctran](https://github.com/psychic-api/doctran) is a python package. It uses LLMs and open source >[Doctran](https://github.com/psychic-api/doctran) is a python package. It uses LLMs and open-source
> NLP libraries to transform raw text into clean, structured, information-dense documents > NLP libraries to transform raw text into clean, structured, information-dense documents
> that are optimized for vector space retrieval. You can think of `Doctran` as a black box where > that are optimized for vector space retrieval. You can think of `Doctran` as a black box where
> messy strings go in and nice, clean, labelled strings come out. > messy strings go in and nice, clean, labelled strings come out.


@ -4,7 +4,7 @@ This page covers how to use the [Helicone](https://helicone.ai) ecosystem within
## What is Helicone? ## What is Helicone?
Helicone is an [open source](https://github.com/Helicone/helicone) observability platform that proxies your OpenAI traffic and provides you key insights into your spend, latency and usage. Helicone is an [open-source](https://github.com/Helicone/helicone) observability platform that proxies your OpenAI traffic and provides you key insights into your spend, latency and usage.
![Helicone](/img/HeliconeDashboard.png) ![Helicone](/img/HeliconeDashboard.png)


@ -4,7 +4,7 @@
>`Hologres` supports standard `SQL` syntax, is compatible with `PostgreSQL`, and supports most PostgreSQL functions. Hologres supports online analytical processing (OLAP) and ad hoc analysis for up to petabytes of data, and provides high-concurrency and low-latency online data services. >`Hologres` supports standard `SQL` syntax, is compatible with `PostgreSQL`, and supports most PostgreSQL functions. Hologres supports online analytical processing (OLAP) and ad hoc analysis for up to petabytes of data, and provides high-concurrency and low-latency online data services.
>`Hologres` provides **vector database** functionality by adopting [Proxima](https://www.alibabacloud.com/help/en/hologres/latest/vector-processing). >`Hologres` provides **vector database** functionality by adopting [Proxima](https://www.alibabacloud.com/help/en/hologres/latest/vector-processing).
>`Proxima` is a high-performance software library developed by `Alibaba DAMO Academy`. It allows you to search for the nearest neighbors of vectors. Proxima provides higher stability and performance than similar open source software such as Faiss. Proxima allows you to search for similar text or image embeddings with high throughput and low latency. Hologres is deeply integrated with Proxima to provide a high-performance vector search service. >`Proxima` is a high-performance software library developed by `Alibaba DAMO Academy`. It allows you to search for the nearest neighbors of vectors. Proxima provides higher stability and performance than similar open-source software such as Faiss. Proxima allows you to search for similar text or image embeddings with high throughput and low latency. Hologres is deeply integrated with Proxima to provide a high-performance vector search service.
## Installation and Setup ## Installation and Setup


@ -4,7 +4,7 @@ This page covers how to use the [Log10](https://log10.io) within LangChain.
## What is Log10? ## What is Log10?
Log10 is an [open source](https://github.com/log10-io/log10) proxiless LLM data management and application development platform that lets you log, debug and tag your Langchain calls. Log10 is an [open-source](https://github.com/log10-io/log10) proxiless LLM data management and application development platform that lets you log, debug and tag your Langchain calls.
## Quick start ## Quick start


@ -43,7 +43,7 @@ You can use the PromptLayer request ID to add a prompt, score, or other metadata
This LLM is identical to the [OpenAI](/docs/ecosystem/integrations/openai.html) LLM, except that This LLM is identical to the [OpenAI](/docs/ecosystem/integrations/openai.html) LLM, except that
- all your requests will be logged to your PromptLayer account - all your requests will be logged to your PromptLayer account
- you can add `pl_tags` when instantiating to tag your requests on PromptLayer - you can add `pl_tags` when instantiating to tag your requests on PromptLayer
- you can add `return_pl_id` when instantializing to return a PromptLayer request id to use [while tracking requests](https://magniv.notion.site/Track-4deee1b1f7a34c1680d085f82567dab9). - you can add `return_pl_id` when instantiating to return a PromptLayer request id to use [while tracking requests](https://magniv.notion.site/Track-4deee1b1f7a34c1680d085f82567dab9).
PromptLayer also provides native wrappers for [`PromptLayerChatOpenAI`](/docs/integrations/chat/promptlayer_chatopenai.html) and `PromptLayerOpenAIChat` PromptLayer also provides native wrappers for [`PromptLayerChatOpenAI`](/docs/integrations/chat/promptlayer_chatopenai.html) and `PromptLayerOpenAIChat`


@ -54,7 +54,7 @@ The only way to use a Redis Cluster is with LangChain classes accepting a precon
The Cache wrapper allows for [Redis](https://redis.io) to be used as a remote, low-latency, in-memory cache for LLM prompts and responses. The Cache wrapper allows for [Redis](https://redis.io) to be used as a remote, low-latency, in-memory cache for LLM prompts and responses.
#### Standard Cache #### Standard Cache
The standard cache is the Redis bread & butter of use case in production for both [open source](https://redis.io) and [enterprise](https://redis.com) users globally. The standard cache is the bread and butter of Redis use cases in production for both [open-source](https://redis.io) and [enterprise](https://redis.com) users globally.
To import this cache: To import this cache:
```python ```python


@ -1,6 +1,6 @@
# scikit-learn # scikit-learn
>[scikit-learn](https://scikit-learn.org/stable/) is an open source collection of machine learning algorithms, >[scikit-learn](https://scikit-learn.org/stable/) is an open-source collection of machine learning algorithms,
> including some implementations of the [k nearest neighbors](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.NearestNeighbors.html). `SKLearnVectorStore` wraps this implementation and adds the possibility to persist the vector store in json, bson (binary json) or Apache Parquet format. > including some implementations of the [k nearest neighbors](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.NearestNeighbors.html). `SKLearnVectorStore` wraps this implementation and adds the possibility to persist the vector store in json, bson (binary json) or Apache Parquet format.
## Installation and Setup ## Installation and Setup


@ -1,6 +1,6 @@
# Supabase (Postgres) # Supabase (Postgres)
>[Supabase](https://supabase.com/docs) is an open source `Firebase` alternative. >[Supabase](https://supabase.com/docs) is an open-source `Firebase` alternative.
> `Supabase` is built on top of `PostgreSQL`, which offers strong `SQL` > `Supabase` is built on top of `PostgreSQL`, which offers strong `SQL`
> querying capabilities and enables a simple interface with already-existing tools and frameworks. > querying capabilities and enables a simple interface with already-existing tools and frameworks.


@ -1,6 +1,6 @@
# Tigris # Tigris
> [Tigris](https://tigrisdata.com) is an open source Serverless NoSQL Database and Search Platform designed to simplify building high-performance vector search applications. > [Tigris](https://tigrisdata.com) is an open-source Serverless NoSQL Database and Search Platform designed to simplify building high-performance vector search applications.
> `Tigris` eliminates the infrastructure complexity of managing, operating, and synchronizing multiple tools, allowing you to focus on building great applications instead. > `Tigris` eliminates the infrastructure complexity of managing, operating, and synchronizing multiple tools, allowing you to focus on building great applications instead.
## Installation and Setup ## Installation and Setup


@ -4,7 +4,7 @@ This page covers how to use [TruLens](https://trulens.org) to evaluate and track
## What is TruLens? ## What is TruLens?
TruLens is an [opensource](https://github.com/truera/trulens) package that provides instrumentation and evaluation tools for large language model (LLM) based applications. TruLens is an [open-source](https://github.com/truera/trulens) package that provides instrumentation and evaluation tools for large language model (LLM) based applications.
## Quick start ## Quick start


@ -1,6 +1,6 @@
# Typesense # Typesense
> [Typesense](https://typesense.org) is an open source, in-memory search engine, that you can either > [Typesense](https://typesense.org) is an open-source, in-memory search engine, that you can either
> [self-host](https://typesense.org/docs/guide/install-typesense.html#option-2-local-machine-self-hosting) or run > [self-host](https://typesense.org/docs/guide/install-typesense.html#option-2-local-machine-self-hosting) or run
> on [Typesense Cloud](https://cloud.typesense.org/). > on [Typesense Cloud](https://cloud.typesense.org/).
> `Typesense` focuses on performance by storing the entire index in RAM (with a backup on disk) and also > `Typesense` focuses on performance by storing the entire index in RAM (with a backup on disk) and also


@ -155,7 +155,7 @@
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"query = \"Did he mention who she suceeded\"\n", "query = \"Did he mention who she succeeded\"\n",
"result = qa({\"question\": query})" "result = qa({\"question\": query})"
] ]
}, },
@ -267,7 +267,7 @@
"outputs": [], "outputs": [],
"source": [ "source": [
"chat_history = [(query, result[\"answer\"])]\n", "chat_history = [(query, result[\"answer\"])]\n",
"query = \"Did he mention who she suceeded\"\n", "query = \"Did he mention who she succeeded\"\n",
"result = qa({\"question\": query, \"chat_history\": chat_history})" "result = qa({\"question\": query, \"chat_history\": chat_history})"
] ]
}, },
@ -656,7 +656,7 @@
], ],
"source": [ "source": [
"chat_history = [(query, result[\"answer\"])]\n", "chat_history = [(query, result[\"answer\"])]\n",
"query = \"Did he mention who she suceeded\"\n", "query = \"Did he mention who she succeeded\"\n",
"result = qa({\"question\": query, \"chat_history\": chat_history})" "result = qa({\"question\": query, \"chat_history\": chat_history})"
] ]
}, },


@ -1,6 +1,6 @@
# Weather # Weather
>[OpenWeatherMap](https://openweathermap.org/) is an open source weather service provider. >[OpenWeatherMap](https://openweathermap.org/) is an open-source weather service provider.


@ -9,7 +9,7 @@ What is `Weaviate`?
- Weaviate allows you to store JSON documents in a class property-like fashion while attaching machine learning vectors to these documents to represent them in vector space. - Weaviate allows you to store JSON documents in a class property-like fashion while attaching machine learning vectors to these documents to represent them in vector space.
- Weaviate can be used stand-alone (aka bring your vectors) or with a variety of modules that can do the vectorization for you and extend the core capabilities. - Weaviate can be used stand-alone (aka bring your vectors) or with a variety of modules that can do the vectorization for you and extend the core capabilities.
- Weaviate has a GraphQL-API to access your data easily. - Weaviate has a GraphQL-API to access your data easily.
- We aim to bring your vector search set up to production to query in mere milliseconds (check our [open source benchmarks](https://weaviate.io/developers/weaviate/current/benchmarks/) to see if Weaviate fits your use case). - We aim to bring your vector search set up to production to query in mere milliseconds (check our [open-source benchmarks](https://weaviate.io/developers/weaviate/current/benchmarks/) to see if Weaviate fits your use case).
- Get to know Weaviate in the [basics getting started guide](https://weaviate.io/developers/weaviate/current/core-knowledge/basics.html) in under five minutes. - Get to know Weaviate in the [basics getting started guide](https://weaviate.io/developers/weaviate/current/core-knowledge/basics.html) in under five minutes.
**Weaviate in detail:** **Weaviate in detail:**


@ -26,7 +26,7 @@
"source": [ "source": [
"## Set up Azure Cognitive Search\n", "## Set up Azure Cognitive Search\n",
"\n", "\n",
"To set up ACS, please follow the instrcutions [here](https://learn.microsoft.com/en-us/azure/search/search-create-service-portal).\n", "To set up ACS, please follow the instructions [here](https://learn.microsoft.com/en-us/azure/search/search-create-service-portal).\n",
"\n", "\n",
"Please note\n", "Please note\n",
"1. the name of your ACS service, \n", "1. the name of your ACS service, \n",


@ -137,7 +137,7 @@
"\n", "\n",
"Ive worked on these issues a long time. \n", "Ive worked on these issues a long time. \n",
"\n", "\n",
"I know what works: Investing in crime preventionand community police officers wholl walk the beat, wholl know the neighborhood, and who can restore trust and safety. \n", "I know what works: Investing in crime prevention and community police officers wholl walk the beat, wholl know the neighborhood, and who can restore trust and safety. \n",
"\n", "\n",
"So lets not abandon our streets. Or choose between safety and equal justice.\n", "So lets not abandon our streets. Or choose between safety and equal justice.\n",
"----------------------------------------------------------------------------------------------------\n", "----------------------------------------------------------------------------------------------------\n",
@@ -373,7 +373,7 @@
 "\n",
 "I’ve worked on these issues a long time. \n",
 "\n",
-"I know what works: Investing in crime preventionand community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety. \n",
+"I know what works: Investing in crime prevention and community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety. \n",
 "\n",
 "So let’s not abandon our streets. Or choose between safety and equal justice.\n",
 "----------------------------------------------------------------------------------------------------\n",
@@ -157,7 +157,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"# You can use an additional document transformer to reorder documents after removing redudance.\n",
+"# You can use an additional document transformer to reorder documents after removing redundance.\n",
 "from langchain.document_transformers import LongContextReorder\n",
 "\n",
 "filter = EmbeddingsRedundantFilter(embeddings=filter_embeddings)\n",
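The cell touched by this hunk chains an `EmbeddingsRedundantFilter` into a `LongContextReorder`. The reordering step is easy to sketch in plain Python. This is a hedged sketch of the "lost in the middle" reordering idea, not the actual langchain implementation: given documents sorted most-relevant-first, interleave them so the strongest matches sit at both ends of the context window and the weakest land in the middle.

```python
def lost_in_the_middle_reorder(docs):
    """Reorder docs (given most-relevant-first) so the most relevant
    items end up at the beginning and end of the list, and the least
    relevant sit in the middle."""
    docs = list(reversed(docs))  # least relevant first
    reordered = []
    for i, doc in enumerate(docs):
        if i % 2 == 1:
            reordered.append(doc)      # odd positions go to the back
        else:
            reordered.insert(0, doc)   # even positions go to the front
    return reordered

print(lost_in_the_middle_reorder(["d1", "d2", "d3", "d4", "d5"]))
# -> ['d1', 'd3', 'd5', 'd4', 'd2'] (top two docs at the two ends)
```

The design choice addresses a known LLM failure mode: models attend best to the start and end of a long prompt, so the least relevant retrieved documents are pushed toward the middle.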
@@ -11,7 +11,7 @@
 "\n",
 "This notebook goes over how to use a retriever that under the hood uses Pinecone and Hybrid Search.\n",
 "\n",
-"The logic of this retriever is taken from [this documentaion](https://docs.pinecone.io/docs/hybrid-search)\n",
+"The logic of this retriever is taken from [this documentation](https://docs.pinecone.io/docs/hybrid-search)\n",
 "\n",
 "To use Pinecone, you must have an API key and an Environment. \n",
 "Here are the [installation instructions](https://docs.pinecone.io/docs/quickstart)."
@@ -140,7 +140,7 @@
 " dimension=1536, # dimensionality of dense model\n",
 " metric=\"dotproduct\", # sparse values supported only for dotproduct\n",
 " pod_type=\"s1\",\n",
-" metadata_config={\"indexed\": []}, # see explaination above\n",
+" metadata_config={\"indexed\": []}, # see explanation above\n",
 ")"
 ]
 },
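The `metric=\"dotproduct\"` comment in the hunk above is the key to hybrid search: under a dot-product metric, a combined dense-plus-sparse score decomposes into the sum of two dot products, so a single index can score both signals. A minimal illustration in plain Python (independent of the Pinecone client; names are illustrative):

```python
def dense_dot(a, b):
    """Dot product of two dense vectors given as equal-length lists."""
    return sum(x * y for x, y in zip(a, b))

def sparse_dot(sa, sb):
    """Dot product of two sparse vectors given as {index: value} dicts."""
    return sum(v * sb[i] for i, v in sa.items() if i in sb)

def hybrid_score(dense_q, dense_d, sparse_q, sparse_d):
    # Under a dot-product metric the hybrid score is additive, which is
    # why sparse values are supported only for the dotproduct metric.
    return dense_dot(dense_q, dense_d) + sparse_dot(sparse_q, sparse_d)

score = hybrid_score([0.5, 0.5], [1.0, 0.0], {3: 2.0}, {3: 0.5, 7: 1.0})
print(score)  # dense 0.5 + sparse 1.0 = 1.5
```

With a cosine or Euclidean metric no such additive split exists, which is why the notebook pins the metric when creating the index.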
@@ -7,7 +7,7 @@
 "source": [
 "# Weaviate Hybrid Search\n",
 "\n",
-">[Weaviate](https://weaviate.io/developers/weaviate) is an open source vector database.\n",
+">[Weaviate](https://weaviate.io/developers/weaviate) is an open-source vector database.\n",
 "\n",
 ">[Hybrid search](https://weaviate.io/blog/hybrid-search-explained) is a technique that combines multiple search algorithms to improve the accuracy and relevance of search results. It uses the best features of both keyword-based search algorithms with vector search techniques.\n",
 "\n",
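Hybrid search as described in this hunk blends keyword (BM25-style) and vector scores. Weaviate exposes the blend through an `alpha` weight; a toy version of that weighting (illustrative only, not Weaviate's actual fusion code) looks like:

```python
def hybrid_score(keyword_score, vector_score, alpha=0.5):
    """Blend a keyword score with a vector-similarity score.
    alpha=0 -> pure keyword search, alpha=1 -> pure vector search."""
    return (1 - alpha) * keyword_score + alpha * vector_score

# Rank two candidate documents under an even blend:
# doc_a is a strong keyword match, doc_b a strong vector match.
docs = {"doc_a": (0.9, 0.2), "doc_b": (0.3, 0.9)}
ranked = sorted(docs, key=lambda d: hybrid_score(*docs[d]), reverse=True)
print(ranked)  # -> ['doc_b', 'doc_a']
```

Tuning `alpha` is how you trade off exact term matching against semantic similarity for a given corpus.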
@@ -7,7 +7,7 @@
 "source": [
 "# Baidu Qianfan\n",
 "\n",
-"Baidu AI Cloud Qianfan Platform is a one-stop large model development and service operation platform for enterprise developers. Qianfan not only provides including the model of Wenxin Yiyan (ERNIE-Bot) and the third-party open source models, but also provides various AI development tools and the whole set of development environment, which facilitates customers to use and develop large model applications easily.\n",
+"Baidu AI Cloud Qianfan Platform is a one-stop large model development and service operation platform for enterprise developers. Qianfan not only provides including the model of Wenxin Yiyan (ERNIE-Bot) and the third-party open-source models, but also provides various AI development tools and the whole set of development environment, which facilitates customers to use and develop large model applications easily.\n",
 "\n",
 "Basically, those model are split into the following type:\n",
 "\n",
@@ -24,7 +24,7 @@
 "\n",
 "To use the LLM services based on Baidu Qianfan, you have to initialize these parameters:\n",
 "\n",
-"You could either choose to init the AK,SK in enviroment variables or init params:\n",
+"You could either choose to init the AK,SK in environment variables or init params:\n",
 "\n",
 "```base\n",
 "export QIANFAN_AK=XXX\n",
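The cell in this hunk initializes Qianfan credentials from environment variables via shell `export`. The equivalent pattern from inside Python (placeholder values, not real keys) is simply:

```python
import os

# Placeholder credentials -- substitute your real Qianfan AK/SK.
os.environ["QIANFAN_AK"] = "your_ak"
os.environ["QIANFAN_SK"] = "your_sk"

def qianfan_keys():
    """Read the access-key pair that the Qianfan client expects
    to find in the environment."""
    return os.environ["QIANFAN_AK"], os.environ["QIANFAN_SK"]

ak, sk = qianfan_keys()
```

Setting credentials through the environment keeps secrets out of notebook source and lets the same cell run unchanged across deployments.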
@@ -97,7 +97,7 @@
 "In the case you want to deploy your own model based on Ernie Bot or third-party open sources model, you could follow these steps:\n",
 "\n",
 "- 1. （Optional, if the model are included in the default models, skip it）Deploy your model in Qianfan Console, get your own customized deploy endpoint.\n",
-"- 2. Set up the field called `endpoint` in the initlization:"
+"- 2. Set up the field called `endpoint` in the initialization:"
 ]
 },
 {
@@ -8,7 +8,7 @@
 "\n",
 ">[Vertex AI PaLM API](https://cloud.google.com/vertex-ai/docs/generative-ai/learn/overview) is a service on Google Cloud exposing the embedding models. \n",
 "\n",
-"Note: This integration is seperate from the Google PaLM integration.\n",
+"Note: This integration is separate from the Google PaLM integration.\n",
 "\n",
 "By default, Google Cloud [does not use](https://cloud.google.com/vertex-ai/docs/generative-ai/data-governance#foundation_model_development) Customer Data to train its foundation models as part of Google Cloud`s AI/ML Privacy Commitment. More details about how Google processes data can also be found in [Google's Customer Data Processing Addendum (CDPA)](https://cloud.google.com/terms/data-processing-addendum).\n",
 "\n",
@@ -57,7 +57,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Optional: Validate your Environment variables ```GRADIENT_ACCESS_TOKEN``` and ```GRADIENT_WORKSPACE_ID``` to get currently deployed models. Using the `gradientai` Python package."
+"Optional: Validate your environment variables ```GRADIENT_ACCESS_TOKEN``` and ```GRADIENT_WORKSPACE_ID``` to get currently deployed models. Using the `gradientai` Python package."
 ]
 },
 {
@@ -6,7 +6,7 @@
 "source": [
 "# MosaicML\n",
 "\n",
-">[MosaicML](https://docs.mosaicml.com/en/latest/inference.html) offers a managed inference service. You can either use a variety of open source models, or deploy your own.\n",
+">[MosaicML](https://docs.mosaicml.com/en/latest/inference.html) offers a managed inference service. You can either use a variety of open-source models, or deploy your own.\n",
 "\n",
 "This example goes over how to use LangChain to interact with `MosaicML` Inference for text embedding."
 ]
@@ -4,7 +4,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"# ClickUp Langchiain Toolkit"
+"# ClickUp Langchain Toolkit"
 ]
 },
 {
@@ -15,11 +15,11 @@
 "3. Set your environmental variables\n",
 "4. Pass the tools to your agent with `toolkit.get_tools()`\n",
 "\n",
-"Each of these steps will be explained in greate detail below.\n",
+"Each of these steps will be explained in great detail below.\n",
 "\n",
 "1. **Get Issues**- fetches issues from the repository.\n",
 "\n",
-"2. **Get Issue**- feteches details about a specific issue.\n",
+"2. **Get Issue**- fetches details about a specific issue.\n",
 "\n",
 "3. **Comment on Issue**- posts a comment on a specific issue.\n",
 "\n",
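Step 4 in the hunk above hands tools to an agent via `toolkit.get_tools()`. The shape of that pattern can be sketched with stand-in classes; the names below are illustrative, not the real langchain toolkit API:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Tool:
    name: str
    description: str
    func: Callable[[str], str]

class IssueToolkit:
    """Bundles the issue-related tools listed above so an agent can
    receive them all in one call."""
    def __init__(self, issues: List[str]):
        self._issues = issues

    def get_tools(self) -> List[Tool]:
        return [
            Tool("get_issues", "fetches issues from the repository",
                 lambda _: "; ".join(self._issues)),
            Tool("get_issue", "fetches details about a specific issue",
                 lambda n: self._issues[int(n)]),
        ]

tools = IssueToolkit(["fix typo in README", "add docs"]).get_tools()
print([t.name for t in tools])  # -> ['get_issues', 'get_issue']
```

Grouping tools behind `get_tools()` lets the agent constructor stay ignorant of which backend (GitHub, GitLab, ClickUp) supplied them.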
@@ -15,11 +15,11 @@
 "3. Set your environmental variables\n",
 "4. Pass the tools to your agent with `toolkit.get_tools()`\n",
 "\n",
-"Each of these steps will be explained in greate detail below.\n",
+"Each of these steps will be explained in great detail below.\n",
 "\n",
 "1. **Get Issues**- fetches issues from the repository.\n",
 "\n",
-"2. **Get Issue**- feteches details about a specific issue.\n",
+"2. **Get Issue**- fetches details about a specific issue.\n",
 "\n",
 "3. **Comment on Issue**- posts a comment on a specific issue.\n",
 "\n",
@@ -111,7 +111,7 @@
 "id": "54c01168",
 "metadata": {},
 "source": [
-"## Disclamer ⚠️\n",
+"## Disclaimer ⚠️\n",
 "\n",
 "The query chain may generate insert/update/delete queries. When this is not expected, use a custom prompt or create a SQL users without write permissions.\n",
 "\n",
@@ -8,7 +8,7 @@
 "\n",
 ">[AnalyticDB for PostgreSQL](https://www.alibabacloud.com/help/en/analyticdb-for-postgresql/latest/product-introduction-overview) is a massively parallel processing (MPP) data warehousing service that is designed to analyze large volumes of data online.\n",
 "\n",
-">`AnalyticDB for PostgreSQL` is developed based on the open source `Greenplum Database` project and is enhanced with in-depth extensions by `Alibaba Cloud`. AnalyticDB for PostgreSQL is compatible with the ANSI SQL 2003 syntax and the PostgreSQL and Oracle database ecosystems. AnalyticDB for PostgreSQL also supports row store and column store. AnalyticDB for PostgreSQL processes petabytes of data offline at a high performance level and supports highly concurrent online queries.\n",
+">`AnalyticDB for PostgreSQL` is developed based on the open-source `Greenplum Database` project and is enhanced with in-depth extensions by `Alibaba Cloud`. AnalyticDB for PostgreSQL is compatible with the ANSI SQL 2003 syntax and the PostgreSQL and Oracle database ecosystems. AnalyticDB for PostgreSQL also supports row store and column store. AnalyticDB for PostgreSQL processes petabytes of data offline at a high performance level and supports highly concurrent online queries.\n",
 "\n",
 "This notebook shows how to use functionality related to the `AnalyticDB` vector database.\n",
 "To run, you should have an [AnalyticDB](https://www.alibabacloud.com/help/en/analyticdb-for-postgresql/latest/product-introduction-overview) instance up and running:\n",
@@ -18,7 +18,7 @@
 "metadata": {},
 "source": [
 "```{note}\n",
-"NOTE: Annoy is read-only - once the index is built you cannot add any more emebddings!\n",
+"NOTE: Annoy is read-only - once the index is built you cannot add any more embeddings!\n",
 "If you want to progressively add new entries to your VectorStore then better choose an alternative!\n",
 "```"
 ]
@@ -276,7 +276,7 @@
 "data": {
 "text/plain": [
 "[Document(page_content='And I will keep doing everything in my power to crack down on gun trafficking and ghost guns you can buy online and make at home—they have no serial numbers and can’t be traced. \\n\\nAnd I ask Congress to pass proven measures to reduce gun violence. Pass universal background checks. Why should anyone on a terrorist list be able to purchase a weapon? \\n\\nBan assault weapons and high-capacity magazines. \\n\\nRepeal the liability shield that makes gun manufacturers the only industry in America that can’t be sued. \\n\\nThese laws don’t infringe on the Second Amendment. They save lives. \\n\\nThe most fundamental right in America is the right to vote and to have it counted. And it’s under assault. \\n\\nIn state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections. \\n\\nWe cannot let this happen.', metadata={'source': '../../../state_of_the_union.txt'}),\n",
-" Document(page_content='We can’t change how divided we’ve been. But we can change how we move forward—on COVID-19 and other issues we must face together. \\n\\nI recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera. \\n\\nThey were responding to a 9-1-1 call when a man shot and killed them with a stolen gun. \\n\\nOfficer Mora was 27 years old. \\n\\nOfficer Rivera was 22. \\n\\nBoth Dominican Americans who’d grown up on the same streets they later chose to patrol as police officers. \\n\\nI spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves. \\n\\nI’ve worked on these issues a long time. \\n\\nI know what works: Investing in crime preventionand community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety.', metadata={'source': '../../../state_of_the_union.txt'}),\n",
+" Document(page_content='We can’t change how divided we’ve been. But we can change how we move forward—on COVID-19 and other issues we must face together. \\n\\nI recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera. \\n\\nThey were responding to a 9-1-1 call when a man shot and killed them with a stolen gun. \\n\\nOfficer Mora was 27 years old. \\n\\nOfficer Rivera was 22. \\n\\nBoth Dominican Americans who’d grown up on the same streets they later chose to patrol as police officers. \\n\\nI spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves. \\n\\nI’ve worked on these issues a long time. \\n\\nI know what works: Investing in crime prevention and community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety.', metadata={'source': '../../../state_of_the_union.txt'}),\n",
 " Document(page_content='A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \\n\\nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \\n\\nWe can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. \\n\\nWe’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \\n\\nWe’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \\n\\nWe’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.', metadata={'source': '../../../state_of_the_union.txt'}),\n",
 " Document(page_content='So let’s not abandon our streets. Or choose between safety and equal justice. \\n\\nLet’s come together to protect our communities, restore trust, and hold law enforcement accountable. \\n\\nThat’s why the Justice Department required body cameras, banned chokeholds, and restricted no-knock warrants for its officers. \\n\\nThat’s why the American Rescue Plan provided $350 Billion that cities, states, and counties can use to hire more police and invest in proven strategies like community violence interruption—trusted messengers breaking the cycle of violence and trauma and giving young people hope. \\n\\nWe should all agree: The answer is not to Defund the police. The answer is to FUND the police with the resources and training they need to protect our communities. \\n\\nI ask Democrats and Republicans alike: Pass my budget and keep our neighborhoods safe.', metadata={'source': '../../../state_of_the_union.txt'})]"
 ]
@@ -10,7 +10,7 @@
 ">Hologres supports standard SQL syntax, is compatible with PostgreSQL, and supports most PostgreSQL functions. Hologres supports online analytical processing (OLAP) and ad hoc analysis for up to petabytes of data, and provides high-concurrency and low-latency online data services. \n",
 "\n",
 ">Hologres provides **vector database** functionality by adopting [Proxima](https://www.alibabacloud.com/help/en/hologres/latest/vector-processing).\n",
-">Proxima is a high-performance software library developed by Alibaba DAMO Academy. It allows you to search for the nearest neighbors of vectors. Proxima provides higher stability and performance than similar open source software such as Faiss. Proxima allows you to search for similar text or image embeddings with high throughput and low latency. Hologres is deeply integrated with Proxima to provide a high-performance vector search service.\n",
+">Proxima is a high-performance software library developed by Alibaba DAMO Academy. It allows you to search for the nearest neighbors of vectors. Proxima provides higher stability and performance than similar open-source software such as Faiss. Proxima allows you to search for similar text or image embeddings with high throughput and low latency. Hologres is deeply integrated with Proxima to provide a high-performance vector search service.\n",
 "\n",
 "This notebook shows how to use functionality related to the `Hologres Proxima` vector database.\n",
 "Click [here](https://www.alibabacloud.com/zh/product/hologres) to fast deploy a Hologres cloud instance."
@@ -11,7 +11,7 @@
 "See the [LLMRails API documentation ](https://docs.llmrails.com/) for more information on how to use the API.\n",
 "\n",
 "This notebook shows how to use functionality related to the `LLMRails`'s integration with langchain.\n",
-"Note that unlike many other integrations in this category, LLMRails provides an end-to-end managed service for retrieval agumented generation, which includes:\n",
+"Note that unlike many other integrations in this category, LLMRails provides an end-to-end managed service for retrieval augmented generation, which includes:\n",
 "1. A way to extract text from document files and chunk them into sentences.\n",
 "2. Its own embeddings model and vector store - each text segment is encoded into a vector embedding and stored in the LLMRails internal vector store\n",
 "3. A query service that automatically encodes the query into embedding, and retrieves the most relevant text segments (including support for [Hybrid Search](https://docs.llmrails.com/datastores/search))\n",
@@ -10,7 +10,7 @@
 "\n",
 "This notebook shows how to use functionality related to the Marqo vectorstore.\n",
 "\n",
-">[Marqo](https://www.marqo.ai/) is an open-source vector search engine. Marqo allows you to store and query multimodal data such as text and images. Marqo creates the vectors for you using a huge selection of opensource models, you can also provide your own finetuned models and Marqo will handle the loading and inference for you.\n",
+">[Marqo](https://www.marqo.ai/) is an open-source vector search engine. Marqo allows you to store and query multi-modal data such as text and images. Marqo creates the vectors for you using a huge selection of open-source models, you can also provide your own fine-tuned models and Marqo will handle the loading and inference for you.\n",
 "\n",
 "To run this notebook with our docker image please run the following commands first to get Marqo:\n",
 "\n",
@@ -19,7 +19,7 @@
 "id": "43ead5d5-2c1f-4dce-a69a-cb00e4f9d6f0",
 "metadata": {},
 "source": [
-"## Setting up envrionments"
+"## Setting up environments"
 ]
 },
 {
@@ -174,7 +174,7 @@
 "\n",
 "**NOTE**: Please be aware of SQL injection, this interface must not be directly called by end-user.\n",
 "\n",
-"If you custimized your `column_map` under your setting, you search with filter like this:"
+"If you customized your `column_map` under your setting, you search with filter like this:"
 ]
 },
 {
@@ -344,7 +344,7 @@
 "\n",
 "I’ve worked on these issues a long time. \n",
 "\n",
-"I know what works: Investing in crime preventionand community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety.\n",
+"I know what works: Investing in crime prevention and community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety.\n",
 "--------------------------------------------------------------------------------\n",
 "--------------------------------------------------------------------------------\n",
 "Score: 0.2448441215698569\n",
@@ -497,7 +497,7 @@
 "\n",
 "I\u2019ve worked on these issues a long time. \n",
 "\n",
-"I know what works: Investing in crime preventionand community police officers who\u2019ll walk the beat, who\u2019ll know the neighborhood, and who can restore trust and safety. \n",
+"I know what works: Investing in crime prevention and community police officers who\u2019ll walk the beat, who\u2019ll know the neighborhood, and who can restore trust and safety. \n",
 "\n"
 ]
 }