mirror of https://github.com/hwchase17/langchain.git (synced 2025-06-28 17:38:36 +00:00)

docs: fix links (#15848)

parent: 9c871f427b
commit: 18411c379c
@@ -201,7 +201,7 @@
 "\n",
 "* E.g., for Llama-7b: `ollama pull llama2` will download the most basic version of the model (e.g., smallest # parameters and 4 bit quantization)\n",
 "* We can also specify a particular version from the [model list](https://github.com/jmorganca/ollama?tab=readme-ov-file#model-library), e.g., `ollama pull llama2:13b`\n",
-"* See the full set of parameters on the [API reference page](https://api.python.langchain.com/en/latest/llms/langchain.llms.ollama.Ollama.html)"
+"* See the full set of parameters on the [API reference page](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.ollama.Ollama.html)"
 ]
 },
 {
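For context, a minimal usage sketch of the `Ollama` integration the fixed link documents (assumes a running local Ollama server with `llama2` already pulled; the prompt is illustrative):

```python
# Requires a running Ollama server and a prior `ollama pull llama2`.
from langchain_community.llms import Ollama

llm = Ollama(model="llama2")  # e.g., "llama2:13b" to pin a specific version
print(llm.invoke("Why is the sky blue?"))
```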
@@ -241,7 +241,7 @@
 "\n",
 "As noted above, see the [API reference](https://api.python.langchain.com/en/latest/llms/langchain.llms.llamacpp.LlamaCpp.html?highlight=llamacpp#langchain.llms.llamacpp.LlamaCpp) for the full set of parameters. \n",
 "\n",
-"From the [llama.cpp docs](https://python.langchain.com/docs/integrations/llms/llamacpp), a few are worth commenting on:\n",
+"From the [llama.cpp API reference docs](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.llamacpp.LlamaCpp.htm), a few are worth commenting on:\n",
 "\n",
 "`n_gpu_layers`: number of layers to be loaded into GPU memory\n",
 "\n",
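A sketch of how the parameters called out in this cell are passed; the model path is a placeholder and the values are illustrative:

```python
# Sketch only: point model_path at a GGUF file downloaded locally.
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="/path/to/model.gguf",  # placeholder path
    n_gpu_layers=1,  # number of layers to be loaded into GPU memory
    n_batch=512,     # number of tokens processed in parallel
    n_ctx=2048,      # context window size
    verbose=True,
)
```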
@@ -378,9 +378,9 @@
 "source": [
 "### GPT4All\n",
 "\n",
-"We can use model weights downloaded from [GPT4All](https://python.langchain.com/docs/integrations/llms/gpt4all) model explorer.\n",
+"We can use model weights downloaded from [GPT4All](/docs/integrations/llms/gpt4all) model explorer.\n",
 "\n",
-"Similar to what is shown above, we can run inference and use [the API reference](https://api.python.langchain.com/en/latest/llms/langchain.llms.gpt4all.GPT4All.html?highlight=gpt4all#langchain.llms.gpt4all.GPT4All) to set parameters of interest."
+"Similar to what is shown above, we can run inference and use [the API reference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.gpt4all.GPT4All.html) to set parameters of interest."
 ]
 },
 {
@@ -390,7 +390,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"pip install gpt4all\n"
+"%pip install gpt4all"
 ]
 },
 {
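After installation, a minimal inference sketch; the model path is a placeholder for a file downloaded via the GPT4All model explorer:

```python
# Sketch only: download a model through the GPT4All model explorer first.
from langchain_community.llms import GPT4All

llm = GPT4All(model="/path/to/gpt4all-model.gguf")  # placeholder path
llm.invoke("The first man on the moon was ...")
```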
@@ -582,9 +582,9 @@
 "source": [
 "## Use cases\n",
 "\n",
-"Given an `llm` created from one of the models above, you can use it for [many use cases](docs/use_cases).\n",
+"Given an `llm` created from one of the models above, you can use it for [many use cases](/docs/use_cases/).\n",
 "\n",
-"For example, here is a guide to [RAG](docs/use_cases/question_answering/local_retrieval_qa) with local LLMs.\n",
+"For example, here is a guide to [RAG](/docs/use_cases/question_answering/local_retrieval_qa) with local LLMs.\n",
 "\n",
 "In general, use cases for local LLMs can be driven by at least two factors:\n",
 "\n",
@@ -611,7 +611,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.10.12"
+"version": "3.9.1"
 }
 },
 "nbformat": 4,
@@ -399,24 +399,6 @@
 "\n",
 "\n",
 "\n",
-"**Improvements**\n",
-"\n",
-"The performance of the `SQLDatabaseChain` can be enhanced in several ways:\n",
-"\n",
-"- [Adding sample rows](#adding-sample-rows)\n",
-"- [Specifying custom table information](/docs/integrations/tools/sqlite#custom-table-info)\n",
-"- [Using Query Checker](/docs/integrations/tools/sqlite#use-query-checker) self-correct invalid SQL using parameter `use_query_checker=True`\n",
-"- [Customizing the LLM Prompt](/docs/integrations/tools/sqlite#customize-prompt) include specific instructions or relevant information, using parameter `prompt=CUSTOM_PROMPT`\n",
-"- [Get intermediate steps](/docs/integrations/tools/sqlite#return-intermediate-steps) access the SQL statement as well as the final result using parameter `return_intermediate_steps=True`\n",
-"- [Limit the number of rows](/docs/integrations/tools/sqlite#choosing-how-to-limit-the-number-of-rows-returned) a query will return using parameter `top_k=5`\n",
-"\n",
-"You might find [SQLDatabaseSequentialChain](/docs/integrations/tools/sqlite#sqldatabasesequentialchain)\n",
-"useful for cases in which the number of tables in the database is large.\n",
-"\n",
-"This `Sequential Chain` handles the process of:\n",
-"\n",
-"1. Determining which tables to use based on the user question\n",
-"2. Calling the normal SQL database chain using only relevant tables\n",
 "\n",
 "**Adding Sample Rows**\n",
 "\n",
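For reference, the flags named in the removed bullets are parameters of the chain's factory method; a hedged sketch, assuming `langchain_experimental` is installed, a SQLite database file is available, and `llm` is any LLM instance created earlier:

```python
# Sketch of the parameters mentioned above; Chinook.db is a placeholder database.
from langchain_community.utilities import SQLDatabase
from langchain_experimental.sql import SQLDatabaseChain

db = SQLDatabase.from_uri("sqlite:///Chinook.db")
chain = SQLDatabaseChain.from_llm(
    llm,  # assumed: an LLM instance defined earlier in the notebook
    db,
    use_query_checker=True,          # self-correct invalid SQL
    return_intermediate_steps=True,  # expose the generated SQL alongside the result
    top_k=5,                         # limit the number of rows a query returns
)
```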
@@ -1269,7 +1251,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.10.1"
+"version": "3.9.1"
 }
 },
 "nbformat": 4,
@@ -11,7 +11,7 @@
 "\n",
 "LangChain has [integrations](https://integrations.langchain.com/) with many open-source LLMs that can be run locally.\n",
 "\n",
-"See [here](docs/guides/local_llms) for setup instructions for these LLMs. \n",
+"See [here](/docs/guides/local_llms) for setup instructions for these LLMs. \n",
 "\n",
 "For example, here we show how to run `GPT4All` or `LLaMA2` locally (e.g., on your laptop) using local embeddings and a local LLM.\n",
 "\n",
@@ -141,11 +141,11 @@
 "\n",
 "Note: new versions of `llama-cpp-python` use GGUF model files (see [here](https://github.com/abetlen/llama-cpp-python/pull/633)).\n",
 "\n",
-"If you have an existing GGML model, see [here](docs/integrations/llms/llamacpp) for instructions for conversion for GGUF. \n",
+"If you have an existing GGML model, see [here](/docs/integrations/llms/llamacpp) for instructions for conversion for GGUF. \n",
 " \n",
 "And / or, you can download a GGUF converted model (e.g., [here](https://huggingface.co/TheBloke)).\n",
 "\n",
-"Finally, as noted in detail [here](docs/guides/local_llms) install `llama-cpp-python`"
+"Finally, as noted in detail [here](/docs/guides/local_llms) install `llama-cpp-python`"
 ]
 },
 {
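The install step referenced at the end of that cell, in notebook form; the Metal flag comes from the `llama-cpp-python` README and only applies on Apple Silicon:

```python
# Plain install; for Apple Silicon GPU support, reinstall with:
#   CMAKE_ARGS="-DLLAMA_METAL=on" pip install --force-reinstall llama-cpp-python
%pip install llama-cpp-python
```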
@@ -201,7 +201,7 @@
 "id": "fcf81052",
 "metadata": {},
 "source": [
-"Setting model parameters as noted in the [llama.cpp docs](https://python.langchain.com/docs/integrations/llms/llamacpp)."
+"Setting model parameters as noted in the [llama.cpp docs](/docs/integrations/llms/llamacpp)."
 ]
 },
 {
@@ -230,7 +230,7 @@
 "id": "3831b16a",
 "metadata": {},
 "source": [
-"Note that these indicate that [Metal was enabled properly](https://python.langchain.com/docs/integrations/llms/llamacpp):\n",
+"Note that these indicate that [Metal was enabled properly](/docs/integrations/llms/llamacpp):\n",
 "\n",
 "```\n",
 "ggml_metal_init: allocating\n",
@@ -304,7 +304,7 @@
 "\n",
 "Similarly, we can use `GPT4All`.\n",
 "\n",
-"[Download the GPT4All model binary](https://python.langchain.com/docs/integrations/llms/gpt4all).\n",
+"[Download the GPT4All model binary](/docs/integrations/llms/gpt4all).\n",
 "\n",
 "The Model Explorer on the [GPT4All](https://gpt4all.io/index.html) is a great way to choose and download a model.\n",
 "\n",
@@ -27,7 +27,7 @@
 "\n",
 "**Step 2: Add that parameter as a configurable field for the chain**\n",
 "\n",
-"This will let you easily call the chain and configure any relevant flags at runtime. See [this documentation](docs/expression_language/how_to/configure) for more information on configuration.\n",
+"This will let you easily call the chain and configure any relevant flags at runtime. See [this documentation](/docs/expression_language/how_to/configure) for more information on configuration.\n",
 "\n",
 "**Step 3: Call the chain with that configurable field**\n",
 "\n",
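A sketch of steps 2 and 3 using the configuration pattern the linked page documents; the model choice and field names are illustrative, and this particular model needs an OpenAI API key:

```python
# Step 2: declare a configurable field; Step 3: override it at call time.
from langchain_community.chat_models import ChatOpenAI
from langchain_core.runnables import ConfigurableField

model = ChatOpenAI(temperature=0).configurable_fields(
    temperature=ConfigurableField(
        id="llm_temperature",
        name="LLM Temperature",
        description="The temperature of the LLM",
    )
)
model.invoke("pick a random number")  # default temperature
model.invoke(  # overridden at runtime via the `configurable` config key
    "pick a random number",
    config={"configurable": {"llm_temperature": 0.9}},
)
```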
@@ -298,14 +298,6 @@
 " config={\"configurable\": {\"search_kwargs\": {\"namespace\": \"ankush\"}}},\n",
 ")"
 ]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"id": "e3aa0b9e",
-"metadata": {},
-"outputs": [],
-"source": []
 }
 ],
 "metadata": {
@@ -55,7 +55,7 @@
 "\n",
 "#### Retrieval and generation\n",
 "4. **Retrieve**: Given a user input, relevant splits are retrieved from storage using a [Retriever](/docs/modules/data_connection/retrievers/).\n",
-"5. **Generate**: A [ChatModel](/docs/modules/model_io/chat) / [LLM](/docs/modules/model_io/llms/) produces an answer using a prompt that includes the question and the retrieved data"
+"5. **Generate**: A [ChatModel](/docs/modules/model_io/chat/) / [LLM](/docs/modules/model_io/llms/) produces an answer using a prompt that includes the question and the retrieved data"
 ]
 },
 {
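Steps 4 and 5 in code, as a hedged sketch; it assumes a `vectorstore` built during the indexing steps, and the model and prompt wording are illustrative:

```python
# Retrieve relevant splits, stuff them into a prompt, and generate an answer.
from langchain_community.chat_models import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough

retriever = vectorstore.as_retriever()  # assumed: built in the indexing steps


def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)


prompt = ChatPromptTemplate.from_template(
    "Answer using only this context:\n{context}\n\nQuestion: {question}"
)
chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | ChatOpenAI()
    | StrOutputParser()
)
chain.invoke("What is Task Decomposition?")
```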
@@ -449,7 +449,7 @@
 "`TextSplitter`: Object that splits a list of `Document`s into smaller chunks. Subclass of `DocumentTransformer`s.\n",
 "- Explore `Context-aware splitters`, which keep the location (\"context\") of each split in the original `Document`:\n",
 " - [Markdown files](/docs/modules/data_connection/document_transformers/markdown_header_metadata)\n",
-" - [Code (py or js)](docs/integrations/document_loaders/source_code)\n",
+" - [Code (py or js)](/docs/integrations/document_loaders/source_code)\n",
 " - [Scientific papers](/docs/integrations/document_loaders/grobid)\n",
 "- [Interface](https://api.python.langchain.com/en/latest/text_splitter/langchain.text_splitter.TextSplitter.html): API reference for the base interface.\n",
 "\n",
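One of the context-aware options listed above, sketched for Python source; the chunk sizes are illustrative:

```python
# Split Python code along language-aware boundaries (defs, classes, etc.).
from langchain.text_splitter import Language, RecursiveCharacterTextSplitter

python_splitter = RecursiveCharacterTextSplitter.from_language(
    language=Language.PYTHON, chunk_size=500, chunk_overlap=0
)
docs = python_splitter.create_documents(
    ["def hello_world():\n    print('Hello, World!')\n\nhello_world()"]
)
```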
@@ -865,7 +865,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.10.1"
+"version": "3.9.1"
 }
 },
 "nbformat": 4,
@@ -1422,7 +1422,7 @@
 },
 {
 "source": "/docs/integrations/tools/sqlite",
-"destination": "/docs/use_cases/qa_structured/sqlite"
+"destination": "/docs/use_cases/qa_structured/sql"
 },
 {
 "source": "/en/latest/modules/callbacks/filecallbackhandler.html",