Minor grammatical fixes (#1325)

Fixed typos and links in a few places across the documents
Lakshya Agarwal
2023-03-02 10:48:09 +05:30
committed by GitHub
parent 59157b6891
commit cfed0497ac
9 changed files with 14 additions and 14 deletions

@@ -7,7 +7,7 @@
 "source": [
 "# LLM Serialization\n",
 "\n",
-"This notebook walks how to write and read an LLM Configuration to and from disk. This is useful if you want to save the configuration for a given LLM (eg the provider, the temperature, etc)."
+"This notebook walks through how to write and read an LLM Configuration to and from disk. This is useful if you want to save the configuration for a given LLM (e.g., the provider, the temperature, etc)."
 ]
 },
 {

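For reference, the round trip that notebook describes looks roughly like the sketch below. This is a minimal, hedged example assuming the `llm.save` method and `load_llm` helper that LangChain exposed at the time; the model name and temperature are illustrative, not from the diff.

```python
# A sketch of the save/load round trip, assuming LangChain's `save` method
# and `load_llm` helper; requires an OPENAI_API_KEY to actually call the model.
from langchain.llms import OpenAI
from langchain.llms.loading import load_llm

# Configure an LLM; the provider and its parameters are what get serialized.
llm = OpenAI(model_name="text-davinci-003", temperature=0.7)

# Write the configuration to disk; the format (JSON or YAML) is inferred
# from the file extension.
llm.save("llm.json")

# Later, reconstruct an equivalent LLM from the saved configuration.
llm = load_llm("llm.json")
```
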
@@ -31,13 +31,13 @@ The examples here are all "how-to" guides for how to integrate with various LLM
 `Forefront AI <./integrations/forefrontai_example.html>`_: Covers how to utilize the Forefront AI wrapper.
-`PromptLayer OpenAI <./integrations/promptlayer_openai.html>`_: Covers how to use `PromptLayer <https://promptlayer.com>`_ with Langchain.
+`PromptLayer OpenAI <./integrations/promptlayer_openai.html>`_: Covers how to use `PromptLayer <https://promptlayer.com>`_ with LangChain.
-`Anthropic <./integrations/anthropic_example.html>`_: Covers how to use Anthropic models with Langchain.
+`Anthropic <./integrations/anthropic_example.html>`_: Covers how to use Anthropic models with LangChain.
 `DeepInfra <./integrations/deepinfra_example.html>`_: Covers how to utilize the DeepInfra wrapper.
-`Self-Hosted Models (via Runhouse) <./integrations/self_hosted_examples.html>`_: Covers how to run models on existing or on-demand remote compute with Langchain.
+`Self-Hosted Models (via Runhouse) <./integrations/self_hosted_examples.html>`_: Covers how to run models on existing or on-demand remote compute with LangChain.
 .. toctree::

@@ -2,9 +2,9 @@
 ## LLMs
 Wrappers around Large Language Models (in particular, the "generate" ability of large language models) are at the core of LangChain functionality.
-The core method that these classes expose is a `generate` method, which takes in a list of strings and returns an LLMResult (which contains outputs for all input strings).
-Read more about LLMResult. This interface operates over a list of strings because often the lists of strings can be batched to the LLM provider,
-providing speed and efficiency gains.
+The core method that these classes expose is a `generate` method, which takes in a list of strings and returns an LLMResult (which contains outputs for all input strings). Read more about [LLMResult](#llmresult).
+This interface operates over a list of strings because often the lists of strings can be batched to the LLM provider, providing speed and efficiency gains.
 For convenience, this class also exposes a simpler, more user friendly interface (via `__call__`).
 The interface for this takes in a single string, and returns a single string.
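For reference, the two interfaces this passage describes look roughly like the sketch below, assuming the `OpenAI` wrapper as the concrete LLM; any LLM subclass exposes the same `generate` and `__call__` methods.

```python
# A sketch of the batched `generate` interface vs. the single-string
# `__call__` convenience interface, assuming the OpenAI wrapper.
from langchain.llms import OpenAI

llm = OpenAI(temperature=0.9)

# `generate` takes a list of strings; the prompts can be batched into a
# single provider request, and the LLMResult holds outputs for every input.
result = llm.generate(["Tell me a joke", "Tell me a poem"])
print(len(result.generations))        # one list of Generations per prompt
print(result.generations[0][0].text)  # first completion for the first prompt

# `__call__`: a single string in, a single string out.
print(llm("Tell me a joke"))
```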