docs: Update Nvidia documentation (#21240)
Updating Nvidia docs ahead of the 5/15 competition. Thanks!
@@ -12,7 +12,7 @@
 "The `ChatNVIDIA` class is a LangChain chat model that connects to [NVIDIA AI Foundation Endpoints](https://www.nvidia.com/en-us/ai-data-science/foundation-models/).\n",
 "\n",
 "\n",
-"> [NVIDIA AI Foundation Endpoints](https://www.nvidia.com/en-us/ai-data-science/foundation-models/) give users easy access to NVIDIA hosted API endpoints for NVIDIA AI Foundation Models like Mixtral 8x7B, Llama 2, Stable Diffusion, etc. These models, hosted on the [NVIDIA NGC catalog](https://catalog.ngc.nvidia.com/ai-foundation-models), are optimized, tested, and hosted on the NVIDIA AI platform, making them fast and easy to evaluate, further customize, and seamlessly run at peak performance on any accelerated stack.\n",
+"> [NVIDIA AI Foundation Endpoints](https://www.nvidia.com/en-us/ai-data-science/foundation-models/) give users easy access to NVIDIA hosted API endpoints for NVIDIA AI Foundation Models like Mixtral 8x7B, Llama 2, Stable Diffusion, etc. These models, hosted on the [NVIDIA API catalog](https://build.nvidia.com/), are optimized, tested, and hosted on the NVIDIA AI platform, making them fast and easy to evaluate, further customize, and seamlessly run at peak performance on any accelerated stack.\n",
 "> \n",
 "> With [NVIDIA AI Foundation Endpoints](https://www.nvidia.com/en-us/ai-data-science/foundation-models/), you can get quick results from a fully accelerated stack running on [NVIDIA DGX Cloud](https://www.nvidia.com/en-us/data-center/dgx-cloud/). Once customized, these models can be deployed anywhere with enterprise-grade security, stability, and support using [NVIDIA AI Enterprise](https://www.nvidia.com/en-us/data-center/products/ai-enterprise/).\n",
 "> \n",
@@ -58,13 +58,13 @@
 "\n",
 "**To get started:**\n",
 "\n",
-"1. Create a free account with the [NVIDIA NGC](https://catalog.ngc.nvidia.com/) service, which hosts AI solution catalogs, containers, models, etc.\n",
+"1. Create a free account with [NVIDIA](https://build.nvidia.com/), which hosts NVIDIA AI Foundation models\n",
 "\n",
-"2. Navigate to `Catalog > AI Foundation Models > (Model with API endpoint)`.\n",
+"2. Click on your model of choice\n",
 "\n",
-"3. Select the `API` option and click `Generate Key`.\n",
+"3. Under `Input` select the `Python` tab, and click `Get API Key`. Then click `Generate Key`.\n",
 "\n",
-"4. Save the generated key as `NVIDIA_API_KEY`. From there, you should have access to the endpoints."
+"4. Copy and save the generated key as `NVIDIA_API_KEY`. From there, you should have access to the endpoints."
 ]
 },
 {
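Once the key from step 4 is saved as `NVIDIA_API_KEY`, the connector can pick it up from the environment. A minimal sketch, assuming the class is importable from the `langchain_nvidia_ai_endpoints` package (not shown in this diff) and using an illustrative model id:

```python
import getpass
import os

# Expose the key generated in step 4 so the client can read it from the environment.
if not os.environ.get("NVIDIA_API_KEY"):
    os.environ["NVIDIA_API_KEY"] = getpass.getpass("NVIDIA API key: ")

# Assumed import path; the model id below is a placeholder, not taken from this diff.
from langchain_nvidia_ai_endpoints import ChatNVIDIA

llm = ChatNVIDIA(model="mistralai/mixtral-8x7b-instruct-v0.1")
print(llm.invoke("Say hello in one short sentence.").content)
```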
@@ -311,7 +311,7 @@
 "\n",
 "Some model types support unique prompting techniques and chat messages. We will review a few important ones below.\n",
 "\n",
-"**To find out more about a specific model, please navigate to the API section of an AI Foundation model [as linked here](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/ai-foundation/models/codellama-13b/api).**"
+"**To find out more about a specific model, please navigate to the API section of an AI Foundation model [as linked here](https://build.nvidia.com/).**"
 ]
 },
 {
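For the "unique prompting techniques and chat messages" mentioned in this hunk, a minimal chat-style sketch, assuming the standard `langchain_core` message classes and the same (assumed) `langchain_nvidia_ai_endpoints` import; the model id is illustrative:

```python
# Chat-style prompting sketch: system + human messages passed to invoke().
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_nvidia_ai_endpoints import ChatNVIDIA  # assumed package path

chat = ChatNVIDIA(model="meta/codellama-70b")  # illustrative model id
messages = [
    SystemMessage(content="You are a concise coding assistant."),
    HumanMessage(content="Reverse a string in Python in one line."),
]
print(chat.invoke(messages).content)
```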