From 77d43ef31cb1276223c8ae200cfac41216eb8f51 Mon Sep 17 00:00:00 2001
From: imartinez
Date: Fri, 8 Mar 2024 00:55:51 +0100
Subject: [PATCH] Update installation doc

---
 fern/docs/pages/installation/installation.mdx | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/fern/docs/pages/installation/installation.mdx b/fern/docs/pages/installation/installation.mdx
index 26fc201a..11bbce91 100644
--- a/fern/docs/pages/installation/installation.mdx
+++ b/fern/docs/pages/installation/installation.mdx
@@ -137,7 +137,11 @@ Follow these steps to set up a local TensorRT-powered PrivateGPT:
 
 - Nvidia Cuda 12.2 or higher is currently required to run TensorRT-LLM.
 
-- Install tensorrt_llm via pip with pip install --no-cache-dir --extra-index-url https://pypi.nvidia.com tensorrt-llm as explained [here](https://pypi.org/project/tensorrt-llm/)
+- Install tensorrt_llm via pip as explained [here](https://pypi.org/project/tensorrt-llm/)
+
+```bash
+pip install --no-cache-dir --extra-index-url https://pypi.nvidia.com tensorrt-llm
+```
+
 - For this example we will use Llama2. The Llama2 model files need to be created via scripts following the instructions [here](https://github.com/NVIDIA/trt-llm-rag-windows/blob/release/1.0/README.md#building-trt-engine).
 The following files will be created from following the steps in the link: