
Merge pull request from neuro-inc/feature/migrate-docs

Migrate docs to apolo and add pre-requisites
Ovchinnikov Konstantyn 2024-10-15 12:52:26 +01:00 committed by GitHub
commit 69f201a73e
2 changed files with 15 additions and 11 deletions


@@ -119,5 +119,5 @@ jobs:
       POSTGRES_DB: postgres
       PGDATA: ${{ volumes.pgdata.mount }}/pgdata
     volumes:
-      # - ${{ volumes.pgdata.ref_rw }}
-      - ${{ volumes.pgdata_onprem.ref_rw }}
+      - ${{ volumes.pgdata.ref_rw }}
+      # - ${{ volumes.pgdata_onprem.ref_rw }}
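This hunk flips the default back to the platform-managed `pgdata` disk and comments out the on-prem volume. For orientation, a minimal sketch of how the two volume declarations might look elsewhere in the same apolo-flow/neuro-flow live config — the `remote` targets and mount path here are assumptions, not the repo's actual values:

```yaml
volumes:
  pgdata:
    remote: disk:pgdata         # assumed: block storage created with `apolo disk create --name pgdata`
    mount: /var/lib/postgresql  # assumed mount point
  pgdata_onprem:
    remote: storage:pgdata      # assumed: storage-backed alternative for on-prem clusters
    mount: /var/lib/postgresql  # same path, so the PGDATA env var points at the mounted data either way
```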


@@ -1,16 +1,20 @@
 # 🔒 PrivateGPT 📑
+## Pre-requisites
+* Apolo CLI. [Instructions](https://docs.apolo.us/index/cli/installing)
+* HuggingFace access to the model you want to deploy. [For example, LLAMA](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct)
 ## Run on apolo / neu.ro platform
 Note: this setup is mostly for POC purposes. For a production-ready setup, you'll need to replace some of its components with production-ready Apps.
-1. `git clone` this repo && `cd` into root of it. We assume you've installed CLIs for the platform and authorized to the cluster.
-1. Build image for web app with `neuro-flow build privategpt`
-2. Create block storage for PGVector with `neuro disk create --name pgdata 10G --timeout-unused 100d`
-3. Create secret with HuggingFace token to pull models `neuro secret add HF_TOKEN <token>` (see https://huggingface.co/settings/tokens)
-4. `neuro-flow run pgvector` -- start vector store
-5. `neuro-flow run ollama` -- start embeddings server
-6. `neuro-flow run vllm` -- start LLM inference server. Note: if you want to change LLM hosted there, change it in bash command and in `env.VLLM_MODEL` of `pgpt` job.
-7. `neuro-flow run pgpt` -- start PrivateGPT web server.
+1. `$ git clone` this repo && `$ cd` into its root.
+2. Build the image for the web app with `$ apolo-flow build privategpt`
+3. Create block storage for PGVector with `$ apolo disk create --name pgdata 10G --timeout-unused 100d`
+4. Create a secret with your HuggingFace token to pull models: `$ apolo secret add HF_TOKEN <token>` (see https://huggingface.co/settings/tokens)
+5. `$ apolo-flow run pgvector` -- start the vector store
+6. `$ apolo-flow run ollama` -- start the embeddings server
+7. `$ apolo-flow run vllm` -- start the LLM inference server. Note: to change the LLM hosted there, update both the job's bash command and `env.VLLM_MODEL` of the `pgpt` job (see the sketch after this diff).
+8. `$ apolo-flow run pgpt` -- start the PrivateGPT web server.
 ### Running PrivateGPT as stand-alone job
 <details>
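Regarding step 7 in the new list: the model name typically has to stay in sync in two places. A hedged sketch of what the relevant jobs could look like in the live flow file — the job layout, image names, and vLLM invocation are assumptions for illustration, not the repo's actual config:

```yaml
jobs:
  vllm:
    image: vllm/vllm-openai:latest   # assumed image
    env:
      HF_TOKEN: secret:HF_TOKEN      # token created in step 4, used to pull gated models
    bash: |
      # the model served here must match env.VLLM_MODEL of the pgpt job below
      python -m vllm.entrypoints.openai.api_server \
        --model meta-llama/Llama-3.1-8B-Instruct
  pgpt:
    image: ghcr.io/neuro-inc/private-gpt
    env:
      VLLM_MODEL: meta-llama/Llama-3.1-8B-Instruct  # keep in sync with the vllm job above
```

Changing the model therefore means editing both the `--model` flag and `VLLM_MODEL`, then restarting both jobs.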
@@ -42,7 +46,7 @@ Embeddings config section:
 - `OLLAMA_EMBEDDING_MODEL` (str, optional) -- embeddings model to use. Must already be loaded into the Ollama instance
 Having the above values, run the job with
-`neuro run --volume storage:.apps/pgpt/data:/home/worker/app/local_data --http-port=8080 ghcr.io/neuro-inc/private-gpt`
+`$ apolo run --volume storage:.apps/pgpt/data:/home/worker/app/local_data --http-port=8080 ghcr.io/neuro-inc/private-gpt`
 Other platform-related configurations like `--life-span`, etc. also work here.
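The config values above are plain environment variables, so they can presumably be passed on the same `apolo run` line via the CLI's `--env` flag, or pinned as a job in the flow file instead. A hypothetical job definition carrying the embeddings setting — the job name, model, and mount details below are illustrative assumptions, not part of the repo:

```yaml
jobs:
  pgpt-standalone:                  # hypothetical job name
    image: ghcr.io/neuro-inc/private-gpt
    http_port: 8080
    volumes:
      - storage:.apps/pgpt/data:/home/worker/app/local_data:rw
    env:
      OLLAMA_EMBEDDING_MODEL: nomic-embed-text  # assumed model; must already be pulled into Ollama
```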