182 Commits

Author SHA1 Message Date
github-actions[bot]
ca2b8da69c chore(main): release 0.6.1 (#2041)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2024-08-05 17:17:34 +02:00
Javier Martinez
f09f6dd255 fix: add built image from DockerHub (#2042)
* chore: update docker-compose with profiles

* docs: add quick start doc

* chore: generate docker release when new version is released

* chore: add dockerhub image in docker-compose

* docs: update quickstart with local/remote images

* chore: update docker tag

* chore: refactor dockerfile names

* chore: update docker-compose names

* docs: update llamacpp naming

* fix: naming

* docs: fix llamacpp command
2024-08-05 17:15:38 +02:00
Liam Dowd
1c665f7900 fix: Adding azopenai to model list (#2035)
Fixing the error I encountered while using the azopenai mode
2024-08-05 16:30:10 +02:00
Javier Martinez
1d4c14d7a3 fix(deploy): generate docker release when new version is released (#2038) 2024-08-05 16:28:19 +02:00
Javier Martinez
dae0727a1b fix(deploy): improve Docker-Compose and quickstart on Docker (#2037)
* chore: update docker-compose with profiles

* docs: add quick start doc
2024-08-05 16:28:19 +02:00
github-actions[bot]
6674b46fea chore(main): release 0.6.0 (#1834)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2024-08-02 11:28:22 +02:00
Javier Martinez
e44a7f5773 chore: bump version (#2033) 2024-08-02 11:26:03 +02:00
Javier Martinez
cf61bf780f feat(llm): add progress bar when ollama is pulling models (#2031)
* fix: add ollama progress bar when pulling models

* feat: add ollama queue

* fix: mypy
2024-08-01 19:14:26 +02:00
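For illustration only (not code from this repository): a minimal sketch of how a streaming model pull with a progress readout can look using the `ollama` Python client, which the commits above rely on. The host, the model name, and the dict-shaped progress updates are assumptions about the client of that era, not the project's actual implementation.

```python
from ollama import Client  # assumes the `ollama` Python client package is installed

client = Client(host="http://localhost:11434")  # assumed default Ollama address

# With stream=True, pull() yields progress updates while the model downloads.
# In clients of this era each update is a dict with keys such as "status",
# "completed" and "total", which is what a progress bar can consume.
for update in client.pull("llama3.1", stream=True):
    status = update.get("status", "")
    done, total = update.get("completed"), update.get("total")
    if done is not None and total:
        print(f"{status}: {100 * done // total}%")
    else:
        print(status)
```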
Javier Martinez
50b3027a24 docs: update docs and capture (#2029)
* docs: update Readme

* style: refactor image

* docs: change important to tip
2024-08-01 10:01:22 +02:00
Javier Martinez
54659588b5 fix: nomic embeddings (#2030)
* fix: allow to configure trust_remote_code

based on: https://github.com/zylon-ai/private-gpt/issues/1893#issuecomment-2118629391

* fix: nomic hf embeddings
2024-08-01 09:43:30 +02:00
Javier Martinez
8119842ae6 feat(recipe): add our first recipe Summarize (#2028)
* feat: add summary recipe

* test: add summary tests

* docs: move all recipes docs

* docs: add recipes and summarize doc

* docs: update openapi reference

* refactor: split method in two method (summary)

* feat: add initial summarize ui

* feat: add mode explanation

* fix: mypy

* feat: allow to configure async property in summarize

* refactor: move modes to enum and update mode explanations

* docs: fix url

* docs: remove list-llm pages

* docs: remove double header

* fix: summary description
2024-07-31 16:53:27 +02:00
Javier Martinez
40638a18a5 fix: unify embedding models (#2027)
* feat: unify embedding model to nomic

* docs: add embedding dimensions mismatch

* docs: fix fern
2024-07-31 14:35:46 +02:00
Javier Martinez
9027d695c1 feat: make llama3.1 as default (#2022)
* feat: change ollama default model to llama3.1

* chore: bump versions

* feat: Change default model in local mode to llama3.1

* chore: make sure last poetry version is used

* fix: mypy

* fix: do not add BOS (with last llamacpp-python version)
2024-07-31 14:35:36 +02:00
Javier Martinez
e54a8fe043 fix: prevent to ingest local files (by default) (#2010)
* feat: prevent to local ingestion (by default) and add white-list

* docs: add local ingestion warning

* docs: add missing comment

* fix: update exception error

* fix: black
2024-07-31 14:33:46 +02:00
Javier Martinez
1020cd5328 fix: light mode (#2025) 2024-07-31 12:59:31 +02:00
Quentin McGaw
65c5a1708b chore(docker): dockerfiles improvements and fixes (#1792)
* `UID` and `GID` build arguments for `worker` user

* `POETRY_EXTRAS` build argument with default values

* Copy `Makefile` for `make ingest` command

* Do NOT copy markdown files
I doubt anyone reads a markdown file within a Docker image

* Fix PYTHONPATH value

* Set home directory to `/home/worker` when creating user

* Combine `ENV` instructions together

* Define environment variables with their defaults
- For documentation purposes
- Reflect defaults set in settings-docker.yml

* `PGPT_EMBEDDING_MODE` to define embedding mode

* Remove ineffective `python3 -m pipx ensurepath`

* Use `&&` instead of `;` to chain commands to detect failure better

* Add `--no-root` flag to poetry install commands

* Set PGPT_PROFILES to docker

* chore: remove envs

* chore: update to use ollama in docker-compose

* chore: don't copy makefile

* chore: don't copy fern

* fix: tiktoken cache

* fix: docker compose port

* fix: ffmpy dependency (#2020)

* fix: ffmpy dependency

* fix: block ffmpy to commit sha

* feat(llm): autopull ollama models (#2019)

* chore: update ollama (llm)

* feat: allow to autopull ollama models

* fix: mypy

* chore: install always ollama client

* refactor: check connection and pull ollama method to utils

* docs: update ollama config with autopulling info

...

* chore: autopull ollama models

* chore: add GID/UID comment

...

---------

Co-authored-by: Javier Martinez <javiermartinezalvarez98@gmail.com>
2024-07-30 17:59:38 +02:00
Robert Hirsch
d080969407 added llama3 prompt (#1962)
* added llama3 prompt

* more fixes to pass tests; changed type VectorStore -> BasePydanticVectorStore, see https://github.com/run-llama/llama_index/blob/main/CHANGELOG.md#2024-05-14

* fix: new llama3 prompt

---------

Co-authored-by: Javier Martinez <javiermartinezalvarez98@gmail.com>
2024-07-29 17:28:00 +02:00
Javier Martinez
d4375d078f fix(ui): gradio bug fixes (#2021)
* fix: when two user messages were sent

* fix: add source divider

* fix: add favicon

* fix: add zylon link

* refactor: update label
2024-07-29 16:48:16 +02:00
Javier Martinez
20bad17c98 feat(llm): autopull ollama models (#2019)
* chore: update ollama (llm)

* feat: allow to autopull ollama models

* fix: mypy

* chore: install always ollama client

* refactor: check connection and pull ollama method to utils

* docs: update ollama config with autopulling info
2024-07-29 13:25:42 +02:00
Javier Martinez
dabf556dae fix: ffmpy dependency (#2020)
* fix: ffmpy dependency

* fix: block ffmpy to commit sha
2024-07-29 11:56:57 +02:00
Iván Martínez
05a986231c Add proper param to demo urls (#2007) 2024-07-22 14:44:03 +02:00
Javier Martinez
b62669784b docs: update welcome page (#2004) 2024-07-18 14:42:39 +02:00
Javier Martinez
2c78bb2958 docs: add PR and issue templates (#2002)
* chore: add pull request template

* chore: add issue templates

* chore: require more information in bugs
2024-07-18 12:56:10 +02:00
Iván Martínez
90d211c5cd Update README.md (#2003)
* Update README.md

Remove the outdated contact form and point to Zylon website for those looking for a ready-to-use enterprise solution built on top of PrivateGPT

* Update README.md

Update text to address the comments

* Update README.md

Improve text
2024-07-18 12:11:24 +02:00
Jackson
43cc31f740 feat(vectordb): Milvus vector db Integration (#1996)
* integrate Milvus into Private GPT

* adjust milvus settings

* update doc info and reformat

* adjust milvus initialization

* adjust import error

* minor update

* adjust format

* adjust the db storing path

* update doc
2024-07-18 10:55:45 +02:00
Javier Martinez
4523a30c8f feat(docs): update documentation and fix preview-docs (#2000)
* docs: add missing configurations

* docs: change HF embeddings by ollama

* docs: add disclaimer about Gradio UI

* docs: improve readability in concepts

* docs: reorder `Fully Local Setups`

* docs: improve setup instructions

* docs: prevent have duplicate documentation and use table to show different options

* docs: rename privateGpt to PrivateGPT

* docs: update ui image

* docs: remove useless header

* docs: convert to alerts ingestion disclaimers

* docs: add UI alternatives

* docs: reference UI alternatives in disclaimers

* docs: fix table

* chore: update doc preview version

* chore: add permissions

* chore: remove useless line

* docs: fixes

...
2024-07-18 10:06:51 +02:00
Javier Martinez
01b7ccd064 fix(config): make tokenizer optional and include a troubleshooting doc (#1998)
* docs: add troubleshooting

* fix: pass HF token to setup script and prevent to download tokenizer when it is empty

* fix: improve log and disable specific tokenizer by default

* chore: change HF_TOKEN environment to be aligned with default config

* fix: mypy
2024-07-17 10:06:27 +02:00
Javier Martinez
15f73dbc48 docs: update repo links, citations (#1990)
* docs: update project links

...

* docs: update citation
2024-07-09 10:03:57 +02:00
fern
187bc9320e (feat): add github button (#1989)
Co-authored-by: chdeskur <chdeskur@gmail.com>
2024-07-09 08:48:47 +02:00
Marco Braga
dde02245bc fix(docs): Fix concepts.mdx referencing to installation page (#1779)
* Fix/update concepts.mdx referencing to installation page

The link for `/installation` is broken in the "Main Concepts" page.

The correct path would be `./installation` or  maybe `/installation/getting-started/installation`

* fix: docs

---------

Co-authored-by: Javier Martinez <javiermartinezalvarez98@gmail.com>
2024-07-08 16:19:50 +02:00
Mart
067a5f144c feat(docs): Fix setup docu (#1926)
* Update settings.mdx

* docs: add cmd

---------

Co-authored-by: Javier Martinez <javiermartinezalvarez98@gmail.com>
2024-07-08 16:19:16 +02:00
Proger666
2612928839 feat(vectorstore): Add clickhouse support as vectore store (#1883)
* Added ClickHouse vector store support

* port fix

* updated lock file

* fix: mypy

* fix: mypy

---------

Co-authored-by: Valery Denisov <valerydenisov@double.cloud>
Co-authored-by: Javier Martinez <javiermartinezalvarez98@gmail.com>
2024-07-08 16:18:22 +02:00
uw4
fc13368bc7 feat(llm): Support for Google Gemini LLMs and Embeddings (#1965)
* Support for Google Gemini LLMs and Embeddings

Initial support for Gemini, enables usage of Google LLMs and embedding models (see settings-gemini.yaml)

Install via
poetry install --extras "llms-gemini embeddings-gemini"

Notes:
* had to bump llama-index-core to later version that supports Gemini
* poetry --no-update did not work: Gemini/llama_index seem to require more (transient) updates to make it work...

* fix: crash when gemini is not selected

* docs: add gemini llm

---------

Co-authored-by: Javier Martinez <javiermartinezalvarez98@gmail.com>
2024-07-08 11:47:36 +02:00
Shengsheng Huang
19a7c065ef feat(docs): update doc for ipex-llm (#1968) 2024-07-08 09:42:44 +02:00
Javier Martinez
b687dc8524 feat: bump dependencies (#1987) 2024-07-05 16:31:13 +02:00
Pablo Orgaz
c7212ac7cc fix(LLM): mistral ignoring assistant messages (#1954)
* fix: mistral ignoring assistant messages

* fix: typing

* fix: fix tests
2024-05-30 15:41:16 +02:00
Yevhenii Semendiak
3b3e96ad6c Allow parameterizing OpenAI embeddings component (api_base, key, model) (#1920)
* Allow parameterizing OpenAI embeddings component (api_base, key, model)

* Update settings

* Update description
2024-05-17 09:52:50 +02:00
jcbonnet-fwd
45df99feb7 Add timeout parameter for better support of openailike LLM tools on local computer (like LM Studio). (#1858)
feat(llm): Improve settings of the OpenAILike LLM
2024-05-10 16:44:08 +02:00
Fran García
966af4771d fix(settings): enable cors by default so it will work when using ts sdk (spa) (#1925) 2024-05-10 14:13:46 +02:00
Fran García
d13029a046 feat(docs): add privategpt-ts sdk (#1924) 2024-05-10 14:13:15 +02:00
Patrick Peng
9d0d614706 fix: Replacing unsafe eval() with json.loads() (#1890) 2024-04-30 09:58:19 +02:00
icsy7867
e21bf20c10 feat: prompt_style applied to all LLMs + extra LLM params. (#1835)
* Moved prompt_style to the main LLM settings, since all LLMs from llama_index can utilize it. Also included temperature, context window size, max_tokens, and max_new_tokens in the openailike settings to help keep them consistent with the other implementations.

* Removed prompt_style from llamacpp entirely

* Fixed settings-local.yaml to include prompt_style in the LLM settings instead of llamacpp.
2024-04-30 09:53:10 +02:00
Daniel Gallego Vico
c1802e7cf0 fix(docs): Update installation.mdx (#1866)
Update repo url
2024-04-19 17:10:58 +02:00
Marco Repetto
2a432bf9c5 fix: make embedding_api_base match api_base when on docker (#1859) 2024-04-19 15:42:19 +02:00
dividebysandwich
947e737f30 fix: "no such group" error in Dockerfile, added docx2txt and cryptography deps (#1841)
* Fixed "no such group" error in Dockerfile, added docx2txt to poetry so docx parsing works out of the box for docker containers

* added cryptography dependency for pdf parsing
2024-04-19 15:40:00 +02:00
imartinez
49ef729abc Allow passing HF access token to download tokenizer. Fallback to default tokenizer. 2024-04-19 15:38:25 +02:00
Pablo Orgaz
347be643f7 fix(llm): special tokens and leading space (#1831) 2024-04-04 14:37:29 +02:00
imartinez
08c4ab175e Fix version in poetry 2024-04-03 10:59:35 +02:00
imartinez
f469b4619d Add required Ollama setting 2024-04-02 18:27:57 +02:00
github-actions[bot]
94ef38cbba chore(main): release 0.5.0 (#1708)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2024-04-02 17:45:15 +02:00
Иван
8a836e4651 feat(docs): Add guide Llama-CPP Linux AMD GPU support (#1782) 2024-04-02 16:55:05 +02:00
Ingrid Stevens
f0b174c097 feat(ui): Add Model Information to ChatInterface label 2024-04-02 16:52:27 +02:00
igeni
bac818add5 feat(code): improve concat of strings in ui (#1785) 2024-04-02 16:42:40 +02:00
Brett England
ea153fb92f feat(scripts): Wipe qdrant and obtain db Stats command (#1783) 2024-04-02 16:41:42 +02:00
Robin Boone
b3b0140e24 feat(llm): Ollama LLM-Embeddings decouple + longer keep_alive settings (#1800) 2024-04-02 16:23:10 +02:00
machatschek
83adc12a8e feat(RAG): Introduce SentenceTransformer Reranker (#1810) 2024-04-02 10:29:51 +02:00
Marco Repetto
f83abff8bc feat(docker): set default Docker to use Ollama (#1812) 2024-04-01 13:08:48 +02:00
icsy7867
087cb0b7b7 feat(rag): expose similarity_top_k and similarity_score to settings (#1771)
* Added RAG settings to settings.py, vector_store and chat_service to add similarity_top_k and similarity_score

* Updated settings in vector and chat service per Ivans request

* Updated code for mypy
2024-03-20 22:25:26 +01:00
Marco Repetto
774e256052 fix: Fixed docker-compose (#1758)
* Fixed docker-compose

* Update docker-compose.yaml
2024-03-20 21:36:45 +01:00
Iván Martínez
6f6c785dac feat(llm): Ollama timeout setting (#1773)
* added request_timeout to ollama, default set to 30.0 in settings.yaml and settings-ollama.yaml

* Update settings-ollama.yaml

* Update settings.yaml

* updated settings.py and tidied up settings-ollama-yaml

* feat(UI): Faster startup and document listing (#1763)

* fix(ingest): update script label (#1770)

huggingface -> Hugging Face

* Fix lint errors

---------

Co-authored-by: Stephen Gresham <steve@gresham.id.au>
Co-authored-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
2024-03-20 21:33:46 +01:00
Brett England
c2d694852b feat: wipe per storage type (#1772) 2024-03-20 21:31:44 +01:00
Ikko Eltociear Ashimine
7d2de5c96f fix(ingest): update script label (#1770)
huggingface -> Hugging Face
2024-03-20 20:23:08 +01:00
Iván Martínez
348df781b5 feat(UI): Faster startup and document listing (#1763) 2024-03-20 19:11:44 +01:00
Iván Martínez
572518143a feat(docs): Feature/upgrade docs (#1741)
* Upgrade fern version

* Add info about SDKs
2024-03-19 21:26:53 +01:00
Brett England
134fc54d7d feat(ingest): Created a faster ingestion mode - pipeline (#1750)
* Unify pgvector and postgres connection settings

* Remove local changes

* Update file pgvector->postgres

* postgresql should be postgres

* Adding pipeline ingestion mode

* disable hugging face parallelism.  Continue on file to doc transform failure

* Semaphore to limit docq async workers. ETA reporting
2024-03-19 21:24:46 +01:00
Otto L
1efac6a3fe feat(llm - embed): Add support for Azure OpenAI (#1698)
* Add support for Azure OpenAI

* fix: wrong default api_version

Should be dashes instead of underscores.
see: https://learn.microsoft.com/en-us/azure/ai-services/openai/reference

* fix: code styling

applied "make check" changes

* refactor: extend documentation

* mention azopenai as available option and extras
* add recommended section
* include settings-azopenai.yaml configuration file

* fix: documentation
2024-03-15 16:49:50 +01:00
Brett England
258d02d87c fix(docs): Minor documentation amendment (#1739)
* Unify pgvector and postgres connection settings

* Remove local changes

* Update file pgvector->postgres

* postgresql should be postgres
2024-03-15 16:36:32 +01:00
Brett England
63de7e4930 feat: unify settings for vector and nodestore connections to PostgreSQL (#1730)
* Unify pgvector and postgres connection settings

* Remove local changes

* Update file pgvector->postgres
2024-03-15 09:55:17 +01:00
Brett England
68b3a34b03 feat(nodestore): add Postgres for the doc and index store (#1706)
* Adding Postgres for the doc and index store

* Adding documentation.  Rename postgres database local->simple.  Postgres storage dependencies

* Update documentation for postgres storage

* Renaming feature to nodestore

* update docstore -> nodestore in doc

* missed some docstore changes in doc

* Updated poetry.lock

* Formatting updates to pass ruff/black checks

* Correction to unreachable code!

* Format adjustment to pass black test

* Adjust extra inclusion name for vector pg

* extra dep change for pg vector

* storage-postgres -> storage-nodestore-postgres

* Hash change on poetry lock
2024-03-14 17:12:33 +01:00
Iván Martínez
d17c34e81a fix(settings): set default tokenizer to avoid running make setup fail (#1709) 2024-03-13 09:53:40 +01:00
Andrew Jiang
84ad16af80 feat(docs): upgrade fern (#1596) 2024-03-11 23:02:56 +01:00
Arun Yadav
821bca32e9 feat(local): tiktoken cache within repo for offline (#1467) 2024-03-11 22:55:13 +01:00
icsy7867
02dc83e8e9 feat(llm): adds serveral settings for llamacpp and ollama (#1703) 2024-03-11 22:51:05 +01:00
Hoffelhas
410bf7a71f feat(ui): maintain score order when curating sources (#1643)
* Update ui.py

Changed 'curated_sources' from a list, in order to maintain score order when returning the curated sources.

* Maintain score order after curating sources
2024-03-11 22:27:30 +01:00
icsy7867
290b9fb084 feat(ui): add sources check to not repeat identical sources (#1705) 2024-03-11 22:24:18 +01:00
github-actions[bot]
1b03b369c0 chore(main): release 0.4.0 (#1628)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2024-03-06 17:53:35 +01:00
Iván Martínez
45f05711eb feat: Upgrade to LlamaIndex to 0.10 (#1663)
* Extract optional dependencies

* Separate local mode into llms-llama-cpp and embeddings-huggingface for clarity

* Support Ollama embeddings

* Upgrade to llamaindex 0.10.14. Remove legacy use of ServiceContext in ContextChatEngine

* Fix vector retriever filters
2024-03-06 17:51:30 +01:00
Daniel Gallego Vico
12f3a39e8a Update x handle to zylon private gpt (#1644) 2024-02-23 15:51:35 +01:00
TQ
cd40e3982b feat(Vector): support pgvector (#1624) 2024-02-20 15:29:26 +01:00
github-actions[bot]
066ea5bf28 chore(main): release 0.3.0 (#1413)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2024-02-16 17:42:39 +01:00
Iván Martínez
aa13afde07 feat(UI): Select file to Query or Delete + Delete ALL (#1612)
---------

Co-authored-by: Robin Boone <rboone@sofics.com>
2024-02-16 17:36:09 +01:00
icsy7867
24fb80ca38 fix(UI): Updated ui.py. Frees up the CPU to not be bottlenecked.
Updated ui.py to include a small sleep timer while building the stream deltas. This recursive function fires so quickly that it eats up too much of the CPU. The small sleep frees up the CPU so it is not bottlenecked. The value can go lower/shorter, but 0.02 or 0.025 seems to work well. (#1589)

Co-authored-by: root <root@wesgitlabdemo.icl.gtri.org>
2024-02-16 12:52:14 +01:00
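As a rough, hypothetical sketch of the idea in the commit above (not the actual ui.py code): a delta-building loop that sleeps briefly on each iteration so it does not saturate a CPU core. The function name and generator shape are assumptions.

```python
import time

def build_stream(deltas):
    # Accumulate streamed text deltas into a running response, yielding the
    # partial response each time; the short sleep keeps the tight loop from
    # hogging the CPU (0.02-0.025s per the commit message).
    response = ""
    for delta in deltas:
        response += delta
        time.sleep(0.025)
        yield response
```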
Ygal Blum
6bbec79583 feat(llm): Add support for Ollama LLM (#1526) 2024-02-09 15:50:50 +01:00
Nick Smirnov
b178b51451 feat(bulk-ingest): Add --ignored Flag to Exclude Specific Files and Directories During Ingestion (#1432) 2024-02-07 19:59:32 +01:00
Iván Martínez
24fae660e6 feat: Add stream information to generate SDKs (#1569) 2024-02-02 16:14:22 +01:00
Pablo Orgaz
3e67e21d38 Add embedding mode config (#1541) 2024-01-25 10:55:32 +01:00
Naveen Kannan
869233f0e4 fix: Adding an LLM param to fix broken generator from llamacpp (#1519) 2024-01-17 18:10:45 +01:00
CognitiveTech
e326126d0d feat: add mistral + chatml prompts (#1426) 2024-01-16 22:51:14 +01:00
Robert Gay
6191bcdbd6 fix: minor bug in chat stream output - python error being serialized (#1449) 2024-01-16 16:41:20 +01:00
Iván Martínez
d3acd85fe3 fix(tests): load the test settings only when running tests
Previous implementation causes false positives with the last version of LlamaIndex
2024-01-09 12:03:16 +01:00
Guido Schulz
0a89d76cc5 fix(docs): Update quickstart doc and set version in pyproject.toml to 0.2.0 2023-12-26 13:09:31 +01:00
Matthew Hill
2d27a9f956 feat(llm): Add openailike llm mode (#1447)
This mode behaves the same as the openai mode, except that it allows setting custom models not
supported by OpenAI. It can be used with any tool that serves models from an OpenAI compatible API.

Implements #1424
2023-12-26 10:26:08 +01:00
imartinez
fee9f08ef3 Move back to 3900 for the context window to avoid melting local machines 2023-12-22 18:21:43 +01:00
Iván Martínez
fde2b942bc fix(deploy): fix local and external dockerfiles 2023-12-22 14:16:46 +01:00
Iván Martínez
4c69c458ab Improve ingest logs (#1438) 2023-12-21 17:13:46 +01:00
Iván Martínez
4780540870 feat(settings): Configurable context_window and tokenizer (#1437) 2023-12-21 14:49:35 +01:00
Iván Martínez
6eeb95ec7f feat(API): Ingest plain text (#1417)
* Add ingest/text route to ingest plain text

* Add new ingest text test and adapt ingest/file ones

* Include new API in docs

* Remove duplicated logic
2023-12-18 21:47:05 +01:00
Pablo Orgaz
059f35840a fix(docker): docker broken copy (#1419) 2023-12-18 16:55:18 +01:00
Iván Martínez
8ec7cf49f4 feat(settings): Update default model to TheBloke/Mistral-7B-Instruct-v0.2-GGUF (#1415)
* Update LlamaCPP dependency

* Default to TheBloke/Mistral-7B-Instruct-v0.2-GGUF

* Fix API docs
2023-12-17 16:11:08 +01:00
Rohit Das
c71ae7cee9 feat(ui): make chat area stretch to fill the screen (#1397) 2023-12-17 12:02:13 +01:00
cognitivetech
2564f8d2bb fix(settings): correct yaml multiline string (#1403) 2023-12-16 19:02:46 +01:00
Eliott Bouhana
4e496e970a docs: remove misleading comment about pgpt working with python 3.12 (#1394)
I was misled into believing I could install using python 3.12 whereas the pyproject.toml explicitly states otherwise. This PR only removes this comment to make sure other people are not also trapped 😄
2023-12-15 21:35:02 +01:00
Federico Grandi
3582764801 ci: fix preview docs checkout ref (#1393) 2023-12-12 20:33:34 +01:00
Federico Grandi
1d28ae2915 docs: fix minor capitalization typo (#1392) 2023-12-12 20:31:38 +01:00
github-actions[bot]
e8ac51bba4 chore(main): release 0.2.0 (#1387)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2023-12-10 20:08:12 +01:00
3ly-13
145f3ec9f4 feat(ui): Allows User to Set System Prompt via "Additional Options" in Chat Interface (#1353) 2023-12-10 19:45:14 +01:00
3ly-13
a072a40a7c Allow setting OpenAI model in settings (#1386)
feat(settings): Allow setting openai model to be used. Default to GPT 3.5
2023-12-09 20:13:00 +01:00
Louis Melchior
a3ed14c58f feat(llm): drop default_system_prompt (#1385)
As discussed on Discord, the decision has been made to remove the system prompts by default, to better segregate the API and the UI usages.

A concurrent PR (#1353) is enabling the dynamic setting of a system prompt in the UI.

Therefore, if UI users want to use a custom system prompt, they can specify one directly in the UI.
If API users want to use a custom prompt, they can pass it directly in the messages they send to the API.

In light of the two use cases above, it becomes clear that a default system_prompt does not need to exist.
2023-12-08 23:13:51 +01:00
Iván Martínez
f235c50be9 Delete old docs (#1384) 2023-12-08 22:39:23 +01:00
EEmlan
9302620eac Adding german speaking model to documentation (#1374) 2023-12-08 11:26:25 +01:00
Max Zangs
9cf972563e Add setup option to Makefile (#1368) 2023-12-08 10:34:12 +01:00
github-actions[bot]
3d301d0c6f chore(main): release 0.1.0 (#1094)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2023-12-01 14:45:54 +01:00
lopagela
56af625d71 Fix the parallel ingestion mode, and make it available through conf (#1336)
* Fix the parallel ingestion mode, and make it available through conf

Also updated the documentation to show how to configure the ingest mode.

* PR feedback: redirect to documentation
2023-11-30 11:41:55 +01:00
Francisco García Sierra
b7ca7d35a0 Update ingest api docs with Windows support (#1289) 2023-11-29 20:56:37 +01:00
ishaandatta
28d03fdda8 Adding working combination of LLM and Embedding Model to recipes (#1315)
Co-authored-by: ishaandatta <ishaandatta50@gmail.com>
2023-11-29 20:54:22 +01:00
Phi Long
aabdb046ae Add docker compose (#1277)
Co-authored-by: philongn <philongn@theugroup.co>
Co-authored-by: Pablo Orgaz <pabloogc@gmail.com>
2023-11-29 16:46:40 +01:00
Iván Martínez
64ed9cd872 Allow passing a system prompt (#1318) 2023-11-29 15:51:19 +01:00
Gianni Acquisto
9c192ddd73 Added max_new_tokens as a config option to llm yaml block (#1317)
* added max_new_tokens as a configuration option to the llm block in settings

* Update fern/docs/pages/manual/settings.mdx

Co-authored-by: lopagela <lpglm@orange.fr>

* Update private_gpt/settings/settings.py

Add default value for max_new_tokens = 256

Co-authored-by: lopagela <lpglm@orange.fr>

* Addressed location of docs comment

* reformatting from running 'make check'

* remove default config value from settings.yaml

---------

Co-authored-by: lopagela <lpglm@orange.fr>
2023-11-26 19:17:29 +01:00
Gianni Acquisto
baf29f06fa Adding docs about embeddings settings + adding the embedding.mode: local in mock profile (#1316) 2023-11-26 17:32:11 +01:00
lopagela
bafdd3baf1 Ingestion Speedup Multiple strategy (#1309) 2023-11-25 20:12:09 +01:00
Iván Martínez
546ba33e6f Update readme with supporters info (#1311) 2023-11-25 18:35:59 +01:00
Iván Martínez
944c43bfa8 Multi language support - fern debug (#1307)
---------

Co-authored-by: Louis <lpglm@orange.fr>
Co-authored-by: LeMoussel <cnhx27@gmail.com>
2023-11-25 14:34:23 +01:00
Iván Martínez
e8d88f8952 Update preview-docs.yml
Use pull_request_target to be able to access FERN publication secret
2023-11-25 10:14:04 +01:00
Iván Martínez
c6d6e0e71b Update preview-docs.yml to enable debug 2023-11-24 17:51:33 +01:00
Iván Martínez
510caa576b Make qdrant the default vector db (#1285)
* Make qdrant the default vector db

---------

Co-authored-by: Pablo Orgaz <pabloogc@gmail.com>
Co-authored-by: lopagela <lpglm@orange.fr>
2023-11-20 16:19:22 +01:00
Francisco García Sierra
f1cbff0fb7 fix: Windows permission error on ingest service tmp files (#1280) 2023-11-20 10:08:03 +01:00
lopagela
a09cd7a892 Update llama_index to 0.9.3 (#1278)
* Update llama_index to 0.9.3

Had to change some imports because of a breaking change during the llama_index update to 0.9.0

* Update poetry.lock after update of llama_index
2023-11-19 18:49:36 +01:00
lopagela
36f69eed0f Refactor documentation architecture (#1264)
* Refactor documentation architecture

Split into several `tab` and sections

* Fix Fern's docs.yml after PR review

Thank you Danny!

Co-authored-by: dannysheridan <danny@buildwithfern.com>

* Re-add quickstart in the overview tab

It went missing after a refactoring of the doc architecture

* Documentation writing

* Adapt Makefile to fern documentation

* Do not create overlapping page names in fern documentation

This is causing 500. Thank you to @dsinghvi for the troubleshooting and the help!

* Add a readme to help to understand how fern documentation work and how to add new pages

* Rework the welcome view

Redirects users directly to the installation guide, with links for people who are not familiar with documentation browsing.

* Simplify the quickstart guide

* PR feedback on installation guide

A ton of refactoring can still be made there

* PR feedback on ingestion

* PR feedback on ingestion splitting

* Rename section on LLM

* Fix missing word in list of LLMs

---------

Co-authored-by: dannysheridan <danny@buildwithfern.com>
2023-11-19 18:46:09 +01:00
Iván Martínez
57a829a8e8 Move fern workflows to root workflows folder (#1273)
* Move fern workflows to root workflows folder

* Only run fern actions when docu has changed
2023-11-18 20:47:44 +01:00
Iván Martínez
8af5ed3347 Delete CNAME 2023-11-18 20:23:05 +01:00
lopagela
224812f7f6 Update to gradio 4 and allow upload multiple files at once in UI (#1271) 2023-11-18 20:19:43 +01:00
Iván Martínez
adaa00ccc8 Fix/readme UI image (#1272) 2023-11-18 20:19:03 +01:00
lopagela
99dc670df0 Add badges in the README.md (#1261)
Using https://shields.io/

To have the complete list of badges available, c.f. to their documentation: https://shields.io/badges
2023-11-18 18:47:30 +01:00
lopagela
f7d7b6cd4b Fixed the avatar of the box by using a local file (#1266)
Now rendering a specific file inside the python code
2023-11-18 12:29:27 +01:00
Pablo Orgaz
0d520026a3 fix: Windows 11 failing to auto-delete tmp file (#1260) 2023-11-17 18:23:57 +01:00
Lai Zn
4197ada626 feat: enable resume download for hf_hub_download (#1249) 2023-11-17 00:13:11 +01:00
Iván Martínez
09d9a91946 Create CNAME 2023-11-17 00:07:50 +01:00
Iván Martínez
f339f7608c Move Docs to Fern (#1257) 2023-11-16 23:25:14 +01:00
Iván Martínez
ff7e2bc9dd Delete CNAME 2023-11-16 22:53:00 +01:00
Iván Martínez
2a417d2f61 Fix/qdrant support (#1253)
* Disable check same thread by default to enable disk-based Qdrant local client to work
2023-11-16 13:29:17 +01:00
Dominik Fuchs
23fa530c31 added wipe make command (#1215)
* added `wipe` make command

* Apply suggestions from code review

Thanks for your suggestions, I like to apply them.

Co-authored-by: lopagela <lpglm@orange.fr>

* added `wipe` command to the documentation

* rebased to generate valid openapi.json

---------

Co-authored-by: lopagela <lpglm@orange.fr>
2023-11-16 11:44:02 +01:00
Anush
03d1ae6d70 feat: Qdrant support (#1228)
* feat: Qdrant support

* Update private_gpt/components/vector_store/vector_store_component.py
2023-11-13 21:23:26 +01:00
Iván Martínez
86fc4781d8 Fix openai setting literal (#1221) 2023-11-12 22:29:26 +01:00
Pablo Orgaz
022bd718e3 fix: Remove global state (#1216)
* Remove all global settings state

* chore: remove autogenerated class

* chore: cleanup

* chore: merge conflicts
2023-11-12 22:20:36 +01:00
Iván Martínez
f394ca61bb Reuse existing stored index during ingestion (#1220) 2023-11-12 22:14:38 +01:00
lopagela
aa70d3d9f0 Add simple Basic auth (#1203)
* Add simple Basic auth

To enable the basic authentication, one must set `server.auth.enabled`
to true.

The static string defined in `server.auth.secret` must be set in the
header `Authorization`.

The health check endpoint will always be accessible, no matter the API
auth configuration.

* Fix linting and type check

* Fighting with mypy being too restrictive

Had to disable mypy in the `auth` as we are not using the same signature
for the authenticated method.

mypy was complaining that the signatures of `authenticated` must be
identical, no matter in which logical branch we are.
Given that FastAPI accommodates different method signatures (it will
inject the dependencies in the method call), this mypy warning is
actually preventing us from doing something legitimate.

mypy doc: https://mypy.readthedocs.io/en/stable/common_issues.html

* Write tests to verify that the simple auth is working
2023-11-12 19:05:00 +01:00
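A hypothetical client-side sketch of the simple auth described above (not code from this repository): with `server.auth.enabled` set to true, the static string from `server.auth.secret` is sent in the `Authorization` header, while the health endpoint stays open. The base URL, the secret value, and the `/v1/ingest/list` route used here are assumptions.

```python
import requests  # assumes the `requests` package

BASE_URL = "http://localhost:8001"     # assumed local PrivateGPT address
SECRET = "Basic c2VjcmV0OmtleQ=="      # placeholder; must match server.auth.secret

# Authenticated call: the header value is compared against the configured secret.
resp = requests.get(f"{BASE_URL}/v1/ingest/list",
                    headers={"Authorization": SECRET})

# The health check endpoint is always accessible, no matter the auth configuration.
health = requests.get(f"{BASE_URL}/health")
print(resp.status_code, health.status_code)
```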
Iván Martínez
b7647542f4 Curate sources to avoid the UI crashing (#1212)
* Curate sources to avoid the UI crashing

* Remove sources from chat history to avoid confusing the LLM
2023-11-12 10:59:51 +01:00
lopagela
a579c9bdc5 Update poetry lock (#1209)
* Update the version of llama_index used to fix transient openai errors

* Update poetry.lock file

* Make `local` mode the default mode by default
2023-11-11 22:44:19 +01:00
Iván Martínez
a22969ad1f Add sources to completions APIs and UI (#1206) 2023-11-11 21:39:15 +01:00
César García
dbd99e7b4b Update description.md (#1107)
Added a section on how to customize low level args, proposing people to stick to suggested models.
2023-11-11 09:23:46 +01:00
lopagela
8487440a6f Add basic CORS (#1198) 2023-11-10 14:29:43 +01:00
lopagela
a666fd5b73 Refactor UI state management (#1191)
* Added logs at generation of the UI, and generate the UI in an object
* Make ingest script more verbose in case of an error at ingestion time
* Removed the explicit state in the UI containing ingested files
* Make script of ingestion a bit more verbose by displaying stack traces
* Change the browser tab title of privateGPT ui to `My Private GPT`
2023-11-10 10:42:43 +01:00
Iván Martínez
55e626eac7 Update README.md
Make installation docs and community more visible
2023-11-10 10:33:17 +01:00
Iván Martínez
c81f4b2ebd Search in Docs to UI (#1186)
Move from Context Chunks JSON response to a more comprehensive Search in Docs functionality
2023-11-09 12:44:57 +01:00
Iván Martínez
1e96e3a29e Stale issues/PRs not updated in 15 days (#1181)
Moving temporarily to 15 days given all PRs and Issues were "updated" on Oct 19 to label them as "primordial"
2023-11-08 12:07:17 +01:00
Iván Martínez
f75f60b234 Create stale.yml (#1177)
Workflow to automatically mark and close stale issues and PRs
2023-11-07 15:41:05 +01:00
lopagela
23cd3fea10 Parse JSON files using llama_index JSONReader (#1176)
Patch the default reader list of llama_index to support JSON files.
Injecting JSON documents this way should improve comprehension of JSON
files, since they are now actually parsed as JSON.
2023-11-07 15:39:40 +01:00
lopagela
0c40cfb115 Endpoint to delete documents ingested (#1163)
A file that is ingested is transformed into several documents (which
are organized into nodes).
This endpoint deletes documents (bits of a file). These bits can be
retrieved via the endpoint that lists all the documents.
2023-11-06 15:47:42 +01:00
lopagela
6583dc84c0 feat: Disable Gradio Analytics (#1165)
* Disable Gradio Analytics

Gradio analytics can be disabled by either using the kwargs `enable_analytics` on `gr.Blocks`, or by setting the env variable `GRADIO_ANALYTICS_ENABLED` to something different from `True`.

Since Gradio does not seem to respect its code contract (around `enable_analytics`), and it performs other operations based only on the value of `GRADIO_ANALYTICS_ENABLED` (c.f. `gradio.strings` https://github.com/gradio-app/gradio/blob/main/gradio/strings.py#L39), we are disabling Gradio analytics by setting the required env variable to `False`.

Note: Setting an environment variable using `os.environ['foo'] = 'bar'` on systems that are not unix-based might not work.

c.f. https://docs.python.org/3/library/os.html#os.environ for details on how `os.environ` works and all its caveats

* Update private_gpt/__init__.py
2023-11-06 14:31:26 +01:00
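A minimal sketch of the approach described in the commit above, assuming nothing beyond what the message states: the environment variable is set to `False` before Gradio is imported, so analytics stay disabled.

```python
import os

# Must be set before importing gradio; any value other than "True" disables analytics.
# (The commit notes that os.environ assignment has caveats on non-unix systems.)
os.environ["GRADIO_ANALYTICS_ENABLED"] = "False"

import gradio as gr  # imported only after the variable is set

with gr.Blocks() as demo:
    gr.Markdown("Gradio analytics disabled via GRADIO_ANALYTICS_ENABLED.")
```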
Pablo Orgaz
0d677e10b9 feat: move torch and transformers to local group (#1172) 2023-11-06 14:24:16 +01:00
Iván Martínez
ad512e3c42 Feature/sagemaker embedding (#1161)
* Sagemaker deployed embedding model support

---------

Co-authored-by: Pablo Orgaz <pabloogc@gmail.com>
2023-11-05 16:16:49 +01:00
Pierre Marais
f29df84301 Disable chromaDB anonymous information collection (#1144)
See https://docs.trychroma.com/telemetry
2023-11-02 12:45:48 +01:00
Pablo Orgaz
a517a588c4 fix: sagemaker config and chat methods (#1142) 2023-10-30 21:54:41 +01:00
NetroScript
b0e258265f Improve logging and error handling when ingesting an entire folder (#1132) 2023-10-30 21:54:09 +01:00
Pablo Orgaz
5d1be6e94c chore: only generate docker images on demand (#1134)
* chore: only generate docker images on demand

* chore: consistent naming
2023-10-29 21:48:16 +01:00
lopagela
64c5ae214a feat: Drop loguru and use builtin logging (#1133)
* Configure simple builtin logging

Changed the 2 existing `print` calls in the `private_gpt` code base into actual python logging and stopped using loguru (the dependency will be dropped in a later commit).
Tried to use the `key=value` logging convention in logs (to indicate what the dynamic values represent, and what is dynamic vs not).
Used the `%s` log style, so that string formatting is pushed inside the logger, giving the logger the ability to determine whether the string needs to be formatted or not (i.e. strings from debug logs might not be formatted if the log level is not debug).
The (basic) builtin log configuration has been placed in `private_gpt/__init__.py` in order to initialize the logging system even before any python code in the `private_gpt` package starts to run (ensuring any initialization log is formatted as we want).
Disabled `uvicorn`'s custom logging format, so that uvicorn logs are output in our format.

Some more concise format could be used if we want to, especially:
```
COMPACT_LOG_FORMAT = '%(asctime)s.%(msecs)03d [%(levelname)s] %(name)s - %(message)s'
```

Python documentation and cookbook on logging for reference:
* https://docs.python.org/3/library/logging.html
* https://docs.python.org/3/howto/logging.html

* Removing loguru from the dependencies

Result of `poetry remove loguru`

* PR feedback: using `logger` variable name instead of `log`

---------

Co-authored-by: Louis Melchior <louis@jaris.io>
2023-10-29 19:11:02 +01:00
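An illustrative sketch, not the project's exact `private_gpt/__init__.py`: basic builtin logging configured with the compact format quoted above, and a log call using `%s`-style arguments with the `key=value` convention. The endpoint and mode values are made up for the example.

```python
import logging

COMPACT_LOG_FORMAT = "%(asctime)s.%(msecs)03d [%(levelname)s] %(name)s - %(message)s"

# Lazy %s formatting lets the logger skip string interpolation when the
# record's level is below the configured threshold.
logging.basicConfig(level=logging.INFO, format=COMPACT_LOG_FORMAT, datefmt="%H:%M:%S")
logger = logging.getLogger(__name__)

logger.info("ui ready, endpoint=%s mode=%s", "http://localhost:8001", "local")
```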
Pablo Orgaz
24cfddd60f fix: fix pytorch version to avoid wheel bug (#1123)
* fix: fix pytorch version

* fix: settings env var regex and split

* fix: add models folder for docker user
2023-10-27 20:27:40 +02:00
Pablo Orgaz
895588b82a fix: Docker and sagemaker setup (#1118)
* fix: docker copying extra files

* feat: allow configuring mode through env vars

* feat: Attempt to build and tag a docker image

* fix: run docker on release

* fix: typing in prompt transformation

* chore: remove tutorial comments
2023-10-27 13:29:29 +02:00
Shivam Singh
768e5ff505 chore: Update README.md (#1102)
Removed Grammatical errors
2023-10-24 12:43:41 +02:00
Iván Martínez
78546524d0 Use OpenAI for embeddings when openai mode is selected (#1096) 2023-10-23 10:50:42 +02:00
Federico Grandi
769a047b54 chore: add GitHub metadata (#1085)
* chore: add citation file

* chore: add LICENSE

* chore: update email

Co-authored-by: Daniel Gallego <danielgallegovico@gmail.com>

* chore: update email in citation file

Co-authored-by: Pablo Orgaz <pabloogc@gmail.com>

* chore: add ORCIDs to citation file

* docs: update README with citation info

---------

Co-authored-by: Daniel Gallego <danielgallegovico@gmail.com>
Co-authored-by: Pablo Orgaz <pabloogc@gmail.com>
2023-10-23 10:49:02 +02:00
Ikko Eltociear Ashimine
ba23443a70 fix: typo in README.md (#1091)
componentes -> components
2023-10-23 08:54:12 +02:00
github-actions[bot]
b8383e00a6 chore(main): release 0.0.2 (#1088)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2023-10-20 18:29:21 +02:00
Iván Martínez
f5a9bf4e37 fix: chromadb max batch size (#1087) 2023-10-20 18:24:56 +02:00
Pablo Orgaz
b46c1087e2 chore: add linux instructions and C++ guide (#1082)
* fix: add linux instructions

Co-authored-by: BW-Projects

* chore: Add C++ as a base requirement in the docs

* chore: Add clang for OSX

* chore: Update docs for OSX and gcc

* chore: make docs

---------

Co-authored-by: Pablo Orgaz <pablo@Pablos-MacBook-Pro.local>
2023-10-20 14:36:50 +02:00
github-actions[bot]
97d860a7c9 chore(main): release 0.0.1 (#1086)
* chore(main): release 0.0.1

* Update CHANGELOG.md

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Pablo Orgaz <pabloogc@gmail.com>
2023-10-20 13:08:51 +02:00
Iván Martínez
490d93fdc1 chore: Initial version
Release-As: 0.0.1
2023-10-20 12:57:52 +02:00
Pablo Orgaz
aa4bb17a2e fix: make docs more visible (#1081) 2023-10-19 22:12:30 +02:00
Iván Martínez
d249a17c33 feat(ui): add LLM mode to UI (#1080) 2023-10-19 19:21:29 +02:00
Pablo Orgaz
b7450911b2 feat: Release GitHub action (#1078)
* feat: add release-please.yml

* feat: add release-please.yml

---------

Co-authored-by: Pablo Orgaz <pablo@Pablos-MacBook-Pro.local>
2023-10-19 17:34:41 +02:00
Iván Martínez
3ad1da019b Update README.md
Primordial branch correct url
2023-10-19 16:17:35 +02:00
Pablo Orgaz
51cc638758 Next version of PrivateGPT (#1077)
* Dockerize private-gpt

* Use port 8001 for local development

* Add setup script

* Add CUDA Dockerfile

* Create README.md

* Make the API use OpenAI response format

* Truncate prompt

* refactor: add models and __pycache__ to .gitignore

* Better naming

* Update readme

* Move models ignore to it's folder

* Add scaffolding

* Apply formatting

* Fix tests

* Working sagemaker custom llm

* Fix linting

* Fix linting

* Enable streaming

* Allow all 3.11 python versions

* Use llama 2 prompt format and fix completion

* Restructure (#3)

Co-authored-by: Pablo Orgaz <pablo@Pablos-MacBook-Pro.local>

* Fix Dockerfile

* Use a specific build stage

* Cleanup

* Add FastAPI skeleton

* Cleanup openai package

* Fix DI and tests

* Split tests and tests with coverage

* Remove old scaffolding

* Add settings logic (#4)

* Add settings logic

* Add settings for sagemaker

---------

Co-authored-by: Pablo Orgaz <pablo@Pablos-MacBook-Pro.local>

* Local LLM (#5)

* Add settings logic

* Add settings for sagemaker

* Add settings-local-example.yaml

* Delete terraform files

* Refactor tests to use fixtures

* Join deltas

* Add local model support

---------

Co-authored-by: Pablo Orgaz <pablo@Pablos-MacBook-Pro.local>

* Update README.md

* Fix tests

* Version bump

* Enable simple llamaindex observability (#6)

* Enable simple llamaindex observability

* Improve code through linting

* Update README.md

* Move to async (#7)

* Migrate implementation to use asyncio

* Formatting

* Cleanup

* Linting

---------

Co-authored-by: Pablo Orgaz <pablo@Pablos-MacBook-Pro.local>

* Query Docs and gradio UI

* Remove unnecessary files

* Git ignore chromadb folder

* Async migration + DI Cleanup

* Fix tests

* Add integration test

* Use fastapi responses

* Retrieval service with partial implementation

* Cleanup

* Run formatter

* Fix types

* Fetch nodes asynchronously

* Install local dependencies in tests

* Install ui dependencies in tests

* Install dependencies for llama-cpp

* Fix sudo

* Attempt to fix cuda issues

* Attempt to fix cuda issues

* Try to reclaim some space from ubuntu machine

* Retrieval with context

* Fix lint and imports

* Fix mypy

* Make retrieval API a POST

* Make Completions body a dataclass

* Fix LLM chat message order

* Add Query Chunks to Gradio UI

* Improve rag query prompt

* Rollback CI Changes

* Move to sync code

* Using Llamaindex abstraction for query retrieval

* Fix types

* Default to CONDENSED chat mode for contextualized chat

* Rename route function

* Add Chat endpoint

* Remove webhooks

* Add IntelliJ run config to gitignore

* .gitignore applied

* Sync chat completion

* Refactor total

* Typo in context_files.py

* Add embeddings component and service

* Remove wrong dataclass from IngestService

* Filter by context file id implementation

* Fix typing

* Implement context_filter and separate from the bool use_context in the API

* Change chunks api to avoid conceptual class of the context concept

* Deprecate completions and fix tests

* Remove remaining dataclasses

* Use embedding component in ingest service

* Fix ingestion to have multipart and local upload

* Fix ingestion API

* Add chunk tests

* Add configurable paths

* Cleaning up

* Add more docs

* IngestResponse includes a list of IngestedDocs

* Use IngestedDoc in the Chunk document reference

* Rename ingest routes to ingest_router.py

* Fix test working directory for intellij

* Set testpaths for pytest

* Remove unused as_chat_engine

* Add .fleet ide to gitignore

* Make LLM and Embedding model configurable

* Fix imports and checks

* Let local_data folder exist empty in the repository

* Don't use certain metadata in LLM

* Remove long lines

* Fix windows installation

* Typos

* Update poetry.lock

* Add TODO for linux

* Script and first version of docs

* No jekill build

* Fix relative url to openapi json

* Change default docs values

* Move chromadb dependency to the general group

* Fix tests to use separate local_data

* Create CNAME

* Update CNAME

* Fix openapi.json relative path

* PrivateGPT logo

* WIP OpenAPI documentation metadata

* Add ingest script (#11)

* Add ingest script

* Fix broken name refactor

* Add ingest docs and Makefile script

* Linting

* Move transformers to main dependency

* Move torch to main dependencies

* Don't load HuggingFaceEmbedding in tests

* Fix lint

---------

Co-authored-by: Pablo Orgaz <pablo@Pablos-MacBook-Pro.local>

* Rename file to camel_case

* Commit settings-local.yaml

* Move documentation to public docs

* Fix docker image for linux

* Installation and Running the Server documentation

* Move back to docs folder, as it is the only supported by github pages

* Delete CNAME

* Create CNAME

* Delete CNAME

* Create CNAME

* Improved API documentation

* Fix lint

* Completions documentation

* Updated openapi scheme

* Ingestion API doc

* Minor doc changes

* Updated openapi scheme

* Chunks API documentation

* Embeddings and Health API, and homogeneous responses

* Revamp README with new skeleton of content

* More docs

* PrivateGPT logo

* Improve UI

* Update ingestion docu

* Update README with new sections

* Use context window in the retriever

* Gradio Documentation

* Add logo to UI

* Include Contributing and Community sections to README

* Update links to resources in the README

* Small README.md updates

* Wrap lines of README.md

* Don't put health under /v1

* Add copy button to Chat

* Architecture documentation

* Updated openapi.json

* Updated openapi.json

* Updated openapi.json

* Change UI label

* Update documentation

* Add releases link to README.md

* Gradio avatar and stop debug

* Readme update

* Clean old files

* Remove unused terraform checks

* Update twitter link.

* Disable minimum coverage

* Clean install message in README.md

---------

Co-authored-by: Pablo Orgaz <pablo@Pablos-MacBook-Pro.local>
Co-authored-by: Iván Martínez <ivanmartit@gmail.com>
Co-authored-by: RubenGuerrero <ruben.guerrero@boopos.com>
Co-authored-by: Daniel Gallego Vico <daniel.gallego@bq.com>
2023-10-19 16:04:35 +02:00
171 changed files with 16812 additions and 3735 deletions

.docker/router.yml (new file, 16 lines)

@@ -0,0 +1,16 @@
http:
  services:
    ollama:
      loadBalancer:
        healthCheck:
          interval: 5s
          path: /
        servers:
          - url: http://ollama-cpu:11434
          - url: http://ollama-cuda:11434
          - url: http://host.docker.internal:11434
  routers:
    ollama-router:
      rule: "PathPrefix(`/`)"
      service: ollama

.dockerignore (new file, 12 lines)

@@ -0,0 +1,12 @@
.venv
models
.github
.vscode
.DS_Store
.mypy_cache
.ruff_cache
local_data
terraform
tests
Dockerfile
Dockerfile.*

.github/ISSUE_TEMPLATE/bug.yml (new file, 105 lines)

@@ -0,0 +1,105 @@
name: Bug Report
description: Report a bug or issue with the project.
title: "[BUG] "
labels: ["bug"]
body:
  - type: markdown
    attributes:
      value: |
        **Please describe the bug you encountered.**
  - type: checkboxes
    id: pre-check
    attributes:
      label: Pre-check
      description: Please confirm that you have searched for duplicate issues before creating this one.
      options:
        - label: I have searched the existing issues and none cover this bug.
          required: true
  - type: textarea
    id: description
    attributes:
      label: Description
      description: Provide a detailed description of the bug.
      placeholder: "Detailed description of the bug"
    validations:
      required: true
  - type: textarea
    id: steps
    attributes:
      label: Steps to Reproduce
      description: Provide the steps to reproduce the bug.
      placeholder: "1. Step one\n2. Step two\n3. Step three"
    validations:
      required: true
  - type: input
    id: expected
    attributes:
      label: Expected Behavior
      description: Describe what you expected to happen.
      placeholder: "Expected behavior"
    validations:
      required: true
  - type: input
    id: actual
    attributes:
      label: Actual Behavior
      description: Describe what actually happened.
      placeholder: "Actual behavior"
    validations:
      required: true
  - type: input
    id: environment
    attributes:
      label: Environment
      description: Provide details about your environment (e.g., OS, GPU, profile, etc.).
      placeholder: "Environment details"
    validations:
      required: true
  - type: input
    id: additional
    attributes:
      label: Additional Information
      description: Provide any additional information that may be relevant (e.g., logs, screenshots).
      placeholder: "Any additional information that may be relevant"
  - type: input
    id: version
    attributes:
      label: Version
      description: Provide the version of the project where you encountered the bug.
      placeholder: "Version number"
  - type: markdown
    attributes:
      value: |
        **Please ensure the following setup checklist has been reviewed before submitting the bug report.**
  - type: checkboxes
    id: general-setup-checklist
    attributes:
      label: Setup Checklist
      description: Verify the following general aspects of your setup.
      options:
        - label: Confirm that you have followed the installation instructions in the projects documentation.
        - label: Check that you are using the latest version of the project.
        - label: Verify disk space availability for model storage and data processing.
        - label: Ensure that you have the necessary permissions to run the project.
  - type: checkboxes
    id: nvidia-setup-checklist
    attributes:
      label: NVIDIA GPU Setup Checklist
      description: Verify the following aspects of your NVIDIA GPU setup.
      options:
        - label: Check that the all CUDA dependencies are installed and are compatible with your GPU (refer to [CUDA's documentation](https://docs.nvidia.com/deploy/cuda-compatibility/#frequently-asked-questions))
        - label: Ensure an NVIDIA GPU is installed and recognized by the system (run `nvidia-smi` to verify).
        - label: Ensure proper permissions are set for accessing GPU resources.
        - label: Docker users - Verify that the NVIDIA Container Toolkit is configured correctly (e.g. run `sudo docker run --rm --gpus all nvidia/cuda:11.0.3-base-ubuntu20.04 nvidia-smi`)

(deleted file)

@@ -1,24 +0,0 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: bug
assignees: ''
---
Note: if you'd like to *ask a question* or *open a discussion*, head over to the [Discussions](https://github.com/imartinez/privateGPT/discussions) section and post it there.
**Describe the bug and how to reproduce it**
A clear and concise description of what the bug is and the steps to reproduce the behavior.
**Expected behavior**
A clear and concise description of what you expected to happen.
**Environment (please complete the following information):**
- OS / hardware: [e.g. macOS 12.6 / M1]
- Python version [e.g. 3.11.3]
- Other relevant information
**Additional context**
Add any other context about the problem here.

.github/ISSUE_TEMPLATE/config.yml (new file, 8 lines)

@@ -0,0 +1,8 @@
blank_issues_enabled: false
contact_links:
  - name: Documentation
    url: https://docs.privategpt.dev
    about: Please refer to our documentation for more details and guidance.
  - name: Discord
    url: https://discord.gg/bK6mRVpErU
    about: Join our Discord community to ask questions and get help.

.github/ISSUE_TEMPLATE/docs.yml (new file, 19 lines)

@@ -0,0 +1,19 @@
name: Documentation
description: Suggest a change or addition to the documentation.
title: "[DOCS] "
labels: ["documentation"]
body:
  - type: markdown
    attributes:
      value: |
        **Please describe the documentation change or addition you would like to suggest.**
  - type: textarea
    id: description
    attributes:
      label: Description
      description: Provide a detailed description of the documentation change.
      placeholder: "Detailed description of the documentation change"
    validations:
      required: true

.github/ISSUE_TEMPLATE/feature.yml (new file, 37 lines)

@@ -0,0 +1,37 @@
name: Enhancement
description: Suggest an enhancement or improvement to the project.
title: "[FEATURE] "
labels: ["enhancement"]
body:
  - type: markdown
    attributes:
      value: |
        **Please describe the enhancement or improvement you would like to suggest.**
  - type: textarea
    id: feature_description
    attributes:
      label: Feature Description
      description: Provide a detailed description of the enhancement.
      placeholder: "Detailed description of the enhancement"
    validations:
      required: true
  - type: textarea
    id: reason
    attributes:
      label: Reason
      description: Explain the reason for this enhancement.
      placeholder: "Reason for the enhancement"
    validations:
      required: true
  - type: textarea
    id: value
    attributes:
      label: Value of Feature
      description: Describe the value or benefits this feature will bring.
      placeholder: "Value or benefits of the feature"
    validations:
      required: true

(deleted file)

@@ -1,22 +0,0 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: enhancement
assignees: ''
---
Note: if you'd like to *ask a question* or *open a discussion*, head over to the [Discussions](https://github.com/imartinez/privateGPT/discussions) section and post it there.
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.

.github/ISSUE_TEMPLATE/question.yml (new file, 19 lines)

@@ -0,0 +1,19 @@
name: Question
description: Ask a question about the project.
title: "[QUESTION] "
labels: ["question"]
body:
  - type: markdown
    attributes:
      value: |
        **Please describe your question in detail.**
  - type: textarea
    id: question
    attributes:
      label: Question
      description: Provide a detailed description of your question.
      placeholder: "Detailed description of the question"
    validations:
      required: true

.github/pull_request_template.md (new file, 37 lines)

@@ -0,0 +1,37 @@
# Description
Please include a summary of the change and which issue is fixed. Please also include relevant motivation and context. List any dependencies that are required for this change.
## Type of Change
Please delete options that are not relevant.
- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] This change requires a documentation update
## How Has This Been Tested?
Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration
- [ ] Added new unit/integration tests
- [ ] I stared at the code and made sure it makes sense
**Test Configuration**:
* Firmware version:
* Hardware:
* Toolchain:
* SDK:
## Checklist:
- [ ] My code follows the style guidelines of this project
- [ ] I have performed a self-review of my code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have made corresponding changes to the documentation
- [ ] My changes generate no new warnings
- [ ] I have added tests that prove my fix is effective or that my feature works
- [ ] New and existing unit tests pass locally with my changes
- [ ] Any dependent changes have been merged and published in downstream modules
- [ ] I ran `make check; make test` to ensure mypy and tests pass

(new file)

@@ -0,0 +1,30 @@
name: "Install Dependencies"
description: "Action to build the project dependencies from the main versions"
inputs:
python_version:
required: true
type: string
default: "3.11.4"
poetry_version:
required: true
type: string
default: "1.8.3"
runs:
using: composite
steps:
- name: Install Poetry
uses: snok/install-poetry@v1
with:
version: ${{ inputs.poetry_version }}
virtualenvs-create: true
virtualenvs-in-project: false
installer-parallel: true
- uses: actions/setup-python@v4
with:
python-version: ${{ inputs.python_version }}
cache: "poetry"
- name: Install Dependencies
run: poetry install --extras "ui vector-stores-qdrant" --no-root
shell: bash

21
.github/workflows/fern-check.yml vendored Normal file

@@ -0,0 +1,21 @@
name: fern check
on:
pull_request:
branches:
- main
paths:
- "fern/**"
jobs:
fern-check:
runs-on: ubuntu-latest
steps:
- name: Checkout repo
uses: actions/checkout@v4
- name: Install Fern
run: npm install -g fern-api
- name: Check Fern API is valid
run: fern check
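The same validation can be run locally before opening a PR (assuming Node.js is available), mirroring the steps above:
```shell
npm install -g fern-api
fern check   # validates the Fern configuration under fern/
```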

83
.github/workflows/generate-release.yml vendored Normal file

@@ -0,0 +1,83 @@
name: generate-release
on:
release:
types: [ published ]
workflow_dispatch:
env:
REGISTRY: docker.io
IMAGE_NAME: ${{ github.repository }}
platforms: linux/amd64,linux/arm64
DEFAULT_TYPE: "ollama"
jobs:
build-and-push-image:
runs-on: ubuntu-latest
strategy:
matrix:
type: [ llamacpp-cpu, ollama ]
permissions:
contents: read
packages: write
outputs:
version: ${{ steps.version.outputs.version }}
steps:
- name: Free Disk Space (Ubuntu)
uses: jlumbroso/free-disk-space@main
with:
tool-cache: false
android: true
dotnet: true
haskell: true
large-packages: true
docker-images: false
swap-storage: true
- name: Checkout repository
uses: actions/checkout@v4
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Log in to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@v5
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
tags: |
type=semver,pattern={{version}},enable=${{ matrix.type == env.DEFAULT_TYPE }}
type=semver,pattern={{version}}-${{ matrix.type }}
type=semver,pattern={{major}}.{{minor}},enable=${{ matrix.type == env.DEFAULT_TYPE }}
type=semver,pattern={{major}}.{{minor}}-${{ matrix.type }}
type=raw,value=latest,enable=${{ matrix.type == env.DEFAULT_TYPE }}
type=sha
flavor: |
latest=false
- name: Build and push Docker image
uses: docker/build-push-action@v6
with:
context: .
file: Dockerfile.${{ matrix.type }}
platforms: ${{ env.platforms }}
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
- name: Version output
id: version
run: echo "version=${{ steps.meta.outputs.version }}" >> "$GITHUB_OUTPUT"

54
.github/workflows/preview-docs.yml vendored Normal file

@@ -0,0 +1,54 @@
name: deploy preview docs
on:
pull_request_target:
branches:
- main
paths:
- "fern/**"
jobs:
preview-docs:
runs-on: ubuntu-latest
permissions:
contents: read
pull-requests: write
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
ref: refs/pull/${{ github.event.pull_request.number }}/merge
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: "18"
- name: Install Fern
run: npm install -g fern-api
- name: Generate Documentation Preview with Fern
id: generate_docs
env:
FERN_TOKEN: ${{ secrets.FERN_TOKEN }}
run: |
output=$(fern generate --docs --preview --log-level debug)
echo "$output"
# Extract the URL
preview_url=$(echo "$output" | grep -oP '(?<=Published docs to )https://[^\s]*')
# Set the output for the step
echo "::set-output name=preview_url::$preview_url"
- name: Comment PR with URL using github-actions bot
uses: actions/github-script@v7
if: ${{ steps.generate_docs.outputs.preview_url }}
with:
script: |
const preview_url = '${{ steps.generate_docs.outputs.preview_url }}';
github.rest.issues.createComment({
issue_number: context.issue.number,
owner: context.repo.owner,
repo: context.repo.repo,
body: `Published docs preview URL: ${preview_url}`
})
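Note that `::set-output` is deprecated on current GitHub Actions runners; a sketch of the equivalent step output using the environment-file mechanism (same `preview_url` variable as above):
```shell
# Equivalent to the ::set-output line above, written to the GITHUB_OUTPUT file instead
echo "preview_url=$preview_url" >> "$GITHUB_OUTPUT"
```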

26
.github/workflows/publish-docs.yml vendored Normal file

@@ -0,0 +1,26 @@
name: publish docs
on:
push:
branches:
- main
paths:
- "fern/**"
jobs:
publish-docs:
runs-on: ubuntu-latest
steps:
- name: Checkout repo
uses: actions/checkout@v4
- name: Setup node
uses: actions/setup-node@v3
- name: Download Fern
run: npm install -g fern-api
- name: Generate and Publish Docs
env:
FERN_TOKEN: ${{ secrets.FERN_TOKEN }}
run: fern generate --docs --log-level debug

19
.github/workflows/release-please.yml vendored Normal file

@@ -0,0 +1,19 @@
name: release-please
on:
push:
branches:
- main
permissions:
contents: write
pull-requests: write
jobs:
release-please:
runs-on: ubuntu-latest
steps:
- uses: google-github-actions/release-please-action@v3
with:
release-type: simple
version-file: version.txt

30
.github/workflows/stale.yml vendored Normal file

@@ -0,0 +1,30 @@
# This workflow warns and then closes issues and PRs that have had no activity for a specified amount of time.
#
# You can adjust the behavior by modifying this file.
# For more information, see:
# https://github.com/actions/stale
name: Mark stale issues and pull requests
on:
schedule:
- cron: '42 5 * * *'
jobs:
stale:
runs-on: ubuntu-latest
permissions:
issues: write
pull-requests: write
steps:
- uses: actions/stale@v8
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
days-before-stale: 15
stale-issue-message: 'Stale issue'
stale-pr-message: 'Stale pull request'
stale-issue-label: 'stale'
stale-pr-label: 'stale'
exempt-issue-labels: 'autorelease: pending'
exempt-pr-labels: 'autorelease: pending'

67
.github/workflows/tests.yml vendored Normal file

@@ -0,0 +1,67 @@
name: tests
on:
push:
branches:
- main
pull_request:
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.head_ref || github.ref }}
cancel-in-progress: ${{ github.event_name == 'pull_request' }}
jobs:
setup:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: ./.github/workflows/actions/install_dependencies
checks:
needs: setup
runs-on: ubuntu-latest
name: ${{ matrix.quality-command }}
strategy:
matrix:
quality-command:
- black
- ruff
- mypy
steps:
- uses: actions/checkout@v3
- uses: ./.github/workflows/actions/install_dependencies
- name: run ${{ matrix.quality-command }}
run: make ${{ matrix.quality-command }}
test:
needs: setup
runs-on: ubuntu-latest
name: test
steps:
- uses: actions/checkout@v3
- uses: ./.github/workflows/actions/install_dependencies
- name: run test
run: make test-coverage
# Run even if make test fails for coverage reports
# TODO: select a better xml results displayer
- name: Archive test results coverage results
uses: actions/upload-artifact@v3
if: always()
with:
name: test_results
path: tests-results.xml
- name: Archive code coverage results
uses: actions/upload-artifact@v3
if: always()
with:
name: code-coverage-report
path: htmlcov/
all_checks_passed:
# Used to easily force requirements checks in GitHub
needs:
- checks
- test
runs-on: ubuntu-latest
steps:
- run: echo "All checks passed"

185
.gitignore vendored

@@ -1,174 +1,31 @@
# OSX
.DS_STORE
.venv
.env
venv
# Models
models/
settings-me.yaml
# Local Chroma db
.chroma/
db/
persist_directory/chroma.sqlite
.ruff_cache
.pytest_cache
.mypy_cache
# Byte-compiled / optimized / DLL files
# byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
.pybuilder/
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# unit tests / coverage reports
/tests-results.xml
/.coverage
/coverage.xml
/htmlcov/
# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version
/.python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# IDE
.idea/
.vscode/
/.run/
.fleet/
# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock
# pdm
# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
# in version control.
# https://pdm.fming.dev/#use-with-ide
.pdm.toml
# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# pytype static type analyzer
.pytype/
# Cython debug symbols
cython_debug/
# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/
# vscode
.vscode/launch.json
# macOS
.DS_Store


@@ -1,44 +1,43 @@
---
files: ^(.*\.(py|json|md|sh|yaml|cfg|txt))$
exclude: ^(\.[^/]*cache/.*|.*/_user.py|source_documents/)$
default_install_hook_types:
# Mandatory to install both pre-commit and pre-push hooks (see https://pre-commit.com/#top_level-default_install_hook_types)
# Add new hook types here to ensure automatic installation when running `pre-commit install`
- pre-commit
- pre-push
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.4.0
hooks:
#- id: no-commit-to-branch
# args: [--branch, main]
- id: check-yaml
args: [--unsafe]
# - id: debug-statements
- id: end-of-file-fixer
- id: trailing-whitespace
exclude-files: \.md$
- id: check-json
- id: mixed-line-ending
# - id: check-builtin-literals
# - id: check-ast
- id: check-merge-conflict
- id: check-executables-have-shebangs
- id: check-shebang-scripts-are-executable
- id: check-docstring-first
- id: fix-byte-order-marker
- id: check-case-conflict
# - id: check-toml
- repo: https://github.com/adrienverge/yamllint.git
rev: v1.29.0
hooks:
- id: yamllint
args:
- --no-warnings
- -d
- '{extends: relaxed, rules: {line-length: {max: 90}}}'
- repo: https://github.com/codespell-project/codespell
rev: v2.2.2
hooks:
- id: codespell
args:
# - --builtin=clear,rare,informal,usage,code,names,en-GB_to_en-US
- --builtin=clear,rare,informal,usage,code,names
- --ignore-words-list=hass,master
- --skip="./.*"
- --quiet-level=2
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.3.0
hooks:
- id: trailing-whitespace
- id: end-of-file-fixer
- id: check-yaml
- id: check-json
- id: check-added-large-files
- repo: local
hooks:
- id: black
name: Formatting (black)
entry: black
language: system
types: [python]
stages: [commit]
- id: ruff
name: Linter (ruff)
entry: ruff
language: system
types: [python]
stages: [commit]
- id: mypy
name: Type checking (mypy)
entry: make mypy
pass_filenames: false
language: system
types: [python]
stages: [commit]
- id: test
name: Unit tests (pytest)
entry: make test
pass_filenames: false
language: system
types: [python]
stages: [push]
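A minimal local usage sketch for this configuration (assuming `pre-commit` itself is installed, e.g. via pipx):
```shell
pre-commit install            # installs both the pre-commit and pre-push hook types listed above
pre-commit run --all-files    # run every hook once against the whole repository
```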

163
CHANGELOG.md Normal file

@@ -0,0 +1,163 @@
# Changelog
## [0.6.1](https://github.com/zylon-ai/private-gpt/compare/v0.6.0...v0.6.1) (2024-08-05)
### Bug Fixes
* add built image from DockerHub ([#2042](https://github.com/zylon-ai/private-gpt/issues/2042)) ([f09f6dd](https://github.com/zylon-ai/private-gpt/commit/f09f6dd2553077d4566dbe6b48a450e05c2f049e))
* Adding azopenai to model list ([#2035](https://github.com/zylon-ai/private-gpt/issues/2035)) ([1c665f7](https://github.com/zylon-ai/private-gpt/commit/1c665f7900658144f62814b51f6e3434a6d7377f))
* **deploy:** generate docker release when new version is released ([#2038](https://github.com/zylon-ai/private-gpt/issues/2038)) ([1d4c14d](https://github.com/zylon-ai/private-gpt/commit/1d4c14d7a3c383c874b323d934be01afbaca899e))
* **deploy:** improve Docker-Compose and quickstart on Docker ([#2037](https://github.com/zylon-ai/private-gpt/issues/2037)) ([dae0727](https://github.com/zylon-ai/private-gpt/commit/dae0727a1b4abd35d2b0851fe30e0a4ed67e0fbb))
## [0.6.0](https://github.com/zylon-ai/private-gpt/compare/v0.5.0...v0.6.0) (2024-08-02)
### Features
* bump dependencies ([#1987](https://github.com/zylon-ai/private-gpt/issues/1987)) ([b687dc8](https://github.com/zylon-ai/private-gpt/commit/b687dc852413404c52d26dcb94536351a63b169d))
* **docs:** add privategpt-ts sdk ([#1924](https://github.com/zylon-ai/private-gpt/issues/1924)) ([d13029a](https://github.com/zylon-ai/private-gpt/commit/d13029a046f6e19e8ee65bef3acd96365c738df2))
* **docs:** Fix setup docu ([#1926](https://github.com/zylon-ai/private-gpt/issues/1926)) ([067a5f1](https://github.com/zylon-ai/private-gpt/commit/067a5f144ca6e605c99d7dbe9ca7d8207ac8808d))
* **docs:** update doc for ipex-llm ([#1968](https://github.com/zylon-ai/private-gpt/issues/1968)) ([19a7c06](https://github.com/zylon-ai/private-gpt/commit/19a7c065ef7f42b37f289dd28ac945f7afc0e73a))
* **docs:** update documentation and fix preview-docs ([#2000](https://github.com/zylon-ai/private-gpt/issues/2000)) ([4523a30](https://github.com/zylon-ai/private-gpt/commit/4523a30c8f004aac7a7ae224671e2c45ec0cb973))
* **llm:** add progress bar when ollama is pulling models ([#2031](https://github.com/zylon-ai/private-gpt/issues/2031)) ([cf61bf7](https://github.com/zylon-ai/private-gpt/commit/cf61bf780f8d122e4057d002abf03563bb45614a))
* **llm:** autopull ollama models ([#2019](https://github.com/zylon-ai/private-gpt/issues/2019)) ([20bad17](https://github.com/zylon-ai/private-gpt/commit/20bad17c9857809158e689e9671402136c1e3d84))
* **llm:** Support for Google Gemini LLMs and Embeddings ([#1965](https://github.com/zylon-ai/private-gpt/issues/1965)) ([fc13368](https://github.com/zylon-ai/private-gpt/commit/fc13368bc72d1f4c27644677431420ed77731c03))
* make llama3.1 as default ([#2022](https://github.com/zylon-ai/private-gpt/issues/2022)) ([9027d69](https://github.com/zylon-ai/private-gpt/commit/9027d695c11fbb01e62424b855665de71d513417))
* prompt_style applied to all LLMs + extra LLM params. ([#1835](https://github.com/zylon-ai/private-gpt/issues/1835)) ([e21bf20](https://github.com/zylon-ai/private-gpt/commit/e21bf20c10938b24711d9f2c765997f44d7e02a9))
* **recipe:** add our first recipe `Summarize` ([#2028](https://github.com/zylon-ai/private-gpt/issues/2028)) ([8119842](https://github.com/zylon-ai/private-gpt/commit/8119842ae6f1f5ecfaf42b06fa0d1ffec675def4))
* **vectordb:** Milvus vector db Integration ([#1996](https://github.com/zylon-ai/private-gpt/issues/1996)) ([43cc31f](https://github.com/zylon-ai/private-gpt/commit/43cc31f74015f8d8fcbf7a8ea7d7d9ecc66cf8c9))
* **vectorstore:** Add clickhouse support as vectore store ([#1883](https://github.com/zylon-ai/private-gpt/issues/1883)) ([2612928](https://github.com/zylon-ai/private-gpt/commit/26129288394c7483e6fc0496a11dc35679528cc1))
### Bug Fixes
* "no such group" error in Dockerfile, added docx2txt and cryptography deps ([#1841](https://github.com/zylon-ai/private-gpt/issues/1841)) ([947e737](https://github.com/zylon-ai/private-gpt/commit/947e737f300adf621d2261d527192f36f3387f8e))
* **config:** make tokenizer optional and include a troubleshooting doc ([#1998](https://github.com/zylon-ai/private-gpt/issues/1998)) ([01b7ccd](https://github.com/zylon-ai/private-gpt/commit/01b7ccd0648be032846647c9a184925d3682f612))
* **docs:** Fix concepts.mdx referencing to installation page ([#1779](https://github.com/zylon-ai/private-gpt/issues/1779)) ([dde0224](https://github.com/zylon-ai/private-gpt/commit/dde02245bcd51a7ede7b6789c82ae217cac53d92))
* **docs:** Update installation.mdx ([#1866](https://github.com/zylon-ai/private-gpt/issues/1866)) ([c1802e7](https://github.com/zylon-ai/private-gpt/commit/c1802e7cf0e56a2603213ec3b6a4af8fadb8a17a))
* ffmpy dependency ([#2020](https://github.com/zylon-ai/private-gpt/issues/2020)) ([dabf556](https://github.com/zylon-ai/private-gpt/commit/dabf556dae9cb00fe0262270e5138d982585682e))
* light mode ([#2025](https://github.com/zylon-ai/private-gpt/issues/2025)) ([1020cd5](https://github.com/zylon-ai/private-gpt/commit/1020cd53288af71a17882781f392512568f1b846))
* **LLM:** mistral ignoring assistant messages ([#1954](https://github.com/zylon-ai/private-gpt/issues/1954)) ([c7212ac](https://github.com/zylon-ai/private-gpt/commit/c7212ac7cc891f9e3c713cc206ae9807c5dfdeb6))
* **llm:** special tokens and leading space ([#1831](https://github.com/zylon-ai/private-gpt/issues/1831)) ([347be64](https://github.com/zylon-ai/private-gpt/commit/347be643f7929c56382a77c3f45f0867605e0e0a))
* make embedding_api_base match api_base when on docker ([#1859](https://github.com/zylon-ai/private-gpt/issues/1859)) ([2a432bf](https://github.com/zylon-ai/private-gpt/commit/2a432bf9c5582a94eb4052b1e80cabdb118d298e))
* nomic embeddings ([#2030](https://github.com/zylon-ai/private-gpt/issues/2030)) ([5465958](https://github.com/zylon-ai/private-gpt/commit/54659588b5b109a3dd17cca835e275240464d275))
* prevent to ingest local files (by default) ([#2010](https://github.com/zylon-ai/private-gpt/issues/2010)) ([e54a8fe](https://github.com/zylon-ai/private-gpt/commit/e54a8fe0433252808d0a60f6a08a43c9f5a42f3b))
* Replacing unsafe `eval()` with `json.loads()` ([#1890](https://github.com/zylon-ai/private-gpt/issues/1890)) ([9d0d614](https://github.com/zylon-ai/private-gpt/commit/9d0d614706581a8bfa57db45f62f84ab23d26f15))
* **settings:** enable cors by default so it will work when using ts sdk (spa) ([#1925](https://github.com/zylon-ai/private-gpt/issues/1925)) ([966af47](https://github.com/zylon-ai/private-gpt/commit/966af4771dbe5cf3fdf554b5fdf8f732407859c4))
* **ui:** gradio bug fixes ([#2021](https://github.com/zylon-ai/private-gpt/issues/2021)) ([d4375d0](https://github.com/zylon-ai/private-gpt/commit/d4375d078f18ba53562fd71651159f997fff865f))
* unify embedding models ([#2027](https://github.com/zylon-ai/private-gpt/issues/2027)) ([40638a1](https://github.com/zylon-ai/private-gpt/commit/40638a18a5713d60fec8fe52796dcce66d88258c))
## [0.5.0](https://github.com/zylon-ai/private-gpt/compare/v0.4.0...v0.5.0) (2024-04-02)
### Features
* **code:** improve concat of strings in ui ([#1785](https://github.com/zylon-ai/private-gpt/issues/1785)) ([bac818a](https://github.com/zylon-ai/private-gpt/commit/bac818add51b104cda925b8f1f7b51448e935ca1))
* **docker:** set default Docker to use Ollama ([#1812](https://github.com/zylon-ai/private-gpt/issues/1812)) ([f83abff](https://github.com/zylon-ai/private-gpt/commit/f83abff8bc955a6952c92cc7bcb8985fcec93afa))
* **docs:** Add guide Llama-CPP Linux AMD GPU support ([#1782](https://github.com/zylon-ai/private-gpt/issues/1782)) ([8a836e4](https://github.com/zylon-ai/private-gpt/commit/8a836e4651543f099c59e2bf497ab8c55a7cd2e5))
* **docs:** Feature/upgrade docs ([#1741](https://github.com/zylon-ai/private-gpt/issues/1741)) ([5725181](https://github.com/zylon-ai/private-gpt/commit/572518143ac46532382db70bed6f73b5082302c1))
* **docs:** upgrade fern ([#1596](https://github.com/zylon-ai/private-gpt/issues/1596)) ([84ad16a](https://github.com/zylon-ai/private-gpt/commit/84ad16af80191597a953248ce66e963180e8ddec))
* **ingest:** Created a faster ingestion mode - pipeline ([#1750](https://github.com/zylon-ai/private-gpt/issues/1750)) ([134fc54](https://github.com/zylon-ai/private-gpt/commit/134fc54d7d636be91680dc531f5cbe2c5892ac56))
* **llm - embed:** Add support for Azure OpenAI ([#1698](https://github.com/zylon-ai/private-gpt/issues/1698)) ([1efac6a](https://github.com/zylon-ai/private-gpt/commit/1efac6a3fe19e4d62325e2c2915cd84ea277f04f))
* **llm:** adds serveral settings for llamacpp and ollama ([#1703](https://github.com/zylon-ai/private-gpt/issues/1703)) ([02dc83e](https://github.com/zylon-ai/private-gpt/commit/02dc83e8e9f7ada181ff813f25051bbdff7b7c6b))
* **llm:** Ollama LLM-Embeddings decouple + longer keep_alive settings ([#1800](https://github.com/zylon-ai/private-gpt/issues/1800)) ([b3b0140](https://github.com/zylon-ai/private-gpt/commit/b3b0140e244e7a313bfaf4ef10eb0f7e4192710e))
* **llm:** Ollama timeout setting ([#1773](https://github.com/zylon-ai/private-gpt/issues/1773)) ([6f6c785](https://github.com/zylon-ai/private-gpt/commit/6f6c785dac2bbad37d0b67fda215784298514d39))
* **local:** tiktoken cache within repo for offline ([#1467](https://github.com/zylon-ai/private-gpt/issues/1467)) ([821bca3](https://github.com/zylon-ai/private-gpt/commit/821bca32e9ee7c909fd6488445ff6a04463bf91b))
* **nodestore:** add Postgres for the doc and index store ([#1706](https://github.com/zylon-ai/private-gpt/issues/1706)) ([68b3a34](https://github.com/zylon-ai/private-gpt/commit/68b3a34b032a08ca073a687d2058f926032495b3))
* **rag:** expose similarity_top_k and similarity_score to settings ([#1771](https://github.com/zylon-ai/private-gpt/issues/1771)) ([087cb0b](https://github.com/zylon-ai/private-gpt/commit/087cb0b7b74c3eb80f4f60b47b3a021c81272ae1))
* **RAG:** Introduce SentenceTransformer Reranker ([#1810](https://github.com/zylon-ai/private-gpt/issues/1810)) ([83adc12](https://github.com/zylon-ai/private-gpt/commit/83adc12a8ef0fa0c13a0dec084fa596445fc9075))
* **scripts:** Wipe qdrant and obtain db Stats command ([#1783](https://github.com/zylon-ai/private-gpt/issues/1783)) ([ea153fb](https://github.com/zylon-ai/private-gpt/commit/ea153fb92f1f61f64c0d04fff0048d4d00b6f8d0))
* **ui:** Add Model Information to ChatInterface label ([f0b174c](https://github.com/zylon-ai/private-gpt/commit/f0b174c097c2d5e52deae8ef88de30a0d9013a38))
* **ui:** add sources check to not repeat identical sources ([#1705](https://github.com/zylon-ai/private-gpt/issues/1705)) ([290b9fb](https://github.com/zylon-ai/private-gpt/commit/290b9fb084632216300e89bdadbfeb0380724b12))
* **UI:** Faster startup and document listing ([#1763](https://github.com/zylon-ai/private-gpt/issues/1763)) ([348df78](https://github.com/zylon-ai/private-gpt/commit/348df781b51606b2f9810bcd46f850e54192fd16))
* **ui:** maintain score order when curating sources ([#1643](https://github.com/zylon-ai/private-gpt/issues/1643)) ([410bf7a](https://github.com/zylon-ai/private-gpt/commit/410bf7a71f17e77c4aec723ab80c233b53765964))
* unify settings for vector and nodestore connections to PostgreSQL ([#1730](https://github.com/zylon-ai/private-gpt/issues/1730)) ([63de7e4](https://github.com/zylon-ai/private-gpt/commit/63de7e4930ac90dd87620225112a22ffcbbb31ee))
* wipe per storage type ([#1772](https://github.com/zylon-ai/private-gpt/issues/1772)) ([c2d6948](https://github.com/zylon-ai/private-gpt/commit/c2d694852b4696834962a42fde047b728722ad74))
### Bug Fixes
* **docs:** Minor documentation amendment ([#1739](https://github.com/zylon-ai/private-gpt/issues/1739)) ([258d02d](https://github.com/zylon-ai/private-gpt/commit/258d02d87c5cb81d6c3a6f06aa69339b670dffa9))
* Fixed docker-compose ([#1758](https://github.com/zylon-ai/private-gpt/issues/1758)) ([774e256](https://github.com/zylon-ai/private-gpt/commit/774e2560520dc31146561d09a2eb464c68593871))
* **ingest:** update script label ([#1770](https://github.com/zylon-ai/private-gpt/issues/1770)) ([7d2de5c](https://github.com/zylon-ai/private-gpt/commit/7d2de5c96fd42e339b26269b3155791311ef1d08))
* **settings:** set default tokenizer to avoid running make setup fail ([#1709](https://github.com/zylon-ai/private-gpt/issues/1709)) ([d17c34e](https://github.com/zylon-ai/private-gpt/commit/d17c34e81a84518086b93605b15032e2482377f7))
## [0.4.0](https://github.com/imartinez/privateGPT/compare/v0.3.0...v0.4.0) (2024-03-06)
### Features
* Upgrade to LlamaIndex to 0.10 ([#1663](https://github.com/imartinez/privateGPT/issues/1663)) ([45f0571](https://github.com/imartinez/privateGPT/commit/45f05711eb71ffccdedb26f37e680ced55795d44))
* **Vector:** support pgvector ([#1624](https://github.com/imartinez/privateGPT/issues/1624)) ([cd40e39](https://github.com/imartinez/privateGPT/commit/cd40e3982b780b548b9eea6438c759f1c22743a8))
## [0.3.0](https://github.com/imartinez/privateGPT/compare/v0.2.0...v0.3.0) (2024-02-16)
### Features
* add mistral + chatml prompts ([#1426](https://github.com/imartinez/privateGPT/issues/1426)) ([e326126](https://github.com/imartinez/privateGPT/commit/e326126d0d4cd7e46a79f080c442c86f6dd4d24b))
* Add stream information to generate SDKs ([#1569](https://github.com/imartinez/privateGPT/issues/1569)) ([24fae66](https://github.com/imartinez/privateGPT/commit/24fae660e6913aac6b52745fb2c2fe128ba2eb79))
* **API:** Ingest plain text ([#1417](https://github.com/imartinez/privateGPT/issues/1417)) ([6eeb95e](https://github.com/imartinez/privateGPT/commit/6eeb95ec7f17a618aaa47f5034ee5bccae02b667))
* **bulk-ingest:** Add --ignored Flag to Exclude Specific Files and Directories During Ingestion ([#1432](https://github.com/imartinez/privateGPT/issues/1432)) ([b178b51](https://github.com/imartinez/privateGPT/commit/b178b514519550e355baf0f4f3f6beb73dca7df2))
* **llm:** Add openailike llm mode ([#1447](https://github.com/imartinez/privateGPT/issues/1447)) ([2d27a9f](https://github.com/imartinez/privateGPT/commit/2d27a9f956d672cb1fe715cf0acdd35c37f378a5)), closes [#1424](https://github.com/imartinez/privateGPT/issues/1424)
* **llm:** Add support for Ollama LLM ([#1526](https://github.com/imartinez/privateGPT/issues/1526)) ([6bbec79](https://github.com/imartinez/privateGPT/commit/6bbec79583b7f28d9bea4b39c099ebef149db843))
* **settings:** Configurable context_window and tokenizer ([#1437](https://github.com/imartinez/privateGPT/issues/1437)) ([4780540](https://github.com/imartinez/privateGPT/commit/47805408703c23f0fd5cab52338142c1886b450b))
* **settings:** Update default model to TheBloke/Mistral-7B-Instruct-v0.2-GGUF ([#1415](https://github.com/imartinez/privateGPT/issues/1415)) ([8ec7cf4](https://github.com/imartinez/privateGPT/commit/8ec7cf49f40701a4f2156c48eb2fad9fe6220629))
* **ui:** make chat area stretch to fill the screen ([#1397](https://github.com/imartinez/privateGPT/issues/1397)) ([c71ae7c](https://github.com/imartinez/privateGPT/commit/c71ae7cee92463bbc5ea9c434eab9f99166e1363))
* **UI:** Select file to Query or Delete + Delete ALL ([#1612](https://github.com/imartinez/privateGPT/issues/1612)) ([aa13afd](https://github.com/imartinez/privateGPT/commit/aa13afde07122f2ddda3942f630e5cadc7e4e1ee))
### Bug Fixes
* Adding an LLM param to fix broken generator from llamacpp ([#1519](https://github.com/imartinez/privateGPT/issues/1519)) ([869233f](https://github.com/imartinez/privateGPT/commit/869233f0e4f03dc23e5fae43cf7cb55350afdee9))
* **deploy:** fix local and external dockerfiles ([fde2b94](https://github.com/imartinez/privateGPT/commit/fde2b942bc03688701ed563be6d7d597c75e4e4e))
* **docker:** docker broken copy ([#1419](https://github.com/imartinez/privateGPT/issues/1419)) ([059f358](https://github.com/imartinez/privateGPT/commit/059f35840adbc3fb93d847d6decf6da32d08670c))
* **docs:** Update quickstart doc and set version in pyproject.toml to 0.2.0 ([0a89d76](https://github.com/imartinez/privateGPT/commit/0a89d76cc5ed4371ffe8068858f23dfbb5e8cc37))
* minor bug in chat stream output - python error being serialized ([#1449](https://github.com/imartinez/privateGPT/issues/1449)) ([6191bcd](https://github.com/imartinez/privateGPT/commit/6191bcdbd6e92b6f4d5995967dc196c9348c5954))
* **settings:** correct yaml multiline string ([#1403](https://github.com/imartinez/privateGPT/issues/1403)) ([2564f8d](https://github.com/imartinez/privateGPT/commit/2564f8d2bb8c4332a6a0ab6d722a2ac15006b85f))
* **tests:** load the test settings only when running tests ([d3acd85](https://github.com/imartinez/privateGPT/commit/d3acd85fe34030f8cfd7daf50b30c534087bdf2b))
* **UI:** Updated ui.py. Frees up the CPU to not be bottlenecked. ([24fb80c](https://github.com/imartinez/privateGPT/commit/24fb80ca38f21910fe4fd81505d14960e9ed4faa))
## [0.2.0](https://github.com/imartinez/privateGPT/compare/v0.1.0...v0.2.0) (2023-12-10)
### Features
* **llm:** drop default_system_prompt ([#1385](https://github.com/imartinez/privateGPT/issues/1385)) ([a3ed14c](https://github.com/imartinez/privateGPT/commit/a3ed14c58f77351dbd5f8f2d7868d1642a44f017))
* **ui:** Allows User to Set System Prompt via "Additional Options" in Chat Interface ([#1353](https://github.com/imartinez/privateGPT/issues/1353)) ([145f3ec](https://github.com/imartinez/privateGPT/commit/145f3ec9f41c4def5abf4065a06fb0786e2d992a))
## [0.1.0](https://github.com/imartinez/privateGPT/compare/v0.0.2...v0.1.0) (2023-11-30)
### Features
* Disable Gradio Analytics ([#1165](https://github.com/imartinez/privateGPT/issues/1165)) ([6583dc8](https://github.com/imartinez/privateGPT/commit/6583dc84c082773443fc3973b1cdf8095fa3fec3))
* Drop loguru and use builtin `logging` ([#1133](https://github.com/imartinez/privateGPT/issues/1133)) ([64c5ae2](https://github.com/imartinez/privateGPT/commit/64c5ae214a9520151c9c2d52ece535867d799367))
* enable resume download for hf_hub_download ([#1249](https://github.com/imartinez/privateGPT/issues/1249)) ([4197ada](https://github.com/imartinez/privateGPT/commit/4197ada6267c822f32c1d7ba2be6e7ce145a3404))
* move torch and transformers to local group ([#1172](https://github.com/imartinez/privateGPT/issues/1172)) ([0d677e1](https://github.com/imartinez/privateGPT/commit/0d677e10b970aec222ec04837d0f08f1631b6d4a))
* Qdrant support ([#1228](https://github.com/imartinez/privateGPT/issues/1228)) ([03d1ae6](https://github.com/imartinez/privateGPT/commit/03d1ae6d70dffdd2411f0d4e92f65080fff5a6e2))
### Bug Fixes
* Docker and sagemaker setup ([#1118](https://github.com/imartinez/privateGPT/issues/1118)) ([895588b](https://github.com/imartinez/privateGPT/commit/895588b82a06c2bc71a9e22fb840c7f6442a3b5b))
* fix pytorch version to avoid wheel bug ([#1123](https://github.com/imartinez/privateGPT/issues/1123)) ([24cfddd](https://github.com/imartinez/privateGPT/commit/24cfddd60f74aadd2dade4c63f6012a2489938a1))
* Remove global state ([#1216](https://github.com/imartinez/privateGPT/issues/1216)) ([022bd71](https://github.com/imartinez/privateGPT/commit/022bd718e3dfc197027b1e24fb97e5525b186db4))
* sagemaker config and chat methods ([#1142](https://github.com/imartinez/privateGPT/issues/1142)) ([a517a58](https://github.com/imartinez/privateGPT/commit/a517a588c4927aa5c5c2a93e4f82a58f0599d251))
* typo in README.md ([#1091](https://github.com/imartinez/privateGPT/issues/1091)) ([ba23443](https://github.com/imartinez/privateGPT/commit/ba23443a70d323cd4f9a242b33fd9dce1bacd2db))
* Windows 11 failing to auto-delete tmp file ([#1260](https://github.com/imartinez/privateGPT/issues/1260)) ([0d52002](https://github.com/imartinez/privateGPT/commit/0d520026a3d5b08a9b8487be992d3095b21e710c))
* Windows permission error on ingest service tmp files ([#1280](https://github.com/imartinez/privateGPT/issues/1280)) ([f1cbff0](https://github.com/imartinez/privateGPT/commit/f1cbff0fb7059432d9e71473cbdd039032dab60d))
## [0.0.2](https://github.com/imartinez/privateGPT/compare/v0.0.1...v0.0.2) (2023-10-20)
### Bug Fixes
* chromadb max batch size ([#1087](https://github.com/imartinez/privateGPT/issues/1087)) ([f5a9bf4](https://github.com/imartinez/privateGPT/commit/f5a9bf4e374b2d4c76438cf8a97cccf222ec8e6f))
## 0.0.1 (2023-10-20)
### Miscellaneous Chores
* Initial version ([490d93f](https://github.com/imartinez/privateGPT/commit/490d93fdc1977443c92f6c42e57a1c585aa59430))

16
CITATION.cff Normal file

@@ -0,0 +1,16 @@
# This CITATION.cff file was generated with cffinit.
# Visit https://bit.ly/cffinit to generate yours today!
cff-version: 1.2.0
title: PrivateGPT
message: >-
If you use this software, please cite it using the
metadata from this file.
type: software
authors:
- name: Zylon by PrivateGPT
address: hello@zylon.ai
website: 'https://www.zylon.ai/'
repository-code: 'https://github.com/zylon-ai/private-gpt'
license: Apache-2.0
date-released: '2023-05-02'

62
Dockerfile.llamacpp-cpu Normal file

@@ -0,0 +1,62 @@
### IMPORTANT, THIS IMAGE CAN ONLY BE RUN IN LINUX DOCKER
### You will run into a segfault in mac
FROM python:3.11.6-slim-bookworm as base
# Install poetry
RUN pip install pipx
RUN python3 -m pipx ensurepath
RUN pipx install poetry==1.8.3
ENV PATH="/root/.local/bin:$PATH"
ENV PATH=".venv/bin/:$PATH"
# Dependencies to build llama-cpp
RUN apt update && apt install -y \
libopenblas-dev\
ninja-build\
build-essential\
pkg-config\
wget
# https://python-poetry.org/docs/configuration/#virtualenvsin-project
ENV POETRY_VIRTUALENVS_IN_PROJECT=true
FROM base as dependencies
WORKDIR /home/worker/app
COPY pyproject.toml poetry.lock ./
ARG POETRY_EXTRAS="ui embeddings-huggingface llms-llama-cpp vector-stores-qdrant"
RUN poetry install --no-root --extras "${POETRY_EXTRAS}"
FROM base as app
ENV PYTHONUNBUFFERED=1
ENV PORT=8080
ENV APP_ENV=prod
ENV PYTHONPATH="$PYTHONPATH:/home/worker/app/private_gpt/"
EXPOSE 8080
# Prepare a non-root user
# More info about how to configure UIDs and GIDs in Docker:
# https://github.com/systemd/systemd/blob/main/docs/UIDS-GIDS.md
# Define the User ID (UID) for the non-root user
# UID 100 is chosen to avoid conflicts with existing system users
ARG UID=100
# Define the Group ID (GID) for the non-root user
# GID 65534 is often used for the 'nogroup' or 'nobody' group
ARG GID=65534
RUN adduser --system --gid ${GID} --uid ${UID} --home /home/worker worker
WORKDIR /home/worker/app
RUN chown worker /home/worker/app
RUN mkdir local_data && chown worker local_data
RUN mkdir models && chown worker models
COPY --chown=worker --from=dependencies /home/worker/app/.venv/ .venv
COPY --chown=worker private_gpt/ private_gpt
COPY --chown=worker *.yaml ./
COPY --chown=worker scripts/ scripts
USER worker
ENTRYPOINT python -m private_gpt
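A build-and-run sketch for this image; the tag name is arbitrary, and it is assumed a model has already been downloaded into `./models` (e.g. via `scripts/setup`), since the image does not bundle one:
```shell
docker build -f Dockerfile.llamacpp-cpu -t private-gpt:llamacpp-cpu .
# PGPT_PROFILES=local mirrors the llamacpp-cpu service in the docker-compose file below
docker run -e PGPT_PROFILES=local -p 8080:8080 \
  -v "$(pwd)/local_data:/home/worker/app/local_data" \
  -v "$(pwd)/models:/home/worker/app/models" \
  private-gpt:llamacpp-cpu
```
The docker-compose file later in this listing wires the same image up through the `llamacpp-cpu` profile, which is the more convenient path.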

51
Dockerfile.ollama Normal file

@@ -0,0 +1,51 @@
FROM python:3.11.6-slim-bookworm as base
# Install poetry
RUN pip install pipx
RUN python3 -m pipx ensurepath
RUN pipx install poetry==1.8.3
ENV PATH="/root/.local/bin:$PATH"
ENV PATH=".venv/bin/:$PATH"
# https://python-poetry.org/docs/configuration/#virtualenvsin-project
ENV POETRY_VIRTUALENVS_IN_PROJECT=true
FROM base as dependencies
WORKDIR /home/worker/app
COPY pyproject.toml poetry.lock ./
ARG POETRY_EXTRAS="ui vector-stores-qdrant llms-ollama embeddings-ollama"
RUN poetry install --no-root --extras "${POETRY_EXTRAS}"
FROM base as app
ENV PYTHONUNBUFFERED=1
ENV PORT=8080
ENV APP_ENV=prod
ENV PYTHONPATH="$PYTHONPATH:/home/worker/app/private_gpt/"
EXPOSE 8080
# Prepare a non-root user
# More info about how to configure UIDs and GIDs in Docker:
# https://github.com/systemd/systemd/blob/main/docs/UIDS-GIDS.md
# Define the User ID (UID) for the non-root user
# UID 100 is chosen to avoid conflicts with existing system users
ARG UID=100
# Define the Group ID (GID) for the non-root user
# GID 65534 is often used for the 'nogroup' or 'nobody' group
ARG GID=65534
RUN adduser --system --gid ${GID} --uid ${UID} --home /home/worker worker
WORKDIR /home/worker/app
RUN chown worker /home/worker/app
RUN mkdir local_data && chown worker local_data
RUN mkdir models && chown worker models
COPY --chown=worker --from=dependencies /home/worker/app/.venv/ .venv
COPY --chown=worker private_gpt/ private_gpt
COPY --chown=worker *.yaml .
COPY --chown=worker scripts/ scripts
USER worker
ENTRYPOINT python -m private_gpt

78
Makefile Normal file

@@ -0,0 +1,78 @@
# Any args passed to the make script, use with $(call args, default_value)
args = `arg="$(filter-out $@,$(MAKECMDGOALS))" && echo $${arg:-${1}}`
########################################################################################################################
# Quality checks
########################################################################################################################
test:
PYTHONPATH=. poetry run pytest tests
test-coverage:
PYTHONPATH=. poetry run pytest tests --cov private_gpt --cov-report term --cov-report=html --cov-report xml --junit-xml=tests-results.xml
black:
poetry run black . --check
ruff:
poetry run ruff check private_gpt tests
format:
poetry run black .
poetry run ruff check private_gpt tests --fix
mypy:
poetry run mypy private_gpt
check:
make format
make mypy
########################################################################################################################
# Run
########################################################################################################################
run:
poetry run python -m private_gpt
dev-windows:
(set PGPT_PROFILES=local & poetry run python -m uvicorn private_gpt.main:app --reload --port 8001)
dev:
PYTHONUNBUFFERED=1 PGPT_PROFILES=local poetry run python -m uvicorn private_gpt.main:app --reload --port 8001
########################################################################################################################
# Misc
########################################################################################################################
api-docs:
PGPT_PROFILES=mock poetry run python scripts/extract_openapi.py private_gpt.main:app --out fern/openapi/openapi.json
ingest:
@poetry run python scripts/ingest_folder.py $(call args)
stats:
poetry run python scripts/utils.py stats
wipe:
poetry run python scripts/utils.py wipe
setup:
poetry run python scripts/setup
list:
@echo "Available commands:"
@echo " test : Run tests using pytest"
@echo " test-coverage : Run tests with coverage report"
@echo " black : Check code format with black"
@echo " ruff : Check code with ruff"
@echo " format : Format code with black and ruff"
@echo " mypy : Run mypy for type checking"
@echo " check : Run format and mypy commands"
@echo " run : Run the application"
@echo " dev-windows : Run the application in development mode on Windows"
@echo " dev : Run the application in development mode"
@echo " api-docs : Generate API documentation"
@echo " ingest : Ingest data using specified script"
@echo " wipe : Wipe data using specified script"
@echo " setup : Setup the application"

290
README.md

@@ -1,152 +1,158 @@
# privateGPT
Ask questions about your documents without an internet connection, using the power of LLMs. 100% private: no data leaves your execution environment at any point. You can ingest documents and ask questions completely offline!
# 🔒 PrivateGPT 📑
> :ear: **Need help applying PrivateGPT to your specific use case?** [Let us know more about it](https://forms.gle/4cSDmH13RZBHV9at7) and we'll try to help! We are refining PrivateGPT through your feedback.
[![Tests](https://github.com/zylon-ai/private-gpt/actions/workflows/tests.yml/badge.svg)](https://github.com/zylon-ai/private-gpt/actions/workflows/tests.yml?query=branch%3Amain)
[![Website](https://img.shields.io/website?up_message=check%20it&down_message=down&url=https%3A%2F%2Fdocs.privategpt.dev%2F&label=Documentation)](https://docs.privategpt.dev/)
[![Discord](https://img.shields.io/discord/1164200432894234644?logo=discord&label=PrivateGPT)](https://discord.gg/bK6mRVpErU)
[![X (formerly Twitter) Follow](https://img.shields.io/twitter/follow/ZylonPrivateGPT)](https://twitter.com/ZylonPrivateGPT)
<img width="902" alt="demo" src="https://user-images.githubusercontent.com/721666/236942256-985801c9-25b9-48ef-80be-3acbb4575164.png">
![Gradio UI](/fern/docs/assets/ui.png?raw=true)
Built with [LangChain](https://github.com/hwchase17/langchain), [LlamaIndex](https://www.llamaindex.ai/), [GPT4All](https://github.com/nomic-ai/gpt4all), [LlamaCpp](https://github.com/ggerganov/llama.cpp), [Chroma](https://www.trychroma.com/) and [SentenceTransformers](https://www.sbert.net/).
PrivateGPT is a production-ready AI project that allows you to ask questions about your documents using the power
of Large Language Models (LLMs), even in scenarios without an Internet connection. 100% private, no data leaves your
execution environment at any point.
# Environment Setup
In order to set your environment up to run the code here, first install all requirements:
>[!TIP]
> If you are looking for an **enterprise-ready, fully private AI workspace**
> check out [Zylon's website](https://zylon.ai) or [request a demo](https://cal.com/zylon/demo?source=pgpt-readme).
> Crafted by the team behind PrivateGPT, Zylon is a best-in-class AI collaborative
> workspace that can be easily deployed on-premise (data center, bare metal...) or in your private cloud (AWS, GCP, Azure...).
```shell
pip3 install -r requirements.txt
The project provides an API offering all the primitives required to build private, context-aware AI applications.
It follows and extends the [OpenAI API standard](https://openai.com/blog/openai-api),
and supports both normal and streaming responses.
The API is divided into two logical blocks:
**High-level API**, which abstracts all the complexity of a RAG (Retrieval Augmented Generation)
pipeline implementation:
- Ingestion of documents: internally managing document parsing,
splitting, metadata extraction, embedding generation and storage.
- Chat & Completions using context from ingested documents:
abstracting the retrieval of context, the prompt engineering and the response generation.
**Low-level API**, which allows advanced users to implement their own complex pipelines:
- Embeddings generation: based on a piece of text.
- Contextual chunks retrieval: given a query, returns the most relevant chunks of text from the ingested documents.
In addition to this, a working [Gradio UI](https://www.gradio.app/)
client is provided to test the API, together with a set of useful tools such as a bulk model
download script, an ingestion script, a documents folder watcher, etc.
## 🎞️ Overview
>[!WARNING]
> This README is not updated as frequently as the [documentation](https://docs.privategpt.dev/).
> Please check it out for the latest updates!
### Motivation behind PrivateGPT
Generative AI is a game changer for our society, but adoption in companies of all sizes and data-sensitive
domains like healthcare or legal is limited by a clear concern: **privacy**.
Not being able to ensure that your data is fully under your control when using third-party AI tools
is a risk those industries cannot take.
### Primordial version
The first version of PrivateGPT was launched in May 2023 as a novel approach to address the privacy
concerns by using LLMs in a complete offline way.
That version, which rapidly became a go-to project for privacy-sensitive setups and served as the seed
for thousands of local-focused generative AI projects, was the foundation of what PrivateGPT is becoming today;
it remains a simpler, more educational implementation for understanding the basic concepts required
to build a fully local (and therefore private) chatGPT-like tool.
If you want to keep experimenting with it, we have saved it in the
[primordial branch](https://github.com/zylon-ai/private-gpt/tree/primordial) of the project.
> It is strongly recommended to do a clean clone and install of this new version of
PrivateGPT if you come from the previous, primordial version.
### Present and Future of PrivateGPT
PrivateGPT is now evolving towards becoming a gateway to generative AI models and primitives, including
completions, document ingestion, RAG pipelines and other low-level building blocks.
We want to make it easier for any developer to build AI applications and experiences, as well as to provide
a suitably extensive architecture for the community to keep contributing.
Stay tuned to our [releases](https://github.com/zylon-ai/private-gpt/releases) to check out all the new features and changes included.
## 📄 Documentation
Full documentation on installation, dependencies, configuration, running the server, deployment options,
ingesting local documents, API details and UI features can be found here: https://docs.privategpt.dev/
## 🧩 Architecture
Conceptually, PrivateGPT is an API that wraps a RAG pipeline and exposes its
primitives.
* The API is built using [FastAPI](https://fastapi.tiangolo.com/) and follows
[OpenAI's API scheme](https://platform.openai.com/docs/api-reference).
* The RAG pipeline is based on [LlamaIndex](https://www.llamaindex.ai/).
The design of PrivateGPT makes it easy to extend and adapt both the API and the
RAG implementation. Some key architectural decisions are:
* Dependency Injection, decoupling the different components and layers.
* Usage of LlamaIndex abstractions such as `LLM`, `BaseEmbedding` or `VectorStore`,
making it straightforward to swap in different implementations of those abstractions.
* Simplicity, adding as few layers and new abstractions as possible.
* Ready to use, providing a full implementation of the API and RAG
pipeline.
Main building blocks:
* APIs are defined in `private_gpt:server:<api>`. Each package contains an
`<api>_router.py` (FastAPI layer) and an `<api>_service.py` (the
service implementation). Each *Service* uses LlamaIndex base abstractions instead
of specific implementations,
decoupling the actual implementation from its usage.
* Components are placed in
`private_gpt:components:<component>`. Each *Component* is in charge of providing
actual implementations to the base abstractions used in the Services - for example
`LLMComponent` is in charge of providing an actual implementation of an `LLM`
(for example `LlamaCPP` or `OpenAI`).
## 💡 Contributing
Contributions are welcome! To ensure code quality we have enabled several formatting and
typing checks; just run `make check` before committing to make sure your code is OK.
Remember to test your code! You'll find a tests folder with helpers, and you can run
the tests with the `make test` command.
Don't know what to contribute? Here is the public
[Project Board](https://github.com/users/imartinez/projects/3) with several ideas.
Head over to the #contributors channel on Discord and ask for write permissions on that GitHub project.
## 💬 Community
Join the conversation around PrivateGPT on our:
- [Twitter (aka X)](https://twitter.com/PrivateGPT_AI)
- [Discord](https://discord.gg/bK6mRVpErU)
## 📖 Citation
If you use PrivateGPT in a paper, check out the [Citation file](CITATION.cff) for the correct citation.
You can also use the "Cite this repository" button in this repo to get the citation in different formats.
Here are a couple of examples:
#### BibTeX
```bibtex
@software{Zylon_PrivateGPT_2023,
author = {Zylon by PrivateGPT},
license = {Apache-2.0},
month = may,
title = {{PrivateGPT}},
url = {https://github.com/zylon-ai/private-gpt},
year = {2023}
}
```
*Alternative requirements installation with poetry*
1. Install [poetry](https://python-poetry.org/docs/#installation)
2. Run this commands
```shell
cd privateGPT
poetry install
poetry shell
#### APA
```
Zylon by PrivateGPT (2023). PrivateGPT [Computer software]. https://github.com/zylon-ai/private-gpt
```
Then, download the LLM model and place it in a directory of your choice:
- LLM: default to [ggml-gpt4all-j-v1.3-groovy.bin](https://gpt4all.io/models/ggml-gpt4all-j-v1.3-groovy.bin). If you prefer a different GPT4All-J compatible model, just download it and reference it in your `.env` file.
## 🤗 Partners & Supporters
PrivateGPT is actively supported by the teams behind:
* [Qdrant](https://qdrant.tech/), providing the default vector database
* [Fern](https://buildwithfern.com/), providing Documentation and SDKs
* [LlamaIndex](https://www.llamaindex.ai/), providing the base RAG framework and abstractions
Copy the `example.env` template into `.env`
```shell
cp example.env .env
```
and edit the variables appropriately in the `.env` file.
```
MODEL_TYPE: supports LlamaCpp or GPT4All
PERSIST_DIRECTORY: is the folder you want your vectorstore in
MODEL_PATH: Path to your GPT4All or LlamaCpp supported LLM
MODEL_N_CTX: Maximum token limit for the LLM model
MODEL_N_BATCH: Number of tokens in the prompt that are fed into the model at a time. Optimal value differs a lot depending on the model (8 works well for GPT4All, and 1024 is better for LlamaCpp)
EMBEDDINGS_MODEL_NAME: SentenceTransformers embeddings model name (see https://www.sbert.net/docs/pretrained_models.html)
TARGET_SOURCE_CHUNKS: The amount of chunks (sources) that will be used to answer a question
```
Note: because of the way `langchain` loads the `SentenceTransformers` embeddings, the first time you run the script it will require an internet connection to download the embeddings model itself.
## Test dataset
This repo uses a [state of the union transcript](https://github.com/imartinez/privateGPT/blob/main/source_documents/state_of_the_union.txt) as an example.
## Instructions for ingesting your own dataset
Put any and all your files into the `source_documents` directory
The supported extensions are:
- `.csv`: CSV,
- `.docx`: Word Document,
- `.doc`: Word Document,
- `.enex`: EverNote,
- `.eml`: Email,
- `.epub`: EPub,
- `.html`: HTML File,
- `.md`: Markdown,
- `.msg`: Outlook Message,
- `.odt`: Open Document Text,
- `.pdf`: Portable Document Format (PDF),
- `.pptx` : PowerPoint Document,
- `.ppt` : PowerPoint Document,
- `.txt`: Text file (UTF-8),
Run the following command to ingest all the data.
```shell
python ingest.py
```
Output should look like this:
```shell
Creating new vectorstore
Loading documents from source_documents
Loading new documents: 100%|██████████████████████| 1/1 [00:01<00:00, 1.73s/it]
Loaded 1 new documents from source_documents
Split into 90 chunks of text (max. 500 tokens each)
Creating embeddings. May take some minutes...
Using embedded DuckDB with persistence: data will be stored in: db
Ingestion complete! You can now run privateGPT.py to query your documents
```
It will create a `db` folder containing the local vectorstore. Ingestion takes 20-30 seconds per document, depending on its size.
You can ingest as many documents as you want, and all will be accumulated in the local embeddings database.
If you want to start from an empty database, delete the `db` folder.
Note: during the ingest process no data leaves your local environment. You could ingest without an internet connection, except for the first time you run the ingest script, when the embeddings model is downloaded.
## Ask questions to your documents, locally!
In order to ask a question, run a command like:
```shell
python privateGPT.py
```
And wait for the script to require your input.
```plaintext
> Enter a query:
```
Hit enter. You'll need to wait 20-30 seconds (depending on your machine) while the LLM consumes the prompt and prepares the answer. Once done, it will print the answer and the 4 sources it used as context from your documents; you can then ask another question without re-running the script; simply wait for the prompt again.
Note: you could turn off your internet connection, and the script inference would still work. No data gets out of your local environment.
Type `exit` to finish the script.
### CLI
The script also supports optional command-line arguments to modify its behavior. You can see a full list of these arguments by running the command ```python privateGPT.py --help``` in your terminal.
# How does it work?
By selecting the right local models and leveraging the power of `LangChain`, you can run the entire pipeline locally, without any data leaving your environment, and with reasonable performance.
- `ingest.py` uses `LangChain` tools to parse the document and create embeddings locally using `HuggingFaceEmbeddings` (`SentenceTransformers`). It then stores the result in a local vector database using `Chroma` vector store.
- `privateGPT.py` uses a local LLM based on `GPT4All-J` or `LlamaCpp` to understand questions and create answers. The context for the answers is extracted from the local vector store using a similarity search to locate the right piece of context from the docs.
- `GPT4All-J` wrapper was introduced in LangChain 0.0.162.
# System Requirements
## Python Version
To use this software, you must have Python 3.10 or later installed. Earlier versions of Python are not supported.
## C++ Compiler
If you encounter an error while building a wheel during the `pip install` process, you may need to install a C++ compiler on your computer.
### For Windows 10/11
To install a C++ compiler on Windows 10/11, follow these steps:
1. Install Visual Studio 2022.
2. Make sure the following components are selected:
* Universal Windows Platform development
* C++ CMake tools for Windows
3. Download the MinGW installer from the [MinGW website](https://sourceforge.net/projects/mingw/).
4. Run the installer and select the `gcc` component.
## Mac Running Intel
When running a Mac with Intel hardware (not M1), you may run into _clang: error: the clang compiler does not support '-march=native'_ during pip install.
If so, set your archflags during pip install, e.g.: _ARCHFLAGS="-arch x86_64" pip3 install -r requirements.txt_
# Disclaimer
This is a test project to validate the feasibility of a fully private solution for question answering using LLMs and Vector embeddings. It is not production ready, and it is not meant to be used in production. The model selection is not optimized for performance, but for privacy; however, it is possible to use different models and vectorstores to improve performance.
This project has been strongly influenced and supported by other amazing projects like
[LangChain](https://github.com/hwchase17/langchain),
[GPT4All](https://github.com/nomic-ai/gpt4all),
[LlamaCpp](https://github.com/ggerganov/llama.cpp),
[Chroma](https://www.trychroma.com/)
and [SentenceTransformers](https://www.sbert.net/).


@@ -1,16 +0,0 @@
import os
from dotenv import load_dotenv
from chromadb.config import Settings
load_dotenv()
# Define the folder for storing database
PERSIST_DIRECTORY = os.environ.get('PERSIST_DIRECTORY')
if PERSIST_DIRECTORY is None:
raise Exception("Please set the PERSIST_DIRECTORY environment variable")
# Define the Chroma settings
CHROMA_SETTINGS = Settings(
persist_directory=PERSIST_DIRECTORY,
anonymized_telemetry=False
)

102
docker-compose.yaml Normal file

@@ -0,0 +1,102 @@
services:
#-----------------------------------
#---- Private-GPT services ---------
#-----------------------------------
# Private-GPT service for the Ollama CPU and GPU modes
# This service builds from an external Dockerfile and runs the Ollama mode.
private-gpt-ollama:
image: ${PGPT_IMAGE:-zylonai/private-gpt}:${PGPT_TAG:-0.6.1}-ollama
build:
context: .
dockerfile: Dockerfile.ollama
volumes:
- ./local_data/:/home/worker/app/local_data
ports:
- "8001:8001"
environment:
PORT: 8001
PGPT_PROFILES: docker
PGPT_MODE: ollama
PGPT_EMBED_MODE: ollama
PGPT_OLLAMA_API_BASE: http://ollama:11434
HF_TOKEN: ${HF_TOKEN:-}
profiles:
- ""
- ollama-cpu
- ollama-cuda
- ollama-api
# Private-GPT service for the local mode
# This service builds from a local Dockerfile and runs the application in local mode.
private-gpt-llamacpp-cpu:
image: ${PGPT_IMAGE:-zylonai/private-gpt}:${PGPT_TAG:-0.6.1}-llamacpp-cpu
build:
context: .
dockerfile: Dockerfile.llamacpp-cpu
volumes:
- ./local_data/:/home/worker/app/local_data
- ./models/:/home/worker/app/models
entrypoint: sh -c ".venv/bin/python scripts/setup && .venv/bin/python -m private_gpt"
ports:
- "8001:8001"
environment:
PORT: 8001
PGPT_PROFILES: local
HF_TOKEN: ${HF_TOKEN}
profiles:
- llamacpp-cpu
#-----------------------------------
#---- Ollama services --------------
#-----------------------------------
# Traefik reverse proxy for the Ollama service
# This will route requests to the Ollama service based on the profile.
ollama:
image: traefik:v2.10
ports:
- "11435:11434"
- "8081:8080"
command:
- "--providers.file.filename=/etc/router.yml"
- "--log.level=ERROR"
- "--api.insecure=true"
- "--providers.docker=true"
- "--providers.docker.exposedbydefault=false"
- "--entrypoints.web.address=:11434"
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- ./.docker/router.yml:/etc/router.yml:ro
extra_hosts:
- "host.docker.internal:host-gateway"
profiles:
- ""
- ollama-cpu
- ollama-cuda
- ollama-api
# Ollama service for the CPU mode
ollama-cpu:
image: ollama/ollama:latest
volumes:
- ./models:/root/.ollama
profiles:
- ""
- ollama
# Ollama service for the CUDA mode
ollama-cuda:
image: ollama/ollama:latest
volumes:
- ./models:/root/.ollama
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: 1
capabilities: [gpu]
profiles:
- ollama-cuda
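A usage sketch (not part of the file above): with Docker Compose v2, the `profiles` declared in this file are selected with the `--profile` flag, which starts every service tagged with that profile. The profile names below come straight from the file; `--build` is only needed if you want to rebuild the images locally instead of pulling them.

```bash
# Start the Ollama-backed stack on CPU
docker compose --profile ollama-cpu up --build

# Or run the fully local llama.cpp variant instead
docker compose --profile llamacpp-cpu up --build
```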


@@ -1,7 +0,0 @@
PERSIST_DIRECTORY=db
MODEL_TYPE=GPT4All
MODEL_PATH=models/ggml-gpt4all-j-v1.3-groovy.bin
EMBEDDINGS_MODEL_NAME=all-MiniLM-L6-v2
MODEL_N_CTX=1000
MODEL_N_BATCH=8
TARGET_SOURCE_CHUNKS=4

fern/README.md Normal file

@@ -0,0 +1,39 @@
# Documentation of PrivateGPT
The documentation of this project is being rendered thanks to [fern](https://github.com/fern-api/fern).
Fern transforms your `.md` and `.mdx` files into a static website: your documentation.
The configuration of your documentation is done in the `./docs.yml` file.
There, you can configure the navbar, tabs, sections and pages being rendered.
The documentation of fern (and the syntax of its configuration `docs.yml`) is
available at [docs.buildwithfern.com](https://docs.buildwithfern.com/).
## How to run fern
**You cannot render your documentation locally without fern credentials.**
To see how your documentation looks, you **have to** use the CI/CD of this
repository (by opening a PR, a CI/CD job will be executed, and a preview of
your PR's documentation will be deployed to Vercel automatically, through fern).
The only thing you can do locally is to run `fern check`, which checks the syntax of
your `docs.yml` file.
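For example (assuming you run it from the folder containing `docs.yml`):

```bash
# Validate the syntax of docs.yml locally (no Fern credentials required)
fern check
```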
## How to add a new page
Add in the `docs.yml` a new `page`, with the following syntax:
```yml
navigation:
  # ...
  - tab: my-existing-tab
    layout:
      # ...
      - section: My Existing Section
        contents:
          # ...
          - page: My new page display name
            # The path of the page, relative to `fern/`
            path: ./docs/pages/my-existing-tab/new-page-content.mdx
```

fern/docs.yml Normal file

@@ -0,0 +1,129 @@
# Main Fern configuration file
instances:
  - url: privategpt.docs.buildwithfern.com
    custom-domain: docs.privategpt.dev

title: PrivateGPT | Docs

# The tabs definition, in the top left corner
tabs:
  overview:
    display-name: Overview
    icon: "fa-solid fa-home"
  quickstart:
    display-name: Quickstart
    icon: "fa-solid fa-rocket"
  installation:
    display-name: Installation
    icon: "fa-solid fa-download"
  manual:
    display-name: Manual
    icon: "fa-solid fa-book"
  recipes:
    display-name: Recipes
    icon: "fa-solid fa-flask"
  api-reference:
    display-name: API Reference
    icon: "fa-solid fa-file-contract"

# Definition of tabs contents, will be displayed on the left side of the page, below all tabs
navigation:
  # The default tab
  - tab: overview
    layout:
      - section: Welcome
        contents:
          - page: Introduction
            path: ./docs/pages/overview/welcome.mdx
  - tab: quickstart
    layout:
      - section: Getting started
        contents:
          - page: Quickstart
            path: ./docs/pages/quickstart/quickstart.mdx
  # How to install PrivateGPT, with FAQ and troubleshooting
  - tab: installation
    layout:
      - section: Getting started
        contents:
          - page: Main Concepts
            path: ./docs/pages/installation/concepts.mdx
          - page: Installation
            path: ./docs/pages/installation/installation.mdx
          - page: Troubleshooting
            path: ./docs/pages/installation/troubleshooting.mdx
  # Manual of PrivateGPT: how to use it and configure it
  - tab: manual
    layout:
      - section: General configuration
        contents:
          - page: Configuration
            path: ./docs/pages/manual/settings.mdx
      - section: Document management
        contents:
          - page: Ingestion
            path: ./docs/pages/manual/ingestion.mdx
          - page: Deletion
            path: ./docs/pages/manual/ingestion-reset.mdx
      - section: Storage
        contents:
          - page: Vector Stores
            path: ./docs/pages/manual/vectordb.mdx
          - page: Node Stores
            path: ./docs/pages/manual/nodestore.mdx
      - section: Advanced Setup
        contents:
          - page: LLM Backends
            path: ./docs/pages/manual/llms.mdx
          - page: Reranking
            path: ./docs/pages/manual/reranker.mdx
      - section: User Interface
        contents:
          - page: Gradio Manual
            path: ./docs/pages/ui/gradio.mdx
          - page: Alternatives
            path: ./docs/pages/ui/alternatives.mdx
  - tab: recipes
    layout:
      - section: Getting started
        contents:
          - page: Quickstart
            path: ./docs/pages/recipes/quickstart.mdx
      - section: General use cases
        contents:
          - page: Summarize
            path: ./docs/pages/recipes/summarize.mdx
  # More advanced usage of PrivateGPT, by API
  - tab: api-reference
    layout:
      - section: Overview
        contents:
          - page: API Reference overview
            path: ./docs/pages/api-reference/api-reference.mdx
          - page: SDKs
            path: ./docs/pages/api-reference/sdks.mdx
      - api: API Reference

# Definition of the navbar, will be displayed in the top right corner.
# `type:primary` is always displayed at the most right side of the navbar
navbar-links:
  - type: secondary
    text: Contact us
    url: "mailto:hello@zylon.ai"
  - type: github
    value: "https://github.com/zylon-ai/private-gpt"
  - type: primary
    text: Join the Discord
    url: https://discord.com/invite/bK6mRVpErU

colors:
  accentPrimary:
    dark: "#C6BBFF"
    light: "#756E98"

logo:
  dark: ./docs/assets/logo_light.png
  light: ./docs/assets/logo_dark.png
  height: 50

favicon: ./docs/assets/favicon.ico

Binary image assets added under `fern/docs/assets/` (including `fern/docs/assets/ui.png`); binary files not shown.


@@ -0,0 +1,14 @@
# API Reference
The API is divided into two logical blocks:
1. High-level API, abstracting all the complexity of a RAG (Retrieval Augmented Generation) pipeline implementation:
- Ingestion of documents: internally managing document parsing, splitting, metadata extraction,
embedding generation and storage.
- Chat & Completions using context from ingested documents: abstracting the retrieval of context, the prompt
engineering and the response generation.
2. Low-level API, allowing advanced users to implement their own complex pipelines:
- Embeddings generation: based on a piece of text.
- Contextual chunks retrieval: given a query, returns the most relevant chunks of text from the ingested
documents.


@@ -0,0 +1,38 @@
We use [Fern](https://www.buildwithfern.com) to offer API clients for Node.js, Python, Go, and Java.
We recommend using these clients to interact with our endpoints.
The clients are kept up to date automatically, so we encourage you to use the latest version.
## SDKs
*Coming soon!*
<Cards>
<Card
title="TypeScript"
icon="fa-brands fa-node"
href="https://github.com/zylon-ai/privategpt-ts"
/>
<Card
title="Python"
icon="fa-brands fa-python"
href="https://github.com/zylon-ai/pgpt-python"
/>
<br />
</Cards>
<br />
<Cards>
<Card
title="Java - WIP"
icon="fa-brands fa-java"
href="https://github.com/zylon-ai/private-gpt-java"
/>
<Card
title="Go - WIP"
icon="fa-brands fa-golang"
href="https://github.com/zylon-ai/private-gpt-go"
/>
</Cards>
<br />


@@ -0,0 +1,67 @@
PrivateGPT is a service that wraps a set of AI RAG primitives in a comprehensive set of APIs providing a private, secure, customizable and easy to use GenAI development framework.
It uses FastAPI and LLamaIndex as its core frameworks. Those can be customized by changing the codebase itself.
It supports a variety of LLM providers, embeddings providers, and vector stores, both local and remote. Those can be easily changed without changing the codebase.
# Different Setups support
## Setup configurations available
You get to decide the setup for these 3 main components:
- **LLM**: the large language model provider used for inference. It can be local, or remote, or even OpenAI.
- **Embeddings**: the embeddings provider used to encode the input, the documents and the users' queries. Same as the LLM, it can be local, or remote, or even OpenAI.
- **Vector store**: the store used to index and retrieve the documents.
There is an extra component that can be enabled or disabled: the UI. It is a Gradio UI that allows you to interact with the API in a more user-friendly way.
<Callout intent = "warning">
A working **Gradio UI client** is provided to test the API, together with a set of useful tools such as bulk
model download script, ingestion script, documents folder watch, etc. Please refer to the [UI alternatives](/manual/user-interface/alternatives) page for more UI alternatives.
</Callout>
### Setups and Dependencies
Your setup will be the combination of the different options available. You'll find recommended setups in the [installation](./installation) section.
PrivateGPT uses poetry to manage its dependencies. You can install the dependencies for the different setups by running `poetry install --extras "<extra1> <extra2>..."`.
Extras are the different options available for each component. For example, to install the dependencies for a local setup with the UI, Qdrant as the vector database, Ollama as the LLM and local embeddings, you would run:
```bash
poetry install --extras "ui vector-stores-qdrant llms-ollama embeddings-ollama"
```
Refer to the [installation](./installation) section for more details.
### Setups and Configuration
PrivateGPT uses yaml to define its configuration in files named `settings-<profile>.yaml`.
Different configuration files can be created in the root directory of the project.
PrivateGPT will load the configuration at startup from the profile specified in the `PGPT_PROFILES` environment variable.
For example, running:
```bash
PGPT_PROFILES=ollama make run
```
will load the configuration from `settings.yaml` and `settings-ollama.yaml`.
- `settings.yaml` is always loaded and contains the default configuration.
- `settings-ollama.yaml` is loaded if the `ollama` profile is specified in the `PGPT_PROFILES` environment variable. It can override configuration from the default `settings.yaml`
## About Fully Local Setups
In order to run PrivateGPT in a fully local setup, you will need to run the LLM, Embeddings and Vector Store locally.
### LLM
For the local LLM there are two options:
* (Recommended) You can use the 'ollama' option in PrivateGPT, which will connect to your local Ollama instance. Ollama greatly simplifies the installation of local LLMs.
* You can use the 'llms-llama-cpp' option in PrivateGPT, which will use LlamaCPP. It works great on Mac with Metal most of the time (it leverages the Metal GPU), but it can be tricky on certain Linux and Windows distributions, depending on the GPU. In the installation document you'll find guides and troubleshooting.
In order for the LlamaCPP-powered LLM to work (the second option), you need to download the LLM model to the `models` folder. You can do so by running the `setup` script:
```bash
poetry run python scripts/setup
```
### Embeddings
For local Embeddings there are two options:
* (Recommended) You can use the 'ollama' option in PrivateGPT, which will connect to your local Ollama instance. Ollama greatly simplifies the installation of local models.
* You can use the 'embeddings-huggingface' option in PrivateGPT, which will use HuggingFace.
In order for HuggingFace embeddings to work (the second option), you need to download the embeddings model to the `models` folder. You can do so by running the `setup` script:
```bash
poetry run python scripts/setup
```
### Vector stores
The vector stores supported (Qdrant, Milvus, ChromaDB and Postgres) run locally by default.


@@ -0,0 +1,431 @@
It is important that you review the [Main Concepts](../concepts) section to understand the different components of PrivateGPT and how they interact with each other.
## Base requirements to run PrivateGPT
### 1. Clone the PrivateGPT Repository
Clone the repository and navigate to it:
```bash
git clone https://github.com/zylon-ai/private-gpt
cd private-gpt
```
### 2. Install Python 3.11
If you do not have Python 3.11 installed, install it using a Python version manager like `pyenv`. Earlier Python versions are not supported.
#### macOS/Linux
Install and set Python 3.11 using [pyenv](https://github.com/pyenv/pyenv):
```bash
pyenv install 3.11
pyenv local 3.11
```
#### Windows
Install and set Python 3.11 using [pyenv-win](https://github.com/pyenv-win/pyenv-win):
```bash
pyenv install 3.11
pyenv local 3.11
```
### 3. Install `Poetry`
Install [Poetry](https://python-poetry.org/docs/#installing-with-the-official-installer) for dependency management:
Follow the instructions on the official Poetry website to install it.
<Callout intent="warning">
A bug exists in Poetry versions 1.7.0 and earlier. We strongly recommend upgrading to a tested version.
To upgrade Poetry to the latest tested version, run `poetry self update 1.8.3` after installing it.
</Callout>
### 4. Optional: Install `make`
To run various scripts, you need to install `make`. Follow the instructions for your operating system:
#### macOS
(Using Homebrew):
```bash
brew install make
```
#### Windows
(Using Chocolatey):
```bash
choco install make
```
## Install and Run Your Desired Setup
PrivateGPT allows customization of the setup, from fully local to cloud-based, by deciding the modules to use. To install only the required dependencies, PrivateGPT offers different `extras` that can be combined during the installation process:
```bash
poetry install --extras "<extra1> <extra2>..."
```
Where `<extra>` can be any of the following options described below.
### Available Modules
You need to choose one option per category (LLM, Embeddings, Vector Stores, UI). Below are the tables listing the available options for each category.
#### LLM
| **Option** | **Description** | **Extra** |
|--------------|------------------------------------------------------------------------|---------------------|
| **ollama** | Adds support for Ollama LLM, requires Ollama running locally | llms-ollama |
| llama-cpp | Adds support for local LLM using LlamaCPP | llms-llama-cpp |
| sagemaker | Adds support for Amazon Sagemaker LLM, requires Sagemaker endpoints | llms-sagemaker |
| openai | Adds support for OpenAI LLM, requires OpenAI API key | llms-openai |
| openailike | Adds support for 3rd party LLM providers compatible with OpenAI's API | llms-openai-like |
| azopenai | Adds support for Azure OpenAI LLM, requires Azure endpoints | llms-azopenai |
| gemini | Adds support for Gemini LLM, requires Gemini API key | llms-gemini |
#### Embeddings
| **Option** | **Description** | **Extra** |
|------------------|--------------------------------------------------------------------------------|-------------------------|
| **ollama** | Adds support for Ollama Embeddings, requires Ollama running locally | embeddings-ollama |
| huggingface | Adds support for local Embeddings using HuggingFace | embeddings-huggingface |
| openai | Adds support for OpenAI Embeddings, requires OpenAI API key | embeddings-openai |
| sagemaker | Adds support for Amazon Sagemaker Embeddings, requires Sagemaker endpoints | embeddings-sagemaker |
| azopenai | Adds support for Azure OpenAI Embeddings, requires Azure endpoints | embeddings-azopenai |
| gemini | Adds support for Gemini Embeddings, requires Gemini API key | embeddings-gemini |
#### Vector Stores
| **Option** | **Description** | **Extra** |
|------------------|-----------------------------------------|-------------------------|
| **qdrant** | Adds support for Qdrant vector store | vector-stores-qdrant |
| milvus | Adds support for Milvus vector store | vector-stores-milvus |
| chroma | Adds support for Chroma DB vector store | vector-stores-chroma |
| postgres | Adds support for Postgres vector store | vector-stores-postgres |
| clickhouse | Adds support for Clickhouse vector store| vector-stores-clickhouse|
#### UI
| **Option** | **Description** | **Extra** |
|--------------|------------------------------------------|-----------|
| Gradio | Adds support for UI using Gradio | ui |
<Callout intent = "warning">
A working **Gradio UI client** is provided to test the API, together with a set of useful tools such as bulk
model download script, ingestion script, documents folder watch, etc. Please refer to the [UI alternatives](/manual/user-interface/alternatives) page for more UI alternatives.
</Callout>
## Recommended Setups
These are just some examples of recommended setups. You can mix and match the different options to fit your needs.
You'll find more information in the Manual section of the documentation.
> **Important for Windows**: In the examples below on how to run PrivateGPT with `make run`, the `PGPT_PROFILES` env var is set inline following Unix command-line syntax (works on macOS and Linux).
If you are using Windows, you'll need to set the env var in a different way, for example:
```powershell
# Powershell
$env:PGPT_PROFILES="ollama"
make run
```
or
```cmd
# CMD
set PGPT_PROFILES=ollama
make run
```
Refer to the [troubleshooting](./troubleshooting) section for specific issues you might encounter.
### Local, Ollama-powered setup - RECOMMENDED
**The easiest way to run PrivateGPT fully locally** is to depend on Ollama for the LLM. Ollama provides local LLMs and Embeddings that are super easy to install and use, abstracting the complexity of GPU support. It's the recommended setup for local development.
Go to [ollama.ai](https://ollama.ai/) and follow the instructions to install Ollama on your machine.
After the installation, make sure the Ollama desktop app is closed.
Now, start Ollama service (it will start a local inference server, serving both the LLM and the Embeddings):
```bash
ollama serve
```
Install the models to be used. The default `settings-ollama.yaml` is configured to use the llama3.1 8b LLM (~4GB) and the nomic-embed-text Embeddings (~275MB).
By default, PGPT will automatically pull models as needed. This behavior can be changed by modifying the `ollama.autopull_models` property.
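As a sketch of what that could look like in your settings file (the nesting below assumes the property lives under an `ollama` section, as its dotted name suggests):

```yaml
ollama:
  autopull_models: false  # assumed key placement; set to false to disable automatic pulling
```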
In any case, if you want to manually pull models, run the following commands:
```bash
ollama pull llama3.1
ollama pull nomic-embed-text
```
Once done, on a different terminal, you can install PrivateGPT with the following command:
```bash
poetry install --extras "ui llms-ollama embeddings-ollama vector-stores-qdrant"
```
Once installed, you can run PrivateGPT. Make sure you have a working Ollama running locally before running the following command.
```bash
PGPT_PROFILES=ollama make run
```
PrivateGPT will use the already existing `settings-ollama.yaml` settings file, which is already configured to use Ollama LLM and Embeddings, and Qdrant. Review it and adapt it to your needs (different models, different Ollama port, etc.)
The UI will be available at http://localhost:8001
### Private, Sagemaker-powered setup
If you need more performance, you can run a version of PrivateGPT that relies on powerful AWS Sagemaker machines to serve the LLM and Embeddings.
You need to have access to sagemaker inference endpoints for the LLM and / or the embeddings, and have AWS credentials properly configured.
Edit the `settings-sagemaker.yaml` file to include the correct Sagemaker endpoints.
Then, install PrivateGPT with the following command:
```bash
poetry install --extras "ui llms-sagemaker embeddings-sagemaker vector-stores-qdrant"
```
Once installed, you can run PrivateGPT with the following command:
```bash
PGPT_PROFILES=sagemaker make run
```
PrivateGPT will use the already existing `settings-sagemaker.yaml` settings file, which is already configured to use Sagemaker LLM and Embeddings endpoints, and Qdrant.
The UI will be available at http://localhost:8001
### Non-Private, OpenAI-powered test setup
If you want to test PrivateGPT with OpenAI's LLM and Embeddings (taking into account that your data is going to OpenAI!), follow these steps.
You need an OpenAI API key to run this setup.
Edit the `settings-openai.yaml` file to include the correct API key. Never commit it! It's a secret! As an alternative to editing `settings-openai.yaml`, you can just set the env var OPENAI_API_KEY.
Then, install PrivateGPT with the following command:
```bash
poetry install --extras "ui llms-openai embeddings-openai vector-stores-qdrant"
```
Once installed, you can run PrivateGPT.
```bash
PGPT_PROFILES=openai make run
```
PrivateGPT will use the already existing `settings-openai.yaml` settings file, which is already configured to use OpenAI LLM and Embeddings endpoints, and Qdrant.
The UI will be available at http://localhost:8001
### Non-Private, Azure OpenAI-powered test setup
If you want to test PrivateGPT with Azure OpenAI's LLM and Embeddings (taking into account that your data is going to Azure OpenAI!), follow these steps.
You need to have access to Azure OpenAI inference endpoints for the LLM and / or the embeddings, and have Azure OpenAI credentials properly configured.
Edit the `settings-azopenai.yaml` file to include the correct Azure OpenAI endpoints.
Then, install PrivateGPT with the following command:
```bash
poetry install --extras "ui llms-azopenai embeddings-azopenai vector-stores-qdrant"
```
Once installed, you can run PrivateGPT.
```bash
PGPT_PROFILES=azopenai make run
```
PrivateGPT will use the already existing `settings-azopenai.yaml` settings file, which is already configured to use Azure OpenAI LLM and Embeddings endpoints, and Qdrant.
The UI will be available at http://localhost:8001
### Local, Llama-CPP powered setup
If you want to run PrivateGPT fully locally without relying on Ollama, you can run the following command:
```bash
poetry install --extras "ui llms-llama-cpp embeddings-huggingface vector-stores-qdrant"
```
In order for local LLM and embeddings to work, you need to download the models to the `models` folder. You can do so by running the `setup` script:
```bash
poetry run python scripts/setup
```
Once installed, you can run PrivateGPT with the following command:
```bash
PGPT_PROFILES=local make run
```
PrivateGPT will load the already existing `settings-local.yaml` file, which is already configured to use LlamaCPP LLM, HuggingFace embeddings and Qdrant.
The UI will be available at http://localhost:8001
#### Llama-CPP support
For PrivateGPT to run fully locally without Ollama, Llama.cpp is required and in
particular [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
is used.
You'll need to have a valid C++ compiler like gcc installed. See [Troubleshooting: C++ Compiler](#troubleshooting-c-compiler) for more details.
> It's highly encouraged that you fully read llama-cpp and llama-cpp-python documentation relevant to your platform.
> Running into installation issues is very likely, and you'll need to troubleshoot them yourself.
##### Llama-CPP OSX GPU support
You will need to build [llama.cpp](https://github.com/ggerganov/llama.cpp) with metal support.
To do that, you need to install `llama.cpp`'s python binding `llama-cpp-python` through pip, with the compilation flag
that activates `METAL`: you have to pass `-DLLAMA_METAL=on` to the CMake command that `pip` runs for you (see below).
In other words, you should simply run:
```bash
CMAKE_ARGS="-DLLAMA_METAL=on" pip install --force-reinstall --no-cache-dir llama-cpp-python
```
The above command will force the re-installation of `llama-cpp-python` with `METAL` support by compiling
`llama.cpp` locally with your `METAL` libraries (shipped by default with your macOS).
More information is available in the documentation of the libraries themselves:
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python#installation-with-hardware-acceleration)
* [llama-cpp-python's documentation](https://llama-cpp-python.readthedocs.io/en/latest/#installation-with-hardware-acceleration)
* [llama.cpp](https://github.com/ggerganov/llama.cpp#build)
##### Llama-CPP Windows NVIDIA GPU support
Windows GPU support is done through CUDA.
Follow the instructions on the original [llama.cpp](https://github.com/ggerganov/llama.cpp) repo to install the required
dependencies.
Some tips to get it working with an NVIDIA card and CUDA (Tested on Windows 10 with CUDA 11.5 RTX 3070):
* Install latest VS2022 (and build tools) https://visualstudio.microsoft.com/vs/community/
* Install CUDA toolkit https://developer.nvidia.com/cuda-downloads
* Verify your installation is correct by running `nvcc --version` and `nvidia-smi`, ensure your CUDA version is up to
date and your GPU is detected.
* [Optional] Install CMake to troubleshoot building issues by compiling llama.cpp directly https://cmake.org/download/
If you have all required dependencies properly configured running the
following powershell command should succeed.
```powershell
$env:CMAKE_ARGS='-DLLAMA_CUBLAS=on'; poetry run pip install --force-reinstall --no-cache-dir llama-cpp-python
```
If your installation was correct, you should see a message similar to the following next
time you start the server `BLAS = 1`.
```console
llama_new_context_with_model: total VRAM used: 4857.93 MB (model: 4095.05 MB, context: 762.87 MB)
AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 0 | VSX = 0 |
```
Note that llama.cpp offloads matrix calculations to the GPU but the performance is
still hit heavily due to latency between CPU and GPU communication. You might need to tweak
batch sizes and other parameters to get the best performance for your particular system.
##### Llama-CPP Linux NVIDIA GPU support and Windows-WSL
Linux GPU support is done through CUDA.
Follow the instructions on the original [llama.cpp](https://github.com/ggerganov/llama.cpp) repo to install the required
external
dependencies.
Some tips:
* Make sure you have an up-to-date C++ compiler
* Install CUDA toolkit https://developer.nvidia.com/cuda-downloads
* Verify your installation is correct by running `nvcc --version` and `nvidia-smi`, ensure your CUDA version is up to
date and your GPU is detected.
After that running the following command in the repository will install llama.cpp with GPU support:
```bash
CMAKE_ARGS='-DLLAMA_CUBLAS=on' poetry run pip install --force-reinstall --no-cache-dir llama-cpp-python
```
If your installation was correct, you should see a message similar to the following next
time you start the server `BLAS = 1`.
```
llama_new_context_with_model: total VRAM used: 4857.93 MB (model: 4095.05 MB, context: 762.87 MB)
AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 0 | VSX = 0 |
```
##### Llama-CPP Linux AMD GPU support
Linux GPU support is done through ROCm.
Some tips:
* Install ROCm from [quick-start install guide](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/tutorial/quick-start.html)
* [Install PyTorch for ROCm](https://rocm.docs.amd.com/projects/radeon/en/latest/docs/install/install-pytorch.html)
```bash
wget https://repo.radeon.com/rocm/manylinux/rocm-rel-6.0/torch-2.1.1%2Brocm6.0-cp311-cp311-linux_x86_64.whl
poetry run pip install --force-reinstall --no-cache-dir torch-2.1.1+rocm6.0-cp311-cp311-linux_x86_64.whl
```
* Install bitsandbytes for ROCm
```bash
PYTORCH_ROCM_ARCH=gfx900,gfx906,gfx908,gfx90a,gfx1030,gfx1100,gfx1101,gfx940,gfx941,gfx942
BITSANDBYTES_VERSION=62353b0200b8557026c176e74ac48b84b953a854
git clone https://github.com/arlo-phoenix/bitsandbytes-rocm-5.6
cd bitsandbytes-rocm-5.6
git checkout ${BITSANDBYTES_VERSION}
make hip ROCM_TARGET=${PYTORCH_ROCM_ARCH} ROCM_HOME=/opt/rocm/
pip install . --extra-index-url https://download.pytorch.org/whl/nightly
```
After that running the following command in the repository will install llama.cpp with GPU support:
```bash
LLAMA_CPP_PYTHON_VERSION=0.2.56
DAMDGPU_TARGETS=gfx900;gfx906;gfx908;gfx90a;gfx1030;gfx1100;gfx1101;gfx940;gfx941;gfx942
CMAKE_ARGS="-DLLAMA_HIPBLAS=ON -DCMAKE_C_COMPILER=/opt/rocm/llvm/bin/clang -DCMAKE_CXX_COMPILER=/opt/rocm/llvm/bin/clang++ -DAMDGPU_TARGETS=${DAMDGPU_TARGETS}" poetry run pip install --force-reinstall --no-cache-dir llama-cpp-python==${LLAMA_CPP_PYTHON_VERSION}
```
If your installation was correct, you should see a message similar to the following next time you start the server `BLAS = 1`.
```
AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 |
```
##### Llama-CPP Known issues and Troubleshooting
Running LLMs locally still has a lot of sharp edges, especially on non-Linux platforms.
You might encounter several issues:
* Performance: RAM or VRAM usage is very high, your computer might experience slowdowns or even crashes.
* GPU Virtualization on Windows and OSX: simply not possible with Docker Desktop; you have to run the server directly on
the host.
* Building errors: Some of PrivateGPT dependencies need to build native code, and they might fail on some platforms.
Most likely you are missing some dev tools in your machine (updated C++ compiler, CUDA is not on PATH, etc.).
If you encounter any of these issues, please open an issue and we'll try to help.
One of the first reflexes to adopt is: get more information.
If, during your installation, something does not go as planned, retry in *verbose* mode, and see what goes wrong.
For example, when installing packages with `pip install`, you can add the option `-vvv` to show the details of the installation.
##### Llama-CPP Troubleshooting: C++ Compiler
If you encounter an error while building a wheel during the `pip install` process, you may need to install a C++
compiler on your computer.
**For Windows 10/11**
To install a C++ compiler on Windows 10/11, follow these steps:
1. Install Visual Studio 2022.
2. Make sure the following components are selected:
* Universal Windows Platform development
* C++ CMake tools for Windows
3. Download the MinGW installer from the [MinGW website](https://sourceforge.net/projects/mingw/).
4. Run the installer and select the `gcc` component.
**For OSX**
1. Check if you have a C++ compiler installed, `Xcode` should have done it for you. To install Xcode, go to the App
Store and search for Xcode and install it. **Or** you can install the command line tools by running `xcode-select --install`.
2. If not, you can install clang or gcc with homebrew `brew install gcc`
##### Llama-CPP Troubleshooting: Mac Running Intel
When running a Mac with Intel hardware (not M1), you may run into _clang: error: the clang compiler does not support '-march=native'_ during pip install.
If so, set your archflags during pip install, e.g.: _ARCHFLAGS="-arch x86_64" pip3 install -r requirements.txt_


@@ -0,0 +1,49 @@
# Downloading Gated and Private Models
Many models are gated or private, requiring special access to use them. Follow these steps to gain access and set up your environment for using these models.
## Accessing Gated Models
1. **Request Access:**
Follow the instructions provided [here](https://huggingface.co/docs/hub/en/models-gated) to request access to the gated model.
2. **Generate a Token:**
Once you have access, generate a token by following the instructions [here](https://huggingface.co/docs/hub/en/security-tokens).
3. **Set the Token:**
Add the generated token to your `settings.yaml` file:
```yaml
huggingface:
  access_token: <your-token>
```
Alternatively, set the `HF_TOKEN` environment variable:
```bash
export HF_TOKEN=<your-token>
```
# Tokenizer Setup
PrivateGPT uses the `AutoTokenizer` library to tokenize input text accurately. It connects to HuggingFace's API to download the appropriate tokenizer for the specified model.
## Configuring the Tokenizer
1. **Specify the Model:**
In your `settings.yaml` file, specify the model you want to use:
```yaml
llm:
  tokenizer: meta-llama/Meta-Llama-3.1-8B-Instruct
```
2. **Set Access Token for Gated Models:**
If you are using a gated model, ensure the `access_token` is set as mentioned in the previous section.
This configuration ensures that PrivateGPT can download and use the correct tokenizer for the model you are working with.
# Embedding dimensions mismatch
If you encounter an error message like `Embedding dimensions mismatch`, it is likely due to the embedding model and
current vector dimension mismatch. To resolve this issue, ensure that the model and the input data have the same vector dimensions.
By default, PrivateGPT uses `nomic-embed-text` embeddings, which have a vector dimension of 768.
If you are using a different embedding model, ensure that the vector dimensions match the model's output.
<Callout intent = "warning">
In versions below 0.6.0, the default embedding model was `BAAI/bge-small-en-v1.5` in the `huggingface` setup.
If you plan to reuse the old generated embeddings, you need to update the `settings.yaml` file to use the correct embedding model:
```yaml
huggingface:
  embedding_hf_model_name: BAAI/bge-small-en-v1.5
embedding:
  embed_dim: 384
```
</Callout>


@@ -0,0 +1,14 @@
# Reset Local documents database
When running in a local setup, you can remove all ingested documents by simply
deleting all contents of `local_data` folder (except .gitignore).
To simplify this process, you can use the command:
```bash
make wipe
```
# Advanced usage
You can also delete specific documents from your storage by using the
`DELETE` endpoint of the Ingestion API.
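A hypothetical sketch of such a call (the exact route and the way to obtain the document ID are assumptions; check the Ingestion section of the API reference for the authoritative details):

```bash
# Delete a previously ingested document by its ID (assumed route shape)
curl -X DELETE "http://localhost:8001/v1/ingest/<doc_id>"
```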


@@ -0,0 +1,137 @@
# Ingesting & Managing Documents
The ingestion of documents can be done in different ways:
* Using the `/ingest` API
* Using the Gradio UI
* Using the Bulk Local Ingestion functionality (check next section)
## Bulk Local Ingestion
You will need to activate `data.local_ingestion.enabled` in your setting file to use this feature. Additionally,
it is probably a good idea to set `data.local_ingestion.allow_ingest_from` to specify which folders are allowed to be ingested.
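A minimal sketch of what this could look like in your settings file (the exact nesting and the list form of `allow_ingest_from` are assumptions based on the dotted property names; the folder path is a placeholder):

```yaml
data:
  local_ingestion:
    enabled: true
    allow_ingest_from:
      - /path/to/allowed/folder  # only files under this folder may be ingested
```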
<Callout intent = "warning">
Be careful enabling this feature in a production environment, as it can be a security risk: it allows users to
ingest any local file the server has permissions to read.
</Callout>
When you are running PrivateGPT in a fully local setup, you can ingest a complete folder for convenience (containing
pdf, text files, etc.)
and optionally watch changes on it with the command:
```bash
make ingest /path/to/folder -- --watch
```
To log the processed and failed files to an additional file, use:
```bash
make ingest /path/to/folder -- --watch --log-file /path/to/log/file.log
```
**Note for Windows Users:** Depending on your Windows version and whether you are using PowerShell to execute
PrivateGPT API calls, you may need to include the parameter name before passing the folder path for consumption:
```bash
make ingest arg=/path/to/folder -- --watch --log-file /path/to/log/file.log
```
After ingestion is complete, you should be able to chat with your documents
by navigating to http://localhost:8001 and using the option `Query documents`,
or using the completions / chat API.
## Ingestion troubleshooting
### Running out of memory
To avoid running out of memory, you should ingest your documents without the LLM loaded in your (video) memory.
To do so, you should change your configuration to set `llm.mode: mock`.
You can also use the existing `PGPT_PROFILES=mock` that will set the following configuration for you:
```yaml
llm:
  mode: mock
embedding:
  mode: local
```
This configuration allows you to use hardware acceleration for creating embeddings while avoiding loading the full LLM into (video) memory.
Once your documents are ingested, you can set the `llm.mode` value back to `local` (or your previous custom value).
### Ingestion speed
The ingestion speed depends on the number of documents you are ingesting, and the size of each document.
To speed up the ingestion, you can change the ingestion mode in configuration.
The following ingestion modes exist:
* `simple`: historic behavior, ingest one document at a time, sequentially
* `batch`: read, parse, and embed multiple documents using batches (batch read, and then batch parse, and then batch embed)
* `parallel`: read, parse, and embed multiple documents in parallel. This is the fastest ingestion mode for local setup.
* `pipeline`: Alternative to parallel.
To change the ingestion mode, you can use the `embedding.ingest_mode` configuration value. The default value is `simple`.
To configure the number of workers used for parallel or batched ingestion, you can use
the `embedding.count_workers` configuration value. If you set this value too high, you might run out of
memory, so be mindful when setting this value. The default value is `2`.
For `batch` mode, you can easily set this value to your number of threads available on your CPU without
running out of memory. For `parallel` mode, you should be more careful, and set this value to a lower value.
The configuration below should be enough for users who want to stress more their hardware:
```yaml
embedding:
  ingest_mode: parallel
  count_workers: 4
```
If your hardware is powerful enough and you are loading heavy documents, you can increase the number of workers.
It is recommended to do your own tests to find the optimal value for your hardware.
If you have a `bash` shell, you can use this set of commands to do your own benchmark:
```bash
# Wipe your local data, to put yourself in a clean state
# This will delete all your ingested documents
make wipe
time PGPT_PROFILES=mock python ./scripts/ingest_folder.py ~/my-dir/to-ingest/
```
## Supported file formats
PrivateGPT by default supports all the file formats that contain clear text (for example, `.txt` files, `.html`, etc.).
However, these text-based file formats are only treated as plain text files, and are not pre-processed in any other way.
It also supports the following file formats:
* `.hwp`
* `.pdf`
* `.docx`
* `.pptx`
* `.ppt`
* `.pptm`
* `.jpg`
* `.png`
* `.jpeg`
* `.mp3`
* `.mp4`
* `.csv`
* `.epub`
* `.md`
* `.mbox`
* `.ipynb`
* `.json`
<Callout intent = "info">
While `PrivateGPT` supports these file formats, it **might** require additional
dependencies to be installed in your python's virtual environment.
For example, if you try to ingest `.epub` files, `PrivateGPT` might fail to do it, and will instead display an
explanatory error asking you to download the necessary dependencies to install this file format.
</Callout>
<Callout intent = "info">
**Other file formats might work**, but they will be considered as plain text
files (in other words, they will be ingested as `.txt` files).
</Callout>


@@ -0,0 +1,234 @@
## Running the Server
PrivateGPT supports running with different LLMs & setups.
### Local models
Both the LLM and the Embeddings model will run locally.
Make sure you have followed the *Local LLM requirements* section before moving on.
This command will start PrivateGPT using the `settings.yaml` (default profile) together with the `settings-local.yaml`
configuration files. By default, it will enable both the API and the Gradio UI. Run:
```bash
PGPT_PROFILES=local make run
```
or
```bash
PGPT_PROFILES=local poetry run python -m private_gpt
```
When the server is started it will print a log *Application startup complete*.
Navigate to http://localhost:8001 to use the Gradio UI or to http://localhost:8001/docs (API section) to try the API
using Swagger UI.
#### Customizing low level parameters
Currently, not all the parameters of `llama.cpp` and `llama-cpp-python` are available at PrivateGPT's `settings.yaml` file.
In case you need to customize parameters such as the number of layers loaded into the GPU, you might change
these in `llm_component.py`, located at `private_gpt/components/llm/llm_component.py`.
##### Available LLM config options
The `llm` section of the settings allows for the following configurations:
- `mode`: how to run your llm
- `max_new_tokens`: this lets you configure the number of new tokens the LLM will generate and add to the context window (by default Llama.cpp uses `256`)
Example:
```yaml
llm:
  mode: local
  max_new_tokens: 256
```
If you are getting an out of memory error, you might also try a smaller model or stick to the proposed
recommended models, instead of custom tuning the parameters.
### Using OpenAI
If you cannot run a local model (because you don't have a GPU, for example) or for testing purposes, you may
decide to run PrivateGPT using OpenAI as the LLM and Embeddings model.
In order to do so, create a profile `settings-openai.yaml` with the following contents:
```yaml
llm:
  mode: openai

openai:
  api_base: <openai-api-base-url> # Defaults to https://api.openai.com/v1
  api_key: <your_openai_api_key> # You could skip this configuration and use the OPENAI_API_KEY env var instead
  model: <openai_model_to_use> # Optional model to use. Default is "gpt-3.5-turbo"
  # Note: Open AI Models are listed here: https://platform.openai.com/docs/models
```
And run PrivateGPT loading that profile you just created:
`PGPT_PROFILES=openai make run`
or
`PGPT_PROFILES=openai poetry run python -m private_gpt`
When the server is started it will print a log *Application startup complete*.
Navigate to http://localhost:8001 to use the Gradio UI or to http://localhost:8001/docs (API section) to try the API.
You'll notice the speed and quality of response is higher, given you are using OpenAI's servers for the heavy
computations.
### Using OpenAI compatible API
Many tools, including [LocalAI](https://localai.io/) and [vLLM](https://docs.vllm.ai/en/latest/),
support serving local models with an OpenAI compatible API. Even when overriding the `api_base`,
using the `openai` mode doesn't allow you to use custom models. Instead, you should use the `openailike` mode:
```yaml
llm:
  mode: openailike
```
This mode uses the same settings as the `openai` mode.
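For illustration, a hypothetical `settings-vllm.yaml` profile could reuse the `openai` block to point at a locally served model (the endpoint URL, API key and model name below are placeholders, not values from this project):

```yaml
llm:
  mode: openailike

openai:
  api_base: http://localhost:8000/v1          # placeholder: your OpenAI-compatible server
  api_key: EMPTY                              # placeholder: many local servers ignore the key
  model: mistralai/Mistral-7B-Instruct-v0.2   # placeholder: the model served by vLLM
```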
As an example, you can follow the [vLLM quickstart guide](https://docs.vllm.ai/en/latest/getting_started/quickstart.html#openai-compatible-server)
to run an OpenAI compatible server. Then, you can run PrivateGPT using the `settings-vllm.yaml` profile:
`PGPT_PROFILES=vllm make run`
### Using Azure OpenAI
If you cannot run a local model (because you don't have a GPU, for example) or for testing purposes, you may
decide to run PrivateGPT using Azure OpenAI as the LLM and Embeddings model.
In order to do so, create a profile `settings-azopenai.yaml` with the following contents:
```yaml
llm:
  mode: azopenai
embedding:
  mode: azopenai

azopenai:
  api_key: <your_azopenai_api_key> # You could skip this configuration and use the AZ_OPENAI_API_KEY env var instead
  azure_endpoint: <your_azopenai_endpoint> # You could skip this configuration and use the AZ_OPENAI_ENDPOINT env var instead
  api_version: <api_version> # The API version to use. Default is "2023_05_15"
  embedding_deployment_name: <your_embedding_deployment_name> # You could skip this configuration and use the AZ_OPENAI_EMBEDDING_DEPLOYMENT_NAME env var instead
  embedding_model: <openai_embeddings_to_use> # Optional model to use. Default is "text-embedding-ada-002"
  llm_deployment_name: <your_model_deployment_name> # You could skip this configuration and use the AZ_OPENAI_LLM_DEPLOYMENT_NAME env var instead
  llm_model: <openai_model_to_use> # Optional model to use. Default is "gpt-35-turbo"
```
And run PrivateGPT loading that profile you just created:
`PGPT_PROFILES=azopenai make run`
or
`PGPT_PROFILES=azopenai poetry run python -m private_gpt`
When the server is started it will print a log *Application startup complete*.
Navigate to http://localhost:8001 to use the Gradio UI or to http://localhost:8001/docs (API section) to try the API.
You'll notice the speed and quality of response is higher, given you are using Azure OpenAI's servers for the heavy
computations.
### Using AWS Sagemaker
For a fully private & performant setup, you can choose to have both your LLM and Embeddings model deployed using Sagemaker.
Note: how to deploy models on Sagemaker is out of the scope of this documentation.
In order to do so, create a profile `settings-sagemaker.yaml` with the following contents (remember to
update the values of the llm_endpoint_name and embedding_endpoint_name to yours):
```yaml
llm:
  mode: sagemaker

sagemaker:
  llm_endpoint_name: huggingface-pytorch-tgi-inference-2023-09-25-19-53-32-140
  embedding_endpoint_name: huggingface-pytorch-inference-2023-11-03-07-41-36-479
```
And run PrivateGPT loading that profile you just created:
`PGPT_PROFILES=sagemaker make run`
or
`PGPT_PROFILES=sagemaker poetry run python -m private_gpt`
When the server is started it will print a log *Application startup complete*.
Navigate to http://localhost:8001 to use the Gradio UI or to http://localhost:8001/docs (API section) to try the API.
### Using Ollama
Another option for a fully private setup is using [Ollama](https://ollama.ai/).
Note: how to deploy Ollama and pull models onto it is out of the scope of this documentation.
In order to do so, create a profile `settings-ollama.yaml` with the following contents:
```yaml
llm:
  mode: ollama

ollama:
  model: <ollama_model_to_use> # Required Model to use.
  # Note: Ollama Models are listed here: https://ollama.ai/library
  # Be sure to pull the model to your Ollama server
  api_base: <ollama-api-base-url> # Defaults to http://localhost:11434
```
And run PrivateGPT loading that profile you just created:
`PGPT_PROFILES=ollama make run`
or
`PGPT_PROFILES=ollama poetry run python -m private_gpt`
When the server is started it will print a log *Application startup complete*.
Navigate to http://localhost:8001 to use the Gradio UI or to http://localhost:8001/docs (API section) to try the API.
### Using IPEX-LLM
For a fully private setup on Intel GPUs (such as a local PC with an iGPU, or discrete GPUs like Arc, Flex, and Max), you can use [IPEX-LLM](https://github.com/intel-analytics/ipex-llm).
To deploy Ollama and pull models using IPEX-LLM, please refer to [this guide](https://ipex-llm.readthedocs.io/en/latest/doc/LLM/Quickstart/ollama_quickstart.html). Then, follow the same steps outlined in the [Using Ollama](#using-ollama) section to create a `settings-ollama.yaml` profile and run the private-GPT server.
### Using Gemini
If you cannot run a local model (because you don't have a GPU, for example) or for testing purposes, you may
decide to run PrivateGPT using Gemini as the LLM and Embeddings model. In addition, you will benefit from
multimodal inputs, such as text and images, in a very large contextual window.
In order to do so, create a profile `settings-gemini.yaml` with the following contents:
```yaml
llm:
  mode: gemini
embedding:
  mode: gemini

gemini:
  api_key: <your_gemini_api_key> # You could skip this configuration and use the GEMINI_API_KEY env var instead
  model: <gemini_model_to_use> # Optional model to use. Default is "models/gemini-pro"
  embedding_model: <gemini_embeddings_to_use> # Optional model to use. Default is "models/embedding-001"
```
And run PrivateGPT loading that profile you just created:
`PGPT_PROFILES=gemini make run`
or
`PGPT_PROFILES=gemini poetry run python -m private_gpt`
When the server is started it will print a log *Application startup complete*.
Navigate to http://localhost:8001 to use the Gradio UI or to http://localhost:8001/docs (API section) to try the API.


@@ -0,0 +1,66 @@
## NodeStores
PrivateGPT supports **Simple** and [Postgres](https://www.postgresql.org/) providers. Simple being the default.
In order to select one or the other, set the `nodestore.database` property in the `settings.yaml` file to `simple` or `postgres`.
```yaml
nodestore:
  database: simple
```
### Simple Document Store
The simple document store persists data using a combination of in-memory and disk storage.
Enabling the simple document store is an excellent choice for small projects or proofs of concept where you need to persist data while maintaining minimal setup complexity. To get started, set the nodestore.database property in your settings.yaml file as follows:
```yaml
nodestore:
  database: simple
```
The beauty of the simple document store is its flexibility and ease of implementation. It provides a solid foundation for managing and retrieving data without the need for complex setup or configuration. The combination of in-memory processing and disk persistence ensures that you can efficiently handle small to medium-sized datasets while maintaining data consistency across runs.
### Postgres Document Store
To enable Postgres, set the `nodestore.database` property in the `settings.yaml` file to `postgres` and install the `storage-nodestore-postgres` extra. Note: vector embeddings storage in Postgres is configured separately.
```bash
poetry install --extras storage-nodestore-postgres
```
The available configuration options are:
| Field | Description |
|---------------|-----------------------------------------------------------|
| **host** | The server hosting the Postgres database. Default is `localhost` |
| **port** | The port on which the Postgres database is accessible. Default is `5432` |
| **database** | The specific database to connect to. Default is `postgres` |
| **user** | The username for database access. Default is `postgres` |
| **password** | The password for database access. (Required) |
| **schema_name** | The database schema to use. Default is `private_gpt` |
For example:
```yaml
nodestore:
  database: postgres

postgres:
  host: localhost
  port: 5432
  database: postgres
  user: postgres
  password: <PASSWORD>
  schema_name: private_gpt
```
Given the above configuration, two PostgreSQL tables will be created upon successful connection: one storing metadata related to the index and another storing the document data itself.
```
postgres=# \dt private_gpt.*
List of relations
Schema | Name | Type | Owner
-------------+-----------------+-------+--------------
private_gpt | data_docstore | table | postgres
private_gpt | data_indexstore | table | postgres
postgres=#
```


@@ -0,0 +1,36 @@
## Enhancing Response Quality with Reranking
PrivateGPT offers a reranking feature aimed at optimizing response generation by filtering out irrelevant documents, potentially leading to faster response times and enhanced relevance of answers generated by the LLM.
### Enabling Reranking
Document reranking can significantly improve the efficiency and quality of the responses by pre-selecting the most relevant documents before generating an answer. To leverage this feature, ensure that it is enabled in the RAG settings and consider adjusting the parameters to best fit your use case.
#### Additional Requirements
Before enabling reranking, you must install additional dependencies:
```bash
poetry install --extras rerank-sentence-transformers
```
This command installs dependencies for the cross-encoder reranker from sentence-transformers, which is currently the only supported method by PrivateGPT for document reranking.
#### Configuration
To enable and configure reranking, adjust the `rag` section within the `settings.yaml` file. Here are the key settings to consider:
- `similarity_top_k`: Determines the number of documents to initially retrieve and consider for reranking. This value should be larger than `top_n`.
- `rerank`:
- `enabled`: Set to `true` to activate the reranking feature.
- `top_n`: Specifies the number of documents to use in the final answer generation process, chosen from the top-ranked documents provided by `similarity_top_k`.
Example configuration snippet:
```yaml
rag:
  similarity_top_k: 10 # Number of documents to retrieve and consider for reranking
  rerank:
    enabled: true
    top_n: 3 # Number of top-ranked documents to use for generating the answer
```


@@ -0,0 +1,85 @@
# Settings and profiles for your private GPT
The configuration of your private GPT server is done thanks to `settings` files (more precisely `settings.yaml`).
These text files are written using the [YAML](https://en.wikipedia.org/wiki/YAML) syntax.
While PrivateGPT ships with safe and universal configuration files, you might want to quickly customize your
PrivateGPT, and this can be done using the `settings` files.
This project defines the concept of **profiles** (or configuration profiles).
This mechanism, driven by environment variables, gives you the ability to easily switch between
configurations you've made.
A typical use case of profiles is to easily switch between LLMs and embeddings.
To be a bit more precise, you can change the language (to French, Spanish, Italian, English, etc) by simply changing
the profile you've selected; no code changes required!
PrivateGPT is configured through *profiles* that are defined using yaml files, and selected through env variables.
The full list of properties configurable can be found in `settings.yaml`.
## How to know which profiles exist
Given that a profile `foo_bar` points to the file `settings-foo_bar.yaml` and vice-versa, you simply have to look
at the files starting with `settings` and ending in `.yaml`.
## How to use an existing profile
**Please note that the syntax to set the value of an environment variable depends on your OS**.
You have to set environment variable `PGPT_PROFILES` to the name of the profile you want to use.
For example, on **linux and macOS**, this gives:
```bash
export PGPT_PROFILES=my_profile_name_here
```
Windows Command Prompt (cmd) has a different syntax:
```shell
set PGPT_PROFILES=my_profile_name_here
```
Windows Powershell has a different syntax:
```shell
$env:PGPT_PROFILES="my_profile_name_here"
```
If the above is not working, you might want to try other ways to set an env variable in your Windows terminal.
---
Once you've set this environment variable to the desired profile, you can simply launch your PrivateGPT,
and it will run using your profile on top of the default configuration.
## Reference
Additional details on the profiles are described in this section
### Environment variable `PGPT_SETTINGS_FOLDER`
The location of the settings folder. Defaults to the root of the project.
Should contain the default `settings.yaml` and any other `settings-{profile}.yaml`.
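For example (the path below is a placeholder):

```bash
# Load settings files from a custom folder instead of the project root
export PGPT_SETTINGS_FOLDER=/path/to/my/settings
```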
### Environment variable `PGPT_PROFILES`
By default, the profile definition in `settings.yaml` is loaded.
Using this env var you can load additional profiles; the format is a comma-separated list of profile names.
This will merge `settings-{profile}.yaml` on top of the base settings file.
For example:
`PGPT_PROFILES=local,cuda` will load `settings-local.yaml`
and `settings-cuda.yaml`; their contents will be merged, with
later profiles' properties overriding values of earlier ones, including the base `settings.yaml`.
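As a concrete sketch, on Linux or macOS this could look like:

```bash
# Load the base settings plus the local and cuda profiles (later profiles win)
export PGPT_PROFILES=local,cuda
make run
```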
During testing, the `test` profile will be active along with the default, therefore `settings-test.yaml`
file is required.
### Environment variables expansion
Configuration files can contain environment variables,
which will be expanded at runtime.
Expansion must follow the pattern `${VARIABLE_NAME:default_value}`.
For example, the following configuration will use the value of the `PORT`
environment variable or `8001` if it's not set.
Missing variables with no default will produce an error.
```yaml
server:
  port: ${PORT:8001}
```


@@ -0,0 +1,187 @@
## Vectorstores
PrivateGPT supports [Qdrant](https://qdrant.tech/), [Milvus](https://milvus.io/), [Chroma](https://www.trychroma.com/), [PGVector](https://github.com/pgvector/pgvector) and [ClickHouse](https://github.com/ClickHouse/ClickHouse) as vectorstore providers. Qdrant being the default.
In order to select one or the other, set the `vectorstore.database` property in the `settings.yaml` file to `qdrant`, `milvus`, `chroma`, `postgres` or `clickhouse`.
```yaml
vectorstore:
  database: qdrant
```
### Qdrant configuration
To enable Qdrant, set the `vectorstore.database` property in the `settings.yaml` file to `qdrant`.
Qdrant settings can be configured by setting values to the `qdrant` property in the `settings.yaml` file.
The available configuration options are:
| Field | Description |
|--------------|-------------|
| location | If `:memory:` - use in-memory Qdrant instance. If `str` - use it as a `url` parameter.|
| url | Either host or str of 'Optional[scheme], host, Optional[port], Optional[prefix]'. Eg. `http://localhost:6333` |
| port | Port of the REST API interface. Default: `6333` |
| grpc_port | Port of the gRPC interface. Default: `6334` |
| prefer_grpc | If `true` - use gRPC interface whenever possible in custom methods. |
| https | If `true` - use HTTPS(SSL) protocol.|
| api_key | API key for authentication in Qdrant Cloud.|
| prefix | If set, add `prefix` to the REST URL path. Example: `service/v1` will result in `http://localhost:6333/service/v1/{qdrant-endpoint}` for REST API.|
| timeout | Timeout for REST and gRPC API requests. Default: 5.0 seconds for REST and unlimited for gRPC |
| host | Host name of Qdrant service. If url and host are not set, defaults to 'localhost'.|
| path | Persistence path for QdrantLocal. Eg. `local_data/private_gpt/qdrant`|
| force_disable_check_same_thread | Force disable check_same_thread for QdrantLocal sqlite connection, defaults to True.|
By default Qdrant tries to connect to an instance of Qdrant server at `http://localhost:3000`.
To obtain a local setup (disk-based database) without running a Qdrant server, configure the `qdrant.path` value in settings.yaml:
```yaml
qdrant:
path: local_data/private_gpt/qdrant
```
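As a quick sanity check, the snippet below is a minimal sketch (not part of PrivateGPT) that opens the same on-disk Qdrant database with the `qdrant-client` package; it assumes the path matches the `qdrant.path` value above.
```python
# Sketch only: opens the disk-based Qdrant database configured above and lists
# its collections. Assumes qdrant-client is installed and the path matches
# the `qdrant.path` value in settings.yaml.
from qdrant_client import QdrantClient

client = QdrantClient(path="local_data/private_gpt/qdrant")
print(client.get_collections())
```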
### Milvus configuration
To enable Milvus, set the `vectorstore.database` property in the `settings.yaml` file to `milvus` and install the `milvus` extra.
```bash
poetry install --extras vector-stores-milvus
```
The available configuration options are:
| Field | Description |
|--------------|-------------|
| uri | Defaults to `local_data/private_gpt/milvus/milvus_local.db` as a local file. You can also point it to a more performant Milvus server running on Docker or Kubernetes, e.g. `http://localhost:19530`. To use Zilliz Cloud, set `uri` and `token` to the Endpoint and API key provided by Zilliz Cloud.|
| token | Token to pair with a Milvus server on Docker or Kubernetes, or the Zilliz Cloud API key.|
| collection_name | The name of the collection. Defaults to `milvus_db`.|
| overwrite | Overwrite the data in the collection if it already exists. Defaults to `True`. |
To obtain a local setup (disk-based database) without running a Milvus server, configure the `uri` value in `settings.yaml` to store data in `local_data/private_gpt/milvus/milvus_local.db`.
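As an illustration, here is a minimal sketch (not part of PrivateGPT) that opens the same local Milvus Lite file with `pymilvus`; the uri is assumed to match the setting described above.
```python
# Sketch only: assumes pymilvus (with Milvus Lite support) is installed and the
# uri matches the `milvus.uri` value described above.
from pymilvus import MilvusClient

client = MilvusClient(uri="local_data/private_gpt/milvus/milvus_local.db")
print(client.list_collections())
```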
### Chroma configuration
To enable Chroma, set the `vectorstore.database` property in the `settings.yaml` file to `chroma` and install the `vector-stores-chroma` extra.
```bash
poetry install --extras vector-stores-chroma
```
By default, `chroma` will use a disk-based database stored in `local_data_path / "chroma_db"`, with `local_data_path` being defined in `settings.yaml`.
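As an illustration, a minimal sketch (not part of PrivateGPT) that opens the same disk-based Chroma database; the path is an assumption based on the default `local_data_path`.
```python
# Sketch only: assumes chromadb is installed and that local_data_path resolves
# to local_data/private_gpt, as in the default settings.
import chromadb

client = chromadb.PersistentClient(path="local_data/private_gpt/chroma_db")
print(client.list_collections())
```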
### PGVector
To use the PGVector store, a [PostgreSQL](https://www.postgresql.org/) database with the PGVector extension is required.
To enable PGVector, set the `vectorstore.database` property in the `settings.yaml` file to `postgres` and install the `vector-stores-postgres` extra.
```bash
poetry install --extras vector-stores-postgres
```
PGVector settings can be configured by setting values to the `postgres` property in the `settings.yaml` file.
The available configuration options are:
| Field | Description |
|---------------|-----------------------------------------------------------|
| **host** | The server hosting the Postgres database. Default is `localhost` |
| **port** | The port on which the Postgres database is accessible. Default is `5432` |
| **database** | The specific database to connect to. Default is `postgres` |
| **user** | The username for database access. Default is `postgres` |
| **password** | The password for database access. (Required) |
| **schema_name** | The database schema to use. Default is `private_gpt` |
For example:
```yaml
vectorstore:
database: postgres
postgres:
host: localhost
port: 5432
database: postgres
user: postgres
password: <PASSWORD>
schema_name: private_gpt
```
The following table will be created in the database:
```
postgres=# \d private_gpt.data_embeddings
Table "private_gpt.data_embeddings"
Column | Type | Collation | Nullable | Default
-----------+-------------------+-----------+----------+---------------------------------------------------------
id | bigint | | not null | nextval('private_gpt.data_embeddings_id_seq'::regclass)
text | character varying | | not null |
metadata_ | json | | |
node_id | character varying | | |
embedding | vector(768) | | |
Indexes:
"data_embeddings_pkey" PRIMARY KEY, btree (id)
postgres=#
```
The dimension of the `embedding` column is set based on the `embedding.embed_dim` value. If the embedding model changes, this table may need to be dropped and recreated to avoid a dimension mismatch.
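To check the column dimension against your configured model, here is a minimal sketch (not part of PrivateGPT) using `psycopg2`; the connection values and table name simply mirror the example above.
```python
# Sketch only: assumes psycopg2 is installed and the connection values match
# the `postgres` settings shown above.
import psycopg2

conn = psycopg2.connect(
    host="localhost", port=5432, dbname="postgres",
    user="postgres", password="<PASSWORD>",
)
with conn, conn.cursor() as cur:
    cur.execute(
        "SELECT format_type(atttypid, atttypmod) FROM pg_attribute "
        "WHERE attrelid = 'private_gpt.data_embeddings'::regclass "
        "AND attname = 'embedding'"
    )
    print(cur.fetchone()[0])  # e.g. 'vector(768)', should match embedding.embed_dim
```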
### ClickHouse
To use ClickHouse as the vector store, a [ClickHouse](https://github.com/ClickHouse/ClickHouse) database is required.
To enable ClickHouse, set the `vectorstore.database` property in the `settings.yaml` file to `clickhouse` and install the `vector-stores-clickhouse` extra.
```bash
poetry install --extras vector-stores-clickhouse
```
ClickHouse settings can be configured by setting values to the `clickhouse` property in the `settings.yaml` file.
The available configuration options are:
| Field | Description |
|----------------------|----------------------------------------------------------------|
| **host** | The server hosting the ClickHouse database. Default is `localhost` |
| **port** | The port on which the ClickHouse database is accessible. Default is `8123` |
| **username** | The username for database access. Default is `default` |
| **password** | The password for database access. (Optional) |
| **database** | The specific database to connect to. Default is `__default__` |
| **secure** | Use https/TLS for secure connection to the server. Default is `false` |
| **interface** | The protocol used for the connection, either 'http' or 'https'. (Optional) |
| **settings** | Specific ClickHouse server settings to be used with the session. (Optional) |
| **connect_timeout** | Timeout in seconds for establishing a connection. (Optional) |
| **send_receive_timeout** | Read timeout in seconds for http connection. (Optional) |
| **verify** | Verify the server certificate in secure/https mode. (Optional) |
| **ca_cert** | Path to Certificate Authority root certificate (.pem format). (Optional) |
| **client_cert** | Path to TLS Client certificate (.pem format). (Optional) |
| **client_cert_key** | Path to the private key for the TLS Client certificate. (Optional) |
| **http_proxy** | HTTP proxy address. (Optional) |
| **https_proxy** | HTTPS proxy address. (Optional) |
| **server_host_name** | Server host name to be checked against the TLS certificate. (Optional) |
For example:
```yaml
vectorstore:
database: clickhouse
clickhouse:
host: localhost
port: 8443
username: admin
password: <PASSWORD>
database: embeddings
secure: false
```
The following table will be created in the database:
```
clickhouse-client
:) \d embeddings.llama_index
Table "llama_index"
№ | name | type | default_type | default_expression | comment | codec_expression | ttl_expression
----|-----------|----------------------------------------------|--------------|--------------------|---------|------------------|---------------
1 | id | String | | | | |
2 | doc_id | String | | | | |
3 | text | String | | | | |
4 | vector | Array(Float32) | | | | |
5 | node_info | Tuple(start Nullable(UInt64), end Nullable(UInt64)) | | | | |
6 | metadata | String | | | | |
clickhouse-client
```
The dimension of the embedding vectors is set based on the `embedding.embed_dim` value. If the embedding model changes, this table may need to be dropped and recreated to avoid a dimension mismatch.
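To confirm connectivity and inspect the created table, here is a minimal sketch (not part of PrivateGPT) using the `clickhouse-connect` driver; the values mirror the example configuration above.
```python
# Sketch only: assumes clickhouse-connect is installed and the values match the
# `clickhouse` settings shown above.
import clickhouse_connect

client = clickhouse_connect.get_client(
    host="localhost", port=8443, username="admin", password="<PASSWORD>",
    database="embeddings", secure=False,
)
print(client.query("SHOW TABLES").result_rows)
```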

View File

@@ -0,0 +1,42 @@
PrivateGPT provides an **API** containing all the building blocks required to
build **private, context-aware AI applications**.
<Callout intent = "tip">
If you are looking for an **enterprise-ready, fully private AI workspace**
check out [Zylon's website](https://zylon.ai) or [request a demo](https://cal.com/zylon/demo?source=pgpt-docs).
Crafted by the team behind PrivateGPT, Zylon is a best-in-class AI collaborative
workspace that can be easily deployed on-premise (data center, bare metal...) or in your private cloud (AWS, GCP, Azure...).
</Callout>
The API follows and extends the OpenAI API standard, and supports both normal and streaming responses.
That means that, if you can use OpenAI API in one of your tools, you can use your own PrivateGPT API instead,
with no code changes, **and for free** if you are running PrivateGPT in a `local` setup.
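For example, here is a minimal sketch that points the official `openai` Python client at a local PrivateGPT instance; the default port `8001` is assumed, and the `model` and `api_key` values are placeholders that a local setup does not check.
```python
# Sketch only: points the standard OpenAI client at a local PrivateGPT server.
# The api_key and model values are placeholders for a local setup.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8001/v1", api_key="not-needed")
response = client.chat.completions.create(
    model="private-gpt",  # placeholder; the local server uses its configured LLM
    messages=[{"role": "user", "content": "Hello, PrivateGPT!"}],
)
print(response.choices[0].message.content)
```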
Get started by understanding the [Main Concepts and Installation](/installation) and then dive into the [API Reference](/api-reference).
## Frequently Visited Resources
<Cards>
<Card
title="Main Concepts"
icon="fa-solid fa-lines-leaning"
href="/installation"
/>
<Card
title="API Reference"
icon="fa-solid fa-code"
href="/api-reference"
/>
<Card
title="Twitter"
icon="fa-brands fa-twitter"
href="https://twitter.com/PrivateGPT_AI"
/>
<Card
title="Discord Server"
icon="fa-brands fa-discord"
href="https://discord.gg/bK6mRVpErU"
/>
</Cards>
<br />

View File

@@ -0,0 +1,105 @@
This guide provides a quick start for running different profiles of PrivateGPT using Docker Compose.
The profiles cater to various environments, including Ollama setups (CPU, CUDA, macOS) and a fully local setup.
By default, Docker Compose will download pre-built images from a remote registry when starting the services. However, you have the option to build the images locally if needed. Details on building the Docker images locally are provided at the end of this guide.
If you want to run PrivateGPT locally without Docker, refer to the [Local Installation Guide](/installation).
## Prerequisites
- **Docker and Docker Compose:** Ensure both are installed on your system.
[Installation Guide for Docker](https://docs.docker.com/get-docker/), [Installation Guide for Docker Compose](https://docs.docker.com/compose/install/).
- **Clone PrivateGPT Repository:** Clone the PrivateGPT repository to your machine and navigate to the directory:
```sh
git clone https://github.com/zylon-ai/private-gpt.git
cd private-gpt
```
## Setups
### Ollama Setups (Recommended)
#### 1. Default/Ollama CPU
**Description:**
This profile runs the Ollama service using CPU resources. It is the standard configuration for running Ollama-based Private-GPT services without GPU acceleration.
**Run:**
To start the services using pre-built images, run:
```sh
docker-compose up
```
or with a specific profile:
```sh
docker-compose --profile ollama-cpu up
```
#### 2. Ollama Nvidia CUDA
**Description:**
This profile leverages GPU acceleration with CUDA support, suitable for computationally intensive tasks that benefit from GPU resources.
**Requirements:**
Ensure that your system has compatible GPU hardware and the necessary NVIDIA drivers installed. The installation process is detailed [here](https://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/index.html).
**Run:**
To start the services with CUDA support using pre-built images, run:
```sh
docker-compose --profile ollama-cuda up
```
#### 3. Ollama External API
**Description:**
This profile is designed for running PrivateGPT using an Ollama instance installed on the host machine. This setup is particularly useful for macOS users, as Docker does not yet support Metal GPU acceleration.
**Requirements:**
Install Ollama on your machine by following the instructions at [ollama.ai](https://ollama.ai/).
**Run:**
To start the Ollama service, use:
```sh
OLLAMA_HOST=0.0.0.0 ollama serve
```
To start the services with the host configuration using pre-built images, run:
```sh
docker-compose --profile ollama-api up
```
### Fully Local Setups
#### 1. LlamaCPP CPU
**Description:**
This profile runs the Private-GPT services locally using `llama-cpp` and Hugging Face models.
**Requirements:**
A **Hugging Face Token (HF_TOKEN)** is required for accessing Hugging Face models. Obtain your token following [this guide](/installation/getting-started/troubleshooting#downloading-gated-and-private-models).
**Run:**
Start the services with your Hugging Face token using pre-built images:
```sh
HF_TOKEN=<your_hf_token> docker-compose --profile llamacpp-cpu up
```
Replace `<your_hf_token>` with your actual Hugging Face token.
## Building Locally
If you prefer to build Docker images locally, which is useful when making changes to the codebase or the Dockerfiles, follow these steps:
### Building the Images
To build the Docker images locally, navigate to the cloned repository directory and run:
```sh
docker-compose build
```
This command compiles the necessary Docker images based on the current codebase and Dockerfile configurations.
### Forcing a Rebuild with --build
If you have made changes and need to ensure these changes are reflected in the Docker images, you can force a rebuild before starting the services:
```sh
docker-compose up --build
```
or with a specific profile:
```sh
docker-compose --profile <profile_name> up --build
```
Replace `<profile_name>` with the desired profile.

View File

@@ -0,0 +1,23 @@
# Recipes
Recipes are predefined use cases that help users solve very specific tasks using PrivateGPT.
They provide a streamlined approach to achieve common goals with the platform, offering both a starting point and inspiration for further exploration.
The main goal of Recipes is to empower the community to create and share solutions, expanding the capabilities of PrivateGPT.
## How to Create a New Recipe
1. **Identify the Task**: Define a specific task or problem that the Recipe will address.
2. **Develop the Solution**: Create a clear and concise guide, including any necessary code snippets or configurations.
3. **Submit a PR**: Fork the PrivateGPT repository, add your Recipe to the appropriate section, and submit a PR for review.
We encourage you to be creative and think outside the box! Your contributions help shape the future of PrivateGPT.
## Available Recipes
<Cards>
<Card
title="Summarize"
icon="fa-solid fa-file-alt"
href="/recipes/general-use-cases/summarize"
/>
</Cards>

View File

@@ -0,0 +1,20 @@
The Summarize Recipe provides a method to extract concise summaries from ingested documents or texts using PrivateGPT.
This tool is particularly useful for quickly understanding large volumes of information by distilling key points and main ideas.
## Use Case
The primary use case for the `Summarize` tool is to automate the summarization of lengthy documents,
making it easier for users to grasp the essential information without reading through entire texts.
This can be applied in various scenarios, such as summarizing research papers, news articles, or business reports.
## Key Features
1. **Ingestion-compatible**: The user provides the text to be summarized. The text can be directly inputted or retrieved from ingested documents within the system.
2. **Customization**: The summary generation can be influenced by providing specific `instructions` or a `prompt`. These inputs guide the model on how to frame the summary, allowing for customization according to user needs.
3. **Streaming Support**: The tool supports streaming, allowing for real-time summary generation, which can be particularly useful for handling large texts or providing immediate feedback (see the sketch after this list).
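As an illustration only, the request below assumes a local server on port `8001` and a `/v1/summarize` endpoint accepting `text`, `instructions` and `stream` fields; these names are assumptions based on the feature description above, so check the [API Reference](/api-reference) for the authoritative schema.
```python
# Sketch only: endpoint path and body fields are assumptions based on the
# feature description above; consult the API Reference for the exact schema.
import requests

response = requests.post(
    "http://localhost:8001/v1/summarize",
    json={
        "text": "PrivateGPT lets you ask questions about your documents offline...",
        "instructions": "Summarize in two sentences.",
        "stream": False,
    },
)
print(response.json())
```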
## Contributing
If you have ideas for improving the Summarize recipe or want to add new features, feel free to contribute!
You can submit your enhancements via a pull request on our [GitHub repository](https://github.com/zylon-ai/private-gpt).

View File

@@ -0,0 +1,21 @@
This page aims to present different user interface (UI) alternatives for integrating and using PrivateGPT. These alternatives range from demo applications to fully customizable UI setups that can be adapted to your specific needs.
**Do you have any working demo project using PrivateGPT?**
Please open a PR to add it to the list, and join our Discord to tell us about it!
<Callout intent = "note">
WIP: This page provides an overview of one of the UI alternatives available for PrivateGPT. More alternatives will be added to this page as they become available.
</Callout>
## [PrivateGPT SDK Demo App](https://github.com/frgarciames/privategpt-react)
The PrivateGPT SDK demo app is a robust starting point for developers looking to integrate and customize PrivateGPT in their applications. Leveraging modern technologies like Tailwind, shadcn/ui, and Biomejs, it provides a smooth development experience and a highly customizable user interface. Refer to the [repository](https://github.com/frgarciames/privategpt-react) for more details and to get started.
**Tech Stack:**
- **Tailwind:** A utility-first CSS framework for rapid UI development.
- **shadcn/ui:** A set of high-quality, customizable UI components.
- **PrivateGPT Web SDK:** The core SDK for interacting with PrivateGPT.
- **Biomejs formatter/linter:** A tool for maintaining code quality and consistency.

View File

@@ -0,0 +1,71 @@
## Gradio UI user manual
Gradio UI is a ready-to-use way of testing most of the PrivateGPT API functionalities.
![Gradio PrivateGPT](https://github.com/zylon-ai/private-gpt/raw/main/fern/docs/assets/ui.png?raw=true)
<Callout intent = "warning">
A working **Gradio UI client** is provided to test the API, together with a set of useful tools such as a bulk
model download script, an ingestion script, a documents folder watch, etc. Please refer to the [UI alternatives](/manual/user-interface/alternatives) page for more UI alternatives.
</Callout>
### Execution Modes
It has 3 modes of execution (you can select them in the top-left); the underlying API calls are sketched after the list:
* Query Docs: uses the context from the
ingested documents to answer the questions posted in the chat. It also takes
into account previous chat messages as context.
* Makes use of `/chat/completions` API with `use_context=true` and no
`context_filter`.
* Search in Docs: fast search that returns the 4 most related text
chunks, together with their source document and page.
* Makes use of `/chunks` API with no `context_filter`, `limit=4` and
`prev_next_chunks=0`.
* LLM Chat: simple, non-contextual chat with the LLM. The ingested documents won't
be taken into account, only the previous messages.
* Makes use of `/chat/completions` API with `use_context=false`.
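For reference, here is a rough sketch of the raw API calls behind the first two modes; the endpoint paths and the default port `8001` are assumptions about a local setup, so check the API reference for the exact schemas.
```python
# Sketch only: illustrates the calls described above against a local server.
import requests

BASE = "http://localhost:8001/v1"

# "Query Docs": chat completion grounded on the ingested documents
chat = requests.post(
    f"{BASE}/chat/completions",
    json={
        "messages": [{"role": "user", "content": "What are the key findings?"}],
        "use_context": True,
    },
)
print(chat.json()["choices"][0]["message"]["content"])

# "Search in Docs": retrieve the 4 most relevant chunks
chunks = requests.post(
    f"{BASE}/chunks",
    json={"text": "key findings", "limit": 4, "prev_next_chunks": 0},
)
print(len(chunks.json()["data"]))
```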
### Document Ingestion
Ingest documents by using the `Upload a File` button. You can check the progress of
the ingestion in the console logs of the server.
The list of ingested files is shown below the button.
If you want to delete the ingested documents, refer to the *Reset Local documents
database* section in the documentation.
### Chat
Normal chat interface, self-explanatory ;)
#### System Prompt
You can view and change the system prompt being passed to the LLM by clicking "Additional Inputs"
in the chat interface. The system prompt is also logged on the server.
By default, the `Query Docs` mode uses the setting value `ui.default_query_system_prompt`.
The `LLM Chat` mode attempts to use the optional settings value `ui.default_chat_system_prompt`.
If no system prompt is entered, the UI will display the default system prompt being used
for the active mode.
##### System Prompt Examples:
The system prompt can effectively give your chatbot specialized roles and produce results tailored to the prompt
you have given the model. Examples of system prompts can be found
[here](https://www.w3schools.com/gen_ai/chatgpt-3-5/chatgpt-3-5_roles.php).
Some interesting examples to try include:
* You are -X-. You have all the knowledge and personality of -X-. Answer as if you were -X- using
their manner of speaking and vocabulary.
* Example: You are Shakespeare. You have all the knowledge and personality of Shakespeare.
Answer as if you were Shakespeare using their manner of speaking and vocabulary.
* You are an expert (at) -role-. Answer all questions using your expertise on -specific domain topic-.
* Example: You are an expert software engineer. Answer all questions using your expertise on Python.
* You are a -role- bot, respond with -response criteria needed-. If no -response criteria- is needed,
respond with -alternate response-.
* Example: You are a grammar checking bot, respond with any grammatical corrections needed. If no corrections
are needed, respond with "verified".

4
fern/fern.config.json Normal file
View File

@@ -0,0 +1,4 @@
{
"organization": "privategpt",
"version": "0.31.17"
}

8
fern/generators.yml Normal file
View File

@@ -0,0 +1,8 @@
groups:
public:
generators:
- name: fernapi/fern-python-sdk
version: 0.6.2
output:
location: local-file-system
path: ../../pgpt-sdk/python

1270
fern/openapi/openapi.json Normal file

File diff suppressed because it is too large Load Diff

185
ingest.py
View File

@@ -1,185 +0,0 @@
#!/usr/bin/env python3
import os
import glob
from typing import List
from dotenv import load_dotenv
from multiprocessing import Pool
from tqdm import tqdm
from langchain.document_loaders import (
CSVLoader,
EverNoteLoader,
PyMuPDFLoader,
TextLoader,
UnstructuredEmailLoader,
UnstructuredEPubLoader,
UnstructuredHTMLLoader,
UnstructuredMarkdownLoader,
UnstructuredODTLoader,
UnstructuredPowerPointLoader,
UnstructuredWordDocumentLoader,
)
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.docstore.document import Document
if not load_dotenv():
print("Could not load .env file or it is empty. Please check if it exists and is readable.")
exit(1)
from constants import CHROMA_SETTINGS
import chromadb
from chromadb.api.segment import API
# Load environment variables
persist_directory = os.environ.get('PERSIST_DIRECTORY')
source_directory = os.environ.get('SOURCE_DIRECTORY', 'source_documents')
embeddings_model_name = os.environ.get('EMBEDDINGS_MODEL_NAME')
chunk_size = 500
chunk_overlap = 50
# Custom document loaders
class MyElmLoader(UnstructuredEmailLoader):
"""Wrapper to fallback to text/plain when default does not work"""
def load(self) -> List[Document]:
"""Wrapper adding fallback for elm without html"""
try:
try:
doc = UnstructuredEmailLoader.load(self)
except ValueError as e:
if 'text/html content not found in email' in str(e):
# Try plain text
self.unstructured_kwargs["content_source"]="text/plain"
doc = UnstructuredEmailLoader.load(self)
else:
raise
except Exception as e:
# Add file_path to exception message
raise type(e)(f"{self.file_path}: {e}") from e
return doc
# Map file extensions to document loaders and their arguments
LOADER_MAPPING = {
".csv": (CSVLoader, {}),
# ".docx": (Docx2txtLoader, {}),
".doc": (UnstructuredWordDocumentLoader, {}),
".docx": (UnstructuredWordDocumentLoader, {}),
".enex": (EverNoteLoader, {}),
".eml": (MyElmLoader, {}),
".epub": (UnstructuredEPubLoader, {}),
".html": (UnstructuredHTMLLoader, {}),
".md": (UnstructuredMarkdownLoader, {}),
".odt": (UnstructuredODTLoader, {}),
".pdf": (PyMuPDFLoader, {}),
".ppt": (UnstructuredPowerPointLoader, {}),
".pptx": (UnstructuredPowerPointLoader, {}),
".txt": (TextLoader, {"encoding": "utf8"}),
# Add more mappings for other file extensions and loaders as needed
}
def load_single_document(file_path: str) -> List[Document]:
ext = "." + file_path.rsplit(".", 1)[-1].lower()
if ext in LOADER_MAPPING:
loader_class, loader_args = LOADER_MAPPING[ext]
loader = loader_class(file_path, **loader_args)
return loader.load()
raise ValueError(f"Unsupported file extension '{ext}'")
def load_documents(source_dir: str, ignored_files: List[str] = []) -> List[Document]:
"""
Loads all documents from the source documents directory, ignoring specified files
"""
all_files = []
for ext in LOADER_MAPPING:
all_files.extend(
glob.glob(os.path.join(source_dir, f"**/*{ext.lower()}"), recursive=True)
)
all_files.extend(
glob.glob(os.path.join(source_dir, f"**/*{ext.upper()}"), recursive=True)
)
filtered_files = [file_path for file_path in all_files if file_path not in ignored_files]
with Pool(processes=os.cpu_count()) as pool:
results = []
with tqdm(total=len(filtered_files), desc='Loading new documents', ncols=80) as pbar:
for i, docs in enumerate(pool.imap_unordered(load_single_document, filtered_files)):
results.extend(docs)
pbar.update()
return results
def process_documents(ignored_files: List[str] = []) -> List[Document]:
"""
Load documents and split in chunks
"""
print(f"Loading documents from {source_directory}")
documents = load_documents(source_directory, ignored_files)
if not documents:
print("No new documents to load")
exit(0)
print(f"Loaded {len(documents)} new documents from {source_directory}")
text_splitter = RecursiveCharacterTextSplitter(chunk_size=chunk_size, chunk_overlap=chunk_overlap)
documents = text_splitter.split_documents(documents)
print(f"Split into {len(documents)} chunks of text (max. {chunk_size} tokens each)")
return documents
def batch_chromadb_insertions(chroma_client: API, documents: List[Document]) -> List[Document]:
"""
Split the total documents to be inserted into batches of documents that the local chroma client can process
"""
# Get max batch size.
max_batch_size = chroma_client.max_batch_size
for i in range(0, len(documents), max_batch_size):
yield documents[i:i + max_batch_size]
def does_vectorstore_exist(persist_directory: str, embeddings: HuggingFaceEmbeddings) -> bool:
"""
Checks if vectorstore exists
"""
db = Chroma(persist_directory=persist_directory, embedding_function=embeddings)
if not db.get()['documents']:
return False
return True
def main():
# Create embeddings
embeddings = HuggingFaceEmbeddings(model_name=embeddings_model_name)
# Chroma client
chroma_client = chromadb.PersistentClient(settings=CHROMA_SETTINGS , path=persist_directory)
if does_vectorstore_exist(persist_directory, embeddings):
# Update and store locally vectorstore
print(f"Appending to existing vectorstore at {persist_directory}")
db = Chroma(persist_directory=persist_directory, embedding_function=embeddings, client_settings=CHROMA_SETTINGS, client=chroma_client)
collection = db.get()
documents = process_documents([metadata['source'] for metadata in collection['metadatas']])
print(f"Creating embeddings. May take some minutes...")
for batched_chromadb_insertion in batch_chromadb_insertions(chroma_client, documents):
db.add_documents(batched_chromadb_insertion)
else:
# Create and store locally vectorstore
print("Creating new vectorstore")
documents = process_documents()
print(f"Creating embeddings. May take some minutes...")
# Create the db with the first batch of documents to insert
batched_chromadb_insertions = batch_chromadb_insertions(chroma_client, documents)
first_insertion = next(batched_chromadb_insertions)
db = Chroma.from_documents(first_insertion, embeddings, persist_directory=persist_directory, client_settings=CHROMA_SETTINGS, client=chroma_client)
# Add the rest of batches of documents
for batched_chromadb_insertion in batched_chromadb_insertions:
db.add_documents(batched_chromadb_insertion)
print(f"Ingestion complete! You can now run privateGPT.py to query your documents")
if __name__ == "__main__":
main()

2
local_data/.gitignore vendored Normal file
View File

@@ -0,0 +1,2 @@
*
!.gitignore

2
models/.gitignore vendored Normal file
View File

@@ -0,0 +1,2 @@
*
!.gitignore

7765
poetry.lock generated

File diff suppressed because it is too large Load Diff

View File

@@ -1,87 +0,0 @@
#!/usr/bin/env python3
from dotenv import load_dotenv
from langchain.chains import RetrievalQA
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.vectorstores import Chroma
from langchain.llms import GPT4All, LlamaCpp
import chromadb
import os
import argparse
import time
if not load_dotenv():
print("Could not load .env file or it is empty. Please check if it exists and is readable.")
exit(1)
embeddings_model_name = os.environ.get("EMBEDDINGS_MODEL_NAME")
persist_directory = os.environ.get('PERSIST_DIRECTORY')
model_type = os.environ.get('MODEL_TYPE')
model_path = os.environ.get('MODEL_PATH')
model_n_ctx = os.environ.get('MODEL_N_CTX')
model_n_batch = int(os.environ.get('MODEL_N_BATCH',8))
target_source_chunks = int(os.environ.get('TARGET_SOURCE_CHUNKS',4))
from constants import CHROMA_SETTINGS
def main():
# Parse the command line arguments
args = parse_arguments()
embeddings = HuggingFaceEmbeddings(model_name=embeddings_model_name)
chroma_client = chromadb.PersistentClient(settings=CHROMA_SETTINGS , path=persist_directory)
db = Chroma(persist_directory=persist_directory, embedding_function=embeddings, client_settings=CHROMA_SETTINGS, client=chroma_client)
retriever = db.as_retriever(search_kwargs={"k": target_source_chunks})
# activate/deactivate the streaming StdOut callback for LLMs
callbacks = [] if args.mute_stream else [StreamingStdOutCallbackHandler()]
# Prepare the LLM
match model_type:
case "LlamaCpp":
llm = LlamaCpp(model_path=model_path, max_tokens=model_n_ctx, n_batch=model_n_batch, callbacks=callbacks, verbose=False)
case "GPT4All":
llm = GPT4All(model=model_path, max_tokens=model_n_ctx, backend='gptj', n_batch=model_n_batch, callbacks=callbacks, verbose=False)
case _default:
# raise exception if model_type is not supported
raise Exception(f"Model type {model_type} is not supported. Please choose one of the following: LlamaCpp, GPT4All")
qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=retriever, return_source_documents= not args.hide_source)
# Interactive questions and answers
while True:
query = input("\nEnter a query: ")
if query == "exit":
break
if query.strip() == "":
continue
# Get the answer from the chain
start = time.time()
res = qa(query)
answer, docs = res['result'], [] if args.hide_source else res['source_documents']
end = time.time()
# Print the result
print("\n\n> Question:")
print(query)
print(f"\n> Answer (took {round(end - start, 2)} s.):")
print(answer)
# Print the relevant sources used for the answer
for document in docs:
print("\n> " + document.metadata["source"] + ":")
print(document.page_content)
def parse_arguments():
parser = argparse.ArgumentParser(description='privateGPT: Ask questions to your documents without an internet connection, '
'using the power of LLMs.')
parser.add_argument("--hide-source", "-S", action='store_true',
help='Use this flag to disable printing of source documents used for answers.')
parser.add_argument("--mute-stream", "-M",
action='store_true',
help='Use this flag to disable the streaming StdOut callback for LLMs.')
return parser.parse_args()
if __name__ == "__main__":
main()

27
private_gpt/__init__.py Normal file
View File

@@ -0,0 +1,27 @@
"""private-gpt."""
import logging
import os
# Set to 'DEBUG' to have extensive logging turned on, even for libraries
ROOT_LOG_LEVEL = "INFO"
PRETTY_LOG_FORMAT = (
"%(asctime)s.%(msecs)03d [%(levelname)-8s] %(name)+25s - %(message)s"
)
logging.basicConfig(level=ROOT_LOG_LEVEL, format=PRETTY_LOG_FORMAT, datefmt="%H:%M:%S")
logging.captureWarnings(True)
# Disable gradio analytics
# This is done this way because gradio does not solely rely on what values are
# passed to gr.Blocks(enable_analytics=...) but also on the environment
# variable GRADIO_ANALYTICS_ENABLED. `gradio.strings` actually reads this env
# directly, so to fully disable gradio analytics we need to set this env var.
os.environ["GRADIO_ANALYTICS_ENABLED"] = "False"
# Disable chromaDB telemetry
# It is already disabled, see PR#1144
# os.environ["ANONYMIZED_TELEMETRY"] = "False"
# adding tiktoken cache path within repo to be able to run in offline environment.
os.environ["TIKTOKEN_CACHE_DIR"] = "tiktoken_cache"

11
private_gpt/__main__.py Normal file
View File

@@ -0,0 +1,11 @@
# start a fastapi server with uvicorn
import uvicorn
from private_gpt.main import app
from private_gpt.settings.settings import settings
# Set log_config=None so that the uvicorn logging configuration is not used,
# and ours is used instead. For reference, see below:
# https://github.com/tiangolo/fastapi/discussions/7457#discussioncomment-5141108
uvicorn.run(app, host="0.0.0.0", port=settings().server.port, log_config=None)

View File

View File

@@ -0,0 +1,82 @@
# mypy: ignore-errors
import json
from typing import Any
import boto3
from llama_index.core.base.embeddings.base import BaseEmbedding
from pydantic import Field, PrivateAttr
class SagemakerEmbedding(BaseEmbedding):
"""Sagemaker Embedding Endpoint.
To use, you must supply the endpoint name from your deployed
Sagemaker embedding model & the region where it is deployed.
To authenticate, the AWS client uses the following methods to
automatically load credentials:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
If a specific credential profile should be used, you must pass
the name of the profile from the ~/.aws/credentials file that is to be used.
Make sure the credentials / roles used have the required policies to
access the Sagemaker endpoint.
See: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html
"""
endpoint_name: str = Field(description="")
_boto_client: Any = boto3.client(
"sagemaker-runtime",
) # TODO make it an optional field
_async_not_implemented_warned: bool = PrivateAttr(default=False)
@classmethod
def class_name(cls) -> str:
return "SagemakerEmbedding"
def _async_not_implemented_warn_once(self) -> None:
if not self._async_not_implemented_warned:
print("Async embedding not available, falling back to sync method.")
self._async_not_implemented_warned = True
def _embed(self, sentences: list[str]) -> list[list[float]]:
request_params = {
"inputs": sentences,
}
resp = self._boto_client.invoke_endpoint(
EndpointName=self.endpoint_name,
Body=json.dumps(request_params),
ContentType="application/json",
)
response_body = resp["Body"]
response_str = response_body.read().decode("utf-8")
response_json = json.loads(response_str)
return response_json["vectors"]
def _get_query_embedding(self, query: str) -> list[float]:
"""Get query embedding."""
return self._embed([query])[0]
async def _aget_query_embedding(self, query: str) -> list[float]:
# Warn the user that sync is being used
self._async_not_implemented_warn_once()
return self._get_query_embedding(query)
async def _aget_text_embedding(self, text: str) -> list[float]:
# Warn the user that sync is being used
self._async_not_implemented_warn_once()
return self._get_text_embedding(text)
def _get_text_embedding(self, text: str) -> list[float]:
"""Get text embedding."""
return self._embed([text])[0]
def _get_text_embeddings(self, texts: list[str]) -> list[list[float]]:
"""Get text embeddings."""
return self._embed(texts)

View File

@@ -0,0 +1,150 @@
import logging
from injector import inject, singleton
from llama_index.core.embeddings import BaseEmbedding, MockEmbedding
from private_gpt.paths import models_cache_path
from private_gpt.settings.settings import Settings
logger = logging.getLogger(__name__)
@singleton
class EmbeddingComponent:
embedding_model: BaseEmbedding
@inject
def __init__(self, settings: Settings) -> None:
embedding_mode = settings.embedding.mode
logger.info("Initializing the embedding model in mode=%s", embedding_mode)
match embedding_mode:
case "huggingface":
try:
from llama_index.embeddings.huggingface import ( # type: ignore
HuggingFaceEmbedding,
)
except ImportError as e:
raise ImportError(
"Local dependencies not found, install with `poetry install --extras embeddings-huggingface`"
) from e
self.embedding_model = HuggingFaceEmbedding(
model_name=settings.huggingface.embedding_hf_model_name,
cache_folder=str(models_cache_path),
trust_remote_code=settings.huggingface.trust_remote_code,
)
case "sagemaker":
try:
from private_gpt.components.embedding.custom.sagemaker import (
SagemakerEmbedding,
)
except ImportError as e:
raise ImportError(
"Sagemaker dependencies not found, install with `poetry install --extras embeddings-sagemaker`"
) from e
self.embedding_model = SagemakerEmbedding(
endpoint_name=settings.sagemaker.embedding_endpoint_name,
)
case "openai":
try:
from llama_index.embeddings.openai import ( # type: ignore
OpenAIEmbedding,
)
except ImportError as e:
raise ImportError(
"OpenAI dependencies not found, install with `poetry install --extras embeddings-openai`"
) from e
api_base = (
settings.openai.embedding_api_base or settings.openai.api_base
)
api_key = settings.openai.embedding_api_key or settings.openai.api_key
model = settings.openai.embedding_model
self.embedding_model = OpenAIEmbedding(
api_base=api_base,
api_key=api_key,
model=model,
)
case "ollama":
try:
from llama_index.embeddings.ollama import ( # type: ignore
OllamaEmbedding,
)
from ollama import Client # type: ignore
except ImportError as e:
raise ImportError(
"Local dependencies not found, install with `poetry install --extras embeddings-ollama`"
) from e
ollama_settings = settings.ollama
# Calculate the embedding model. If no tag is provided, "latest" will be used.
model_name = (
ollama_settings.embedding_model + ":latest"
if ":" not in ollama_settings.embedding_model
else ollama_settings.embedding_model
)
self.embedding_model = OllamaEmbedding(
model_name=model_name,
base_url=ollama_settings.embedding_api_base,
)
if ollama_settings.autopull_models:
from private_gpt.utils.ollama import (
check_connection,
pull_model,
)
# TODO: Reuse llama-index client when llama-index is updated
client = Client(
host=ollama_settings.embedding_api_base,
timeout=ollama_settings.request_timeout,
)
if not check_connection(client):
raise ValueError(
f"Failed to connect to Ollama, "
f"check if Ollama server is running on {ollama_settings.api_base}"
)
pull_model(client, model_name)
case "azopenai":
try:
from llama_index.embeddings.azure_openai import ( # type: ignore
AzureOpenAIEmbedding,
)
except ImportError as e:
raise ImportError(
"Azure OpenAI dependencies not found, install with `poetry install --extras embeddings-azopenai`"
) from e
azopenai_settings = settings.azopenai
self.embedding_model = AzureOpenAIEmbedding(
model=azopenai_settings.embedding_model,
deployment_name=azopenai_settings.embedding_deployment_name,
api_key=azopenai_settings.api_key,
azure_endpoint=azopenai_settings.azure_endpoint,
api_version=azopenai_settings.api_version,
)
case "gemini":
try:
from llama_index.embeddings.gemini import ( # type: ignore
GeminiEmbedding,
)
except ImportError as e:
raise ImportError(
"Gemini dependencies not found, install with `poetry install --extras embeddings-gemini`"
) from e
self.embedding_model = GeminiEmbedding(
api_key=settings.gemini.api_key,
model_name=settings.gemini.embedding_model,
)
case "mock":
# Not a random number, is the dimensionality used by
# the default embedding model
self.embedding_model = MockEmbedding(384)

View File

@@ -0,0 +1,517 @@
import abc
import itertools
import logging
import multiprocessing
import multiprocessing.pool
import os
import threading
from pathlib import Path
from queue import Queue
from typing import Any
from llama_index.core.data_structs import IndexDict
from llama_index.core.embeddings.utils import EmbedType
from llama_index.core.indices import VectorStoreIndex, load_index_from_storage
from llama_index.core.indices.base import BaseIndex
from llama_index.core.ingestion import run_transformations
from llama_index.core.schema import BaseNode, Document, TransformComponent
from llama_index.core.storage import StorageContext
from private_gpt.components.ingest.ingest_helper import IngestionHelper
from private_gpt.paths import local_data_path
from private_gpt.settings.settings import Settings
from private_gpt.utils.eta import eta
logger = logging.getLogger(__name__)
class BaseIngestComponent(abc.ABC):
def __init__(
self,
storage_context: StorageContext,
embed_model: EmbedType,
transformations: list[TransformComponent],
*args: Any,
**kwargs: Any,
) -> None:
logger.debug("Initializing base ingest component type=%s", type(self).__name__)
self.storage_context = storage_context
self.embed_model = embed_model
self.transformations = transformations
@abc.abstractmethod
def ingest(self, file_name: str, file_data: Path) -> list[Document]:
pass
@abc.abstractmethod
def bulk_ingest(self, files: list[tuple[str, Path]]) -> list[Document]:
pass
@abc.abstractmethod
def delete(self, doc_id: str) -> None:
pass
class BaseIngestComponentWithIndex(BaseIngestComponent, abc.ABC):
def __init__(
self,
storage_context: StorageContext,
embed_model: EmbedType,
transformations: list[TransformComponent],
*args: Any,
**kwargs: Any,
) -> None:
super().__init__(storage_context, embed_model, transformations, *args, **kwargs)
self.show_progress = True
self._index_thread_lock = (
threading.Lock()
) # Thread lock! Not Multiprocessing lock
self._index = self._initialize_index()
def _initialize_index(self) -> BaseIndex[IndexDict]:
"""Initialize the index from the storage context."""
try:
# Load the index with store_nodes_override=True to be able to delete them
index = load_index_from_storage(
storage_context=self.storage_context,
store_nodes_override=True, # Force store nodes in index and document stores
show_progress=self.show_progress,
embed_model=self.embed_model,
transformations=self.transformations,
)
except ValueError:
# There are no index in the storage context, creating a new one
logger.info("Creating a new vector store index")
index = VectorStoreIndex.from_documents(
[],
storage_context=self.storage_context,
store_nodes_override=True, # Force store nodes in index and document stores
show_progress=self.show_progress,
embed_model=self.embed_model,
transformations=self.transformations,
)
index.storage_context.persist(persist_dir=local_data_path)
return index
def _save_index(self) -> None:
self._index.storage_context.persist(persist_dir=local_data_path)
def delete(self, doc_id: str) -> None:
with self._index_thread_lock:
# Delete the document from the index
self._index.delete_ref_doc(doc_id, delete_from_docstore=True)
# Save the index
self._save_index()
class SimpleIngestComponent(BaseIngestComponentWithIndex):
def __init__(
self,
storage_context: StorageContext,
embed_model: EmbedType,
transformations: list[TransformComponent],
*args: Any,
**kwargs: Any,
) -> None:
super().__init__(storage_context, embed_model, transformations, *args, **kwargs)
def ingest(self, file_name: str, file_data: Path) -> list[Document]:
logger.info("Ingesting file_name=%s", file_name)
documents = IngestionHelper.transform_file_into_documents(file_name, file_data)
logger.info(
"Transformed file=%s into count=%s documents", file_name, len(documents)
)
logger.debug("Saving the documents in the index and doc store")
return self._save_docs(documents)
def bulk_ingest(self, files: list[tuple[str, Path]]) -> list[Document]:
saved_documents = []
for file_name, file_data in files:
documents = IngestionHelper.transform_file_into_documents(
file_name, file_data
)
saved_documents.extend(self._save_docs(documents))
return saved_documents
def _save_docs(self, documents: list[Document]) -> list[Document]:
logger.debug("Transforming count=%s documents into nodes", len(documents))
with self._index_thread_lock:
for document in documents:
self._index.insert(document, show_progress=True)
logger.debug("Persisting the index and nodes")
# persist the index and nodes
self._save_index()
logger.debug("Persisted the index and nodes")
return documents
class BatchIngestComponent(BaseIngestComponentWithIndex):
"""Parallelize the file reading and parsing on multiple CPU core.
This also makes the embeddings to be computed in batches (on GPU or CPU).
"""
def __init__(
self,
storage_context: StorageContext,
embed_model: EmbedType,
transformations: list[TransformComponent],
count_workers: int,
*args: Any,
**kwargs: Any,
) -> None:
super().__init__(storage_context, embed_model, transformations, *args, **kwargs)
# To make efficient use of the CPU and GPU, the embedding model
# must be part of the transformations
assert (
len(self.transformations) >= 2
), "Embeddings must be in the transformations"
assert count_workers > 0, "count_workers must be > 0"
self.count_workers = count_workers
self._file_to_documents_work_pool = multiprocessing.Pool(
processes=self.count_workers
)
def ingest(self, file_name: str, file_data: Path) -> list[Document]:
logger.info("Ingesting file_name=%s", file_name)
documents = IngestionHelper.transform_file_into_documents(file_name, file_data)
logger.info(
"Transformed file=%s into count=%s documents", file_name, len(documents)
)
logger.debug("Saving the documents in the index and doc store")
return self._save_docs(documents)
def bulk_ingest(self, files: list[tuple[str, Path]]) -> list[Document]:
documents = list(
itertools.chain.from_iterable(
self._file_to_documents_work_pool.starmap(
IngestionHelper.transform_file_into_documents, files
)
)
)
logger.info(
"Transformed count=%s files into count=%s documents",
len(files),
len(documents),
)
return self._save_docs(documents)
def _save_docs(self, documents: list[Document]) -> list[Document]:
logger.debug("Transforming count=%s documents into nodes", len(documents))
nodes = run_transformations(
documents, # type: ignore[arg-type]
self.transformations,
show_progress=self.show_progress,
)
# Locking the index to avoid concurrent writes
with self._index_thread_lock:
logger.info("Inserting count=%s nodes in the index", len(nodes))
self._index.insert_nodes(nodes, show_progress=True)
for document in documents:
self._index.docstore.set_document_hash(
document.get_doc_id(), document.hash
)
logger.debug("Persisting the index and nodes")
# persist the index and nodes
self._save_index()
logger.debug("Persisted the index and nodes")
return documents
class ParallelizedIngestComponent(BaseIngestComponentWithIndex):
"""Parallelize the file ingestion (file reading, embeddings, and index insertion).
This uses the CPU and GPU in parallel (both running at the same time), and
reduces the memory pressure by not loading all the files in memory at the same time.
"""
def __init__(
self,
storage_context: StorageContext,
embed_model: EmbedType,
transformations: list[TransformComponent],
count_workers: int,
*args: Any,
**kwargs: Any,
) -> None:
super().__init__(storage_context, embed_model, transformations, *args, **kwargs)
# To make an efficient use of the CPU and GPU, the embeddings
# must be in the transformations (to be computed in batches)
assert (
len(self.transformations) >= 2
), "Embeddings must be in the transformations"
assert count_workers > 0, "count_workers must be > 0"
self.count_workers = count_workers
# We are doing our own multiprocessing
# To avoid colliding with huggingface's own multiprocessing, we disable it
os.environ["TOKENIZERS_PARALLELISM"] = "false"
self._ingest_work_pool = multiprocessing.pool.ThreadPool(
processes=self.count_workers
)
self._file_to_documents_work_pool = multiprocessing.Pool(
processes=self.count_workers
)
def ingest(self, file_name: str, file_data: Path) -> list[Document]:
logger.info("Ingesting file_name=%s", file_name)
# Running in a single (1) process to release the current
# thread, and take a dedicated CPU core for computation
documents = self._file_to_documents_work_pool.apply(
IngestionHelper.transform_file_into_documents, (file_name, file_data)
)
logger.info(
"Transformed file=%s into count=%s documents", file_name, len(documents)
)
logger.debug("Saving the documents in the index and doc store")
return self._save_docs(documents)
def bulk_ingest(self, files: list[tuple[str, Path]]) -> list[Document]:
# Lightweight threads, used for parallelize the
# underlying IO calls made in the ingestion
documents = list(
itertools.chain.from_iterable(
self._ingest_work_pool.starmap(self.ingest, files)
)
)
return documents
def _save_docs(self, documents: list[Document]) -> list[Document]:
logger.debug("Transforming count=%s documents into nodes", len(documents))
nodes = run_transformations(
documents, # type: ignore[arg-type]
self.transformations,
show_progress=self.show_progress,
)
# Locking the index to avoid concurrent writes
with self._index_thread_lock:
logger.info("Inserting count=%s nodes in the index", len(nodes))
self._index.insert_nodes(nodes, show_progress=True)
for document in documents:
self._index.docstore.set_document_hash(
document.get_doc_id(), document.hash
)
logger.debug("Persisting the index and nodes")
# persist the index and nodes
self._save_index()
logger.debug("Persisted the index and nodes")
return documents
def __del__(self) -> None:
# We need to do the appropriate cleanup of the multiprocessing pools
# when the object is deleted. Using the root logger to avoid
# the logger being deleted before the pool
logging.debug("Closing the ingest work pool")
self._ingest_work_pool.close()
self._ingest_work_pool.join()
self._ingest_work_pool.terminate()
logging.debug("Closing the file to documents work pool")
self._file_to_documents_work_pool.close()
self._file_to_documents_work_pool.join()
self._file_to_documents_work_pool.terminate()
class PipelineIngestComponent(BaseIngestComponentWithIndex):
"""Pipeline ingestion - keeping the embedding worker pool as busy as possible.
This class implements a threaded ingestion pipeline, which comprises two threads
and two queues. The primary thread is responsible for reading and parsing files
into documents. These documents are then placed into a queue, which is
distributed to a pool of worker processes for embedding computation. After
embedding, the documents are transferred to another queue where they are
accumulated until a threshold is reached. Upon reaching this threshold, the
accumulated documents are flushed to the document store, index, and vector
store.
Exception handling ensures robustness against erroneous files. However, in the
pipelined design, one error can lead to the discarding of multiple files. Any
discarded files will be reported.
"""
NODE_FLUSH_COUNT = 5000 # Save the index every NODE_FLUSH_COUNT nodes.
def __init__(
self,
storage_context: StorageContext,
embed_model: EmbedType,
transformations: list[TransformComponent],
count_workers: int,
*args: Any,
**kwargs: Any,
) -> None:
super().__init__(storage_context, embed_model, transformations, *args, **kwargs)
self.count_workers = count_workers
assert (
len(self.transformations) >= 2
), "Embeddings must be in the transformations"
assert count_workers > 0, "count_workers must be > 0"
# We are doing our own multiprocessing
# To avoid colliding with huggingface's own multiprocessing, we disable it
os.environ["TOKENIZERS_PARALLELISM"] = "false"
# doc_q stores parsed files as Document chunks.
# Using a shallow queue causes the filesystem parser to block
# when it reaches capacity. This ensures it doesn't outpace the
# computationally intensive embeddings phase, avoiding unnecessary
# memory consumption. The semaphore is used to bound the async worker
# embedding computations to cause the doc Q to fill and block.
self.doc_semaphore = multiprocessing.Semaphore(
self.count_workers
) # limit the doc queue to # items.
self.doc_q: Queue[tuple[str, str | None, list[Document] | None]] = Queue(20)
# node_q stores documents parsed into nodes (embeddings).
# Larger queue size so we don't block the embedding workers during a slow
# index update.
self.node_q: Queue[
tuple[str, str | None, list[Document] | None, list[BaseNode] | None]
] = Queue(40)
threading.Thread(target=self._doc_to_node, daemon=True).start()
threading.Thread(target=self._write_nodes, daemon=True).start()
def _doc_to_node(self) -> None:
# Parse documents into nodes
with multiprocessing.pool.ThreadPool(processes=self.count_workers) as pool:
while True:
try:
cmd, file_name, documents = self.doc_q.get(
block=True
) # Documents for a file
if cmd == "process":
# Push CPU/GPU embedding work to the worker pool
# Acquire semaphore to control access to worker pool
self.doc_semaphore.acquire()
pool.apply_async(
self._doc_to_node_worker, (file_name, documents)
)
elif cmd == "quit":
break
finally:
if cmd != "process":
self.doc_q.task_done() # unblock Q joins
def _doc_to_node_worker(self, file_name: str, documents: list[Document]) -> None:
# CPU/GPU intensive work in its own process
try:
nodes = run_transformations(
documents, # type: ignore[arg-type]
self.transformations,
show_progress=self.show_progress,
)
self.node_q.put(("process", file_name, documents, nodes))
finally:
self.doc_semaphore.release()
self.doc_q.task_done() # unblock Q joins
def _save_docs(
self, files: list[str], documents: list[Document], nodes: list[BaseNode]
) -> None:
try:
logger.info(
f"Saving {len(files)} files ({len(documents)} documents / {len(nodes)} nodes)"
)
self._index.insert_nodes(nodes)
for document in documents:
self._index.docstore.set_document_hash(
document.get_doc_id(), document.hash
)
self._save_index()
except Exception:
# Tell the user so they can investigate these files
logger.exception(f"Processing files {files}")
finally:
# Clearing work, even on exception, maintains a clean state.
nodes.clear()
documents.clear()
files.clear()
def _write_nodes(self) -> None:
# Save nodes to index. I/O intensive.
node_stack: list[BaseNode] = []
doc_stack: list[Document] = []
file_stack: list[str] = []
while True:
try:
cmd, file_name, documents, nodes = self.node_q.get(block=True)
if cmd in ("flush", "quit"):
if file_stack:
self._save_docs(file_stack, doc_stack, node_stack)
if cmd == "quit":
break
elif cmd == "process":
node_stack.extend(nodes) # type: ignore[arg-type]
doc_stack.extend(documents) # type: ignore[arg-type]
file_stack.append(file_name) # type: ignore[arg-type]
# Constant saving is heavy on I/O - accumulate to a threshold
if len(node_stack) >= self.NODE_FLUSH_COUNT:
self._save_docs(file_stack, doc_stack, node_stack)
finally:
self.node_q.task_done()
def _flush(self) -> None:
self.doc_q.put(("flush", None, None))
self.doc_q.join()
self.node_q.put(("flush", None, None, None))
self.node_q.join()
def ingest(self, file_name: str, file_data: Path) -> list[Document]:
documents = IngestionHelper.transform_file_into_documents(file_name, file_data)
self.doc_q.put(("process", file_name, documents))
self._flush()
return documents
def bulk_ingest(self, files: list[tuple[str, Path]]) -> list[Document]:
docs = []
for file_name, file_data in eta(files):
try:
documents = IngestionHelper.transform_file_into_documents(
file_name, file_data
)
self.doc_q.put(("process", file_name, documents))
docs.extend(documents)
except Exception:
logger.exception(f"Skipping {file_data.name}")
self._flush()
return docs
def get_ingestion_component(
storage_context: StorageContext,
embed_model: EmbedType,
transformations: list[TransformComponent],
settings: Settings,
) -> BaseIngestComponent:
"""Get the ingestion component for the given configuration."""
ingest_mode = settings.embedding.ingest_mode
if ingest_mode == "batch":
return BatchIngestComponent(
storage_context=storage_context,
embed_model=embed_model,
transformations=transformations,
count_workers=settings.embedding.count_workers,
)
elif ingest_mode == "parallel":
return ParallelizedIngestComponent(
storage_context=storage_context,
embed_model=embed_model,
transformations=transformations,
count_workers=settings.embedding.count_workers,
)
elif ingest_mode == "pipeline":
return PipelineIngestComponent(
storage_context=storage_context,
embed_model=embed_model,
transformations=transformations,
count_workers=settings.embedding.count_workers,
)
else:
return SimpleIngestComponent(
storage_context=storage_context,
embed_model=embed_model,
transformations=transformations,
)

View File

@@ -0,0 +1,105 @@
import logging
from pathlib import Path
from llama_index.core.readers import StringIterableReader
from llama_index.core.readers.base import BaseReader
from llama_index.core.readers.json import JSONReader
from llama_index.core.schema import Document
logger = logging.getLogger(__name__)
# Inspired by the `llama_index.core.readers.file.base` module
def _try_loading_included_file_formats() -> dict[str, type[BaseReader]]:
try:
from llama_index.readers.file.docs import ( # type: ignore
DocxReader,
HWPReader,
PDFReader,
)
from llama_index.readers.file.epub import EpubReader # type: ignore
from llama_index.readers.file.image import ImageReader # type: ignore
from llama_index.readers.file.ipynb import IPYNBReader # type: ignore
from llama_index.readers.file.markdown import MarkdownReader # type: ignore
from llama_index.readers.file.mbox import MboxReader # type: ignore
from llama_index.readers.file.slides import PptxReader # type: ignore
from llama_index.readers.file.tabular import PandasCSVReader # type: ignore
from llama_index.readers.file.video_audio import ( # type: ignore
VideoAudioReader,
)
except ImportError as e:
raise ImportError("`llama-index-readers-file` package not found") from e
default_file_reader_cls: dict[str, type[BaseReader]] = {
".hwp": HWPReader,
".pdf": PDFReader,
".docx": DocxReader,
".pptx": PptxReader,
".ppt": PptxReader,
".pptm": PptxReader,
".jpg": ImageReader,
".png": ImageReader,
".jpeg": ImageReader,
".mp3": VideoAudioReader,
".mp4": VideoAudioReader,
".csv": PandasCSVReader,
".epub": EpubReader,
".md": MarkdownReader,
".mbox": MboxReader,
".ipynb": IPYNBReader,
}
return default_file_reader_cls
# Patching the default file reader to support other file types
FILE_READER_CLS = _try_loading_included_file_formats()
FILE_READER_CLS.update(
{
".json": JSONReader,
}
)
class IngestionHelper:
"""Helper class to transform a file into a list of documents.
This class should be used to transform a file into a list of documents.
These methods are thread-safe (and multiprocessing-safe).
"""
@staticmethod
def transform_file_into_documents(
file_name: str, file_data: Path
) -> list[Document]:
documents = IngestionHelper._load_file_to_documents(file_name, file_data)
for document in documents:
document.metadata["file_name"] = file_name
IngestionHelper._exclude_metadata(documents)
return documents
@staticmethod
def _load_file_to_documents(file_name: str, file_data: Path) -> list[Document]:
logger.debug("Transforming file_name=%s into documents", file_name)
extension = Path(file_name).suffix
reader_cls = FILE_READER_CLS.get(extension)
if reader_cls is None:
logger.debug(
"No reader found for extension=%s, using default string reader",
extension,
)
# Read as a plain text
string_reader = StringIterableReader()
return string_reader.load_data([file_data.read_text()])
logger.debug("Specific reader found for extension=%s", extension)
return reader_cls().load_data(file_data)
@staticmethod
def _exclude_metadata(documents: list[Document]) -> None:
logger.debug("Excluding metadata from count=%s documents", len(documents))
for document in documents:
document.metadata["doc_id"] = document.doc_id
# We don't want the Embeddings search to receive this metadata
document.excluded_embed_metadata_keys = ["doc_id"]
# We don't want the LLM to receive these metadata in the context
document.excluded_llm_metadata_keys = ["file_name", "doc_id", "page_label"]
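Because `FILE_READER_CLS` is a plain dict keyed by file extension, it can be extended the same way the `.json` entry is patched in above. A minimal sketch, using a hypothetical `MyTsvReader` and assuming a `report.tsv` file exists on disk:

```python
from pathlib import Path

from llama_index.core.readers.base import BaseReader
from llama_index.core.schema import Document


class MyTsvReader(BaseReader):
    """Hypothetical reader that treats a .tsv file as one plain-text document."""

    def load_data(self, file: Path, **kwargs) -> list[Document]:
        return [Document(text=file.read_text())]


# Register the hypothetical reader, then let IngestionHelper pick it up by extension.
FILE_READER_CLS[".tsv"] = MyTsvReader
docs = IngestionHelper.transform_file_into_documents("report.tsv", Path("report.tsv"))
```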

View File

@@ -0,0 +1 @@
"""LLM implementations."""

View File

@@ -0,0 +1,276 @@
# mypy: ignore-errors
from __future__ import annotations
import io
import json
import logging
from typing import TYPE_CHECKING, Any
import boto3 # type: ignore
from llama_index.core.base.llms.generic_utils import (
completion_response_to_chat_response,
stream_completion_response_to_chat_response,
)
from llama_index.core.bridge.pydantic import Field
from llama_index.core.llms import (
CompletionResponse,
CustomLLM,
LLMMetadata,
)
from llama_index.core.llms.callbacks import (
llm_chat_callback,
llm_completion_callback,
)
if TYPE_CHECKING:
from collections.abc import Sequence
from llama_index.callbacks import CallbackManager
from llama_index.llms import (
ChatMessage,
ChatResponse,
ChatResponseGen,
CompletionResponseGen,
)
logger = logging.getLogger(__name__)
class LineIterator:
r"""A helper class for parsing the byte stream input from TGI container.
The output of the model will be in the following format:
```
b'data:{"token": {"text": " a"}}\n\n'
b'data:{"token": {"text": " challenging"}}\n\n'
b'data:{"token": {"text": " problem"
b'}}'
...
```
While usually each PayloadPart event from the event stream will contain a byte array
with a full json, this is not guaranteed and some of the json objects may be split
across PayloadPart events. For example:
```
{'PayloadPart': {'Bytes': b'{"outputs": '}}
{'PayloadPart': {'Bytes': b'[" problem"]}\n'}}
```
This class accounts for this by buffering the incoming bytes and only yielding
complete lines (ending with a '\n' character). It keeps track of the last read
position so that previously returned bytes are not exposed again, and any pending
line that does not end with a '\n' is kept in the buffer until the rest of it
arrives.
"""
def __init__(self, stream: Any) -> None:
"""Line iterator initializer."""
self.byte_iterator = iter(stream)
self.buffer = io.BytesIO()
self.read_pos = 0
def __iter__(self) -> Any:
"""Self iterator."""
return self
def __next__(self) -> Any:
"""Next element from iterator."""
while True:
self.buffer.seek(self.read_pos)
line = self.buffer.readline()
if line and line[-1] == ord("\n"):
self.read_pos += len(line)
return line[:-1]
try:
chunk = next(self.byte_iterator)
except StopIteration:
if self.read_pos < self.buffer.getbuffer().nbytes:
continue
raise
if "PayloadPart" not in chunk:
logger.warning("Unknown event type=%s", chunk)
continue
self.buffer.seek(0, io.SEEK_END)
self.buffer.write(chunk["PayloadPart"]["Bytes"])
class SagemakerLLM(CustomLLM):
"""Sagemaker Inference Endpoint models.
To use, you must supply the endpoint name from your deployed
Sagemaker model & the region where it is deployed.
To authenticate, the AWS client uses the following methods to
automatically load credentials:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
If a specific credential profile should be used, you must pass
the name of the profile from the ~/.aws/credentials file that is to be used.
Make sure the credentials / roles used have the required policies to
access the Sagemaker endpoint.
See: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html
"""
endpoint_name: str = Field(description="The name of the deployed Sagemaker inference endpoint.")
temperature: float = Field(description="The temperature to use for sampling.")
max_new_tokens: int = Field(description="The maximum number of tokens to generate.")
context_window: int = Field(
description="The maximum number of context tokens for the model."
)
messages_to_prompt: Any = Field(
description="The function to convert messages to a prompt.", exclude=True
)
completion_to_prompt: Any = Field(
description="The function to convert a completion to a prompt.", exclude=True
)
generate_kwargs: dict[str, Any] = Field(
default_factory=dict, description="Kwargs used for generation."
)
model_kwargs: dict[str, Any] = Field(
default_factory=dict, description="Kwargs used for model initialization."
)
verbose: bool = Field(description="Whether to print verbose output.")
_boto_client: Any = boto3.client(
"sagemaker-runtime",
) # TODO make it an optional field
def __init__(
self,
endpoint_name: str | None = "",
temperature: float = 0.1,
max_new_tokens: int = 512, # to review defaults
context_window: int = 2048, # to review defaults
messages_to_prompt: Any = None,
completion_to_prompt: Any = None,
callback_manager: CallbackManager | None = None,
generate_kwargs: dict[str, Any] | None = None,
model_kwargs: dict[str, Any] | None = None,
verbose: bool = True,
) -> None:
"""SagemakerLLM initializer."""
model_kwargs = model_kwargs or {}
model_kwargs.update({"n_ctx": context_window, "verbose": verbose})
messages_to_prompt = messages_to_prompt or {}
completion_to_prompt = completion_to_prompt or {}
generate_kwargs = generate_kwargs or {}
generate_kwargs.update(
{"temperature": temperature, "max_tokens": max_new_tokens}
)
super().__init__(
endpoint_name=endpoint_name,
temperature=temperature,
context_window=context_window,
max_new_tokens=max_new_tokens,
messages_to_prompt=messages_to_prompt,
completion_to_prompt=completion_to_prompt,
callback_manager=callback_manager,
generate_kwargs=generate_kwargs,
model_kwargs=model_kwargs,
verbose=verbose,
)
@property
def inference_params(self):
# TODO expose the rest of params
return {
"do_sample": True,
"top_p": 0.7,
"temperature": self.temperature,
"top_k": 50,
"max_new_tokens": self.max_new_tokens,
}
@property
def metadata(self) -> LLMMetadata:
"""Get LLM metadata."""
return LLMMetadata(
context_window=self.context_window,
num_output=self.max_new_tokens,
model_name="Sagemaker LLama 2",
)
@llm_completion_callback()
def complete(self, prompt: str, **kwargs: Any) -> CompletionResponse:
self.generate_kwargs.update({"stream": False})
is_formatted = kwargs.pop("formatted", False)
if not is_formatted:
prompt = self.completion_to_prompt(prompt)
request_params = {
"inputs": prompt,
"stream": False,
"parameters": self.inference_params,
}
resp = self._boto_client.invoke_endpoint(
EndpointName=self.endpoint_name,
Body=json.dumps(request_params),
ContentType="application/json",
)
response_body = resp["Body"]
response_str = response_body.read().decode("utf-8")
response_dict = json.loads(response_str)
return CompletionResponse(
text=response_dict[0]["generated_text"][len(prompt) :], raw=resp
)
@llm_completion_callback()
def stream_complete(self, prompt: str, **kwargs: Any) -> CompletionResponseGen:
def get_stream():
text = ""
request_params = {
"inputs": prompt,
"stream": True,
"parameters": self.inference_params,
}
resp = self._boto_client.invoke_endpoint_with_response_stream(
EndpointName=self.endpoint_name,
Body=json.dumps(request_params),
ContentType="application/json",
)
event_stream = resp["Body"]
start_json = b"{"
stop_token = "<|endoftext|>"
first_token = True
for line in LineIterator(event_stream):
if line != b"" and start_json in line:
data = json.loads(line[line.find(start_json) :].decode("utf-8"))
special = data["token"]["special"]
stop = data["token"]["text"] == stop_token
if not special and not stop:
delta = data["token"]["text"]
# trim the leading space for the first token if present
if first_token:
delta = delta.lstrip()
first_token = False
text += delta
yield CompletionResponse(delta=delta, text=text, raw=data)
return get_stream()
@llm_chat_callback()
def chat(self, messages: Sequence[ChatMessage], **kwargs: Any) -> ChatResponse:
prompt = self.messages_to_prompt(messages)
completion_response = self.complete(prompt, formatted=True, **kwargs)
return completion_response_to_chat_response(completion_response)
@llm_chat_callback()
def stream_chat(
self, messages: Sequence[ChatMessage], **kwargs: Any
) -> ChatResponseGen:
prompt = self.messages_to_prompt(messages)
completion_response = self.stream_complete(prompt, formatted=True, **kwargs)
return stream_completion_response_to_chat_response(completion_response)
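To see how `LineIterator` re-assembles lines split across `PayloadPart` events, it can be fed a hand-built list of fake events (a sketch for illustration only; a real SageMaker response stream is an iterable of the same shape):

```python
# Fake event stream: the second line is deliberately split across two PayloadParts.
fake_event_stream = [
    {"PayloadPart": {"Bytes": b'data:{"token": {"text": " Hello"}}\n'}},
    {"PayloadPart": {"Bytes": b'data:{"token": {"te'}},
    {"PayloadPart": {"Bytes": b'xt": " world"}}\n'}},
]

for line in LineIterator(fake_event_stream):
    print(line)
# b'data:{"token": {"text": " Hello"}}'
# b'data:{"token": {"text": " world"}}'
```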

View File

@@ -0,0 +1,226 @@
import logging
from collections.abc import Callable
from typing import Any
from injector import inject, singleton
from llama_index.core.llms import LLM, MockLLM
from llama_index.core.settings import Settings as LlamaIndexSettings
from llama_index.core.utils import set_global_tokenizer
from transformers import AutoTokenizer # type: ignore
from private_gpt.components.llm.prompt_helper import get_prompt_style
from private_gpt.paths import models_cache_path, models_path
from private_gpt.settings.settings import Settings
logger = logging.getLogger(__name__)
@singleton
class LLMComponent:
llm: LLM
@inject
def __init__(self, settings: Settings) -> None:
llm_mode = settings.llm.mode
if settings.llm.tokenizer and settings.llm.mode != "mock":
# Try to download the tokenizer. If it fails, the LLM will still work
# using the default one, which is less accurate.
try:
set_global_tokenizer(
AutoTokenizer.from_pretrained(
pretrained_model_name_or_path=settings.llm.tokenizer,
cache_dir=str(models_cache_path),
token=settings.huggingface.access_token,
)
)
except Exception as e:
logger.warning(
f"Failed to download tokenizer {settings.llm.tokenizer}: {e!s}"
f"Please follow the instructions in the documentation to download it if needed: "
f"https://docs.privategpt.dev/installation/getting-started/troubleshooting#tokenizer-setup."
f"Falling back to default tokenizer."
)
logger.info("Initializing the LLM in mode=%s", llm_mode)
match settings.llm.mode:
case "llamacpp":
try:
from llama_index.llms.llama_cpp import LlamaCPP # type: ignore
except ImportError as e:
raise ImportError(
"Local dependencies not found, install with `poetry install --extras llms-llama-cpp`"
) from e
prompt_style = get_prompt_style(settings.llm.prompt_style)
settings_kwargs = {
"tfs_z": settings.llamacpp.tfs_z, # ollama and llama-cpp
"top_k": settings.llamacpp.top_k, # ollama and llama-cpp
"top_p": settings.llamacpp.top_p, # ollama and llama-cpp
"repeat_penalty": settings.llamacpp.repeat_penalty, # ollama llama-cpp
"n_gpu_layers": -1,
"offload_kqv": True,
}
self.llm = LlamaCPP(
model_path=str(models_path / settings.llamacpp.llm_hf_model_file),
temperature=settings.llm.temperature,
max_new_tokens=settings.llm.max_new_tokens,
context_window=settings.llm.context_window,
generate_kwargs={},
callback_manager=LlamaIndexSettings.callback_manager,
# All to GPU
model_kwargs=settings_kwargs,
# transform inputs into Llama2 format
messages_to_prompt=prompt_style.messages_to_prompt,
completion_to_prompt=prompt_style.completion_to_prompt,
verbose=True,
)
case "sagemaker":
try:
from private_gpt.components.llm.custom.sagemaker import SagemakerLLM
except ImportError as e:
raise ImportError(
"Sagemaker dependencies not found, install with `poetry install --extras llms-sagemaker`"
) from e
self.llm = SagemakerLLM(
endpoint_name=settings.sagemaker.llm_endpoint_name,
max_new_tokens=settings.llm.max_new_tokens,
context_window=settings.llm.context_window,
)
case "openai":
try:
from llama_index.llms.openai import OpenAI # type: ignore
except ImportError as e:
raise ImportError(
"OpenAI dependencies not found, install with `poetry install --extras llms-openai`"
) from e
openai_settings = settings.openai
self.llm = OpenAI(
api_base=openai_settings.api_base,
api_key=openai_settings.api_key,
model=openai_settings.model,
)
case "openailike":
try:
from llama_index.llms.openai_like import OpenAILike # type: ignore
except ImportError as e:
raise ImportError(
"OpenAILike dependencies not found, install with `poetry install --extras llms-openai-like`"
) from e
prompt_style = get_prompt_style(settings.llm.prompt_style)
openai_settings = settings.openai
self.llm = OpenAILike(
api_base=openai_settings.api_base,
api_key=openai_settings.api_key,
model=openai_settings.model,
is_chat_model=True,
max_tokens=settings.llm.max_new_tokens,
api_version="",
temperature=settings.llm.temperature,
context_window=settings.llm.context_window,
max_new_tokens=settings.llm.max_new_tokens,
messages_to_prompt=prompt_style.messages_to_prompt,
completion_to_prompt=prompt_style.completion_to_prompt,
tokenizer=settings.llm.tokenizer,
timeout=openai_settings.request_timeout,
reuse_client=False,
)
case "ollama":
try:
from llama_index.llms.ollama import Ollama # type: ignore
except ImportError as e:
raise ImportError(
"Ollama dependencies not found, install with `poetry install --extras llms-ollama`"
) from e
ollama_settings = settings.ollama
settings_kwargs = {
"tfs_z": ollama_settings.tfs_z, # ollama and llama-cpp
"num_predict": ollama_settings.num_predict, # ollama only
"top_k": ollama_settings.top_k, # ollama and llama-cpp
"top_p": ollama_settings.top_p, # ollama and llama-cpp
"repeat_last_n": ollama_settings.repeat_last_n, # ollama
"repeat_penalty": ollama_settings.repeat_penalty, # ollama llama-cpp
}
# Calculate the LLM model name. If no tag is provided, default to ":latest".
model_name = (
ollama_settings.llm_model + ":latest"
if ":" not in ollama_settings.llm_model
else ollama_settings.llm_model
)
llm = Ollama(
model=model_name,
base_url=ollama_settings.api_base,
temperature=settings.llm.temperature,
context_window=settings.llm.context_window,
additional_kwargs=settings_kwargs,
request_timeout=ollama_settings.request_timeout,
)
if ollama_settings.autopull_models:
from private_gpt.utils.ollama import check_connection, pull_model
if not check_connection(llm.client):
raise ValueError(
f"Failed to connect to Ollama, "
f"check if Ollama server is running on {ollama_settings.api_base}"
)
pull_model(llm.client, model_name)
if (
ollama_settings.keep_alive
!= ollama_settings.model_fields["keep_alive"].default
):
# Modify Ollama methods to use the "keep_alive" field.
def add_keep_alive(func: Callable[..., Any]) -> Callable[..., Any]:
def wrapper(*args: Any, **kwargs: Any) -> Any:
kwargs["keep_alive"] = ollama_settings.keep_alive
return func(*args, **kwargs)
return wrapper
Ollama.chat = add_keep_alive(Ollama.chat)
Ollama.stream_chat = add_keep_alive(Ollama.stream_chat)
Ollama.complete = add_keep_alive(Ollama.complete)
Ollama.stream_complete = add_keep_alive(Ollama.stream_complete)
self.llm = llm
case "azopenai":
try:
from llama_index.llms.azure_openai import ( # type: ignore
AzureOpenAI,
)
except ImportError as e:
raise ImportError(
"Azure OpenAI dependencies not found, install with `poetry install --extras llms-azopenai`"
) from e
azopenai_settings = settings.azopenai
self.llm = AzureOpenAI(
model=azopenai_settings.llm_model,
deployment_name=azopenai_settings.llm_deployment_name,
api_key=azopenai_settings.api_key,
azure_endpoint=azopenai_settings.azure_endpoint,
api_version=azopenai_settings.api_version,
)
case "gemini":
try:
from llama_index.llms.gemini import ( # type: ignore
Gemini,
)
except ImportError as e:
raise ImportError(
"Google Gemini dependencies not found, install with `poetry install --extras llms-gemini`"
) from e
gemini_settings = settings.gemini
self.llm = Gemini(
model_name=gemini_settings.model, api_key=gemini_settings.api_key
)
case "mock":
self.llm = MockLLM()
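The `keep_alive` handling above monkey-patches the `Ollama` client methods with a closure that injects an extra keyword argument. The pattern itself is independent of Ollama; a minimal sketch with a stand-in function:

```python
from collections.abc import Callable
from typing import Any


def add_keep_alive(func: Callable[..., Any], keep_alive: str) -> Callable[..., Any]:
    """Return a wrapper that always forwards keep_alive to the wrapped callable."""

    def wrapper(*args: Any, **kwargs: Any) -> Any:
        kwargs["keep_alive"] = keep_alive
        return func(*args, **kwargs)

    return wrapper


def fake_chat(prompt: str, **kwargs: Any) -> str:  # stand-in for Ollama.chat
    return f"prompt={prompt!r} kwargs={kwargs}"


patched_chat = add_keep_alive(fake_chat, keep_alive="5m")
print(patched_chat("hello"))  # prompt='hello' kwargs={'keep_alive': '5m'}
```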

View File

@@ -0,0 +1,308 @@
import abc
import logging
from collections.abc import Sequence
from typing import Any, Literal
from llama_index.core.llms import ChatMessage, MessageRole
logger = logging.getLogger(__name__)
class AbstractPromptStyle(abc.ABC):
"""Abstract class for prompt styles.
This class is used to format a series of messages into a prompt that can be
understood by the models. A series of messages represents the interaction(s)
between a user and an assistant. This series of messages can be considered as a
session between a user X and an assistant Y. This session holds, through the
messages, the state of the conversation. This session, to be understood by the
model, needs to be formatted into a prompt (i.e. a string that the models
can understand). Prompts can be formatted in different ways,
depending on the model.
The implementations of this class represent the different ways to format a
series of messages into a prompt.
"""
def __init__(self, *args: Any, **kwargs: Any) -> None:
logger.debug("Initializing prompt_style=%s", self.__class__.__name__)
@abc.abstractmethod
def _messages_to_prompt(self, messages: Sequence[ChatMessage]) -> str:
pass
@abc.abstractmethod
def _completion_to_prompt(self, completion: str) -> str:
pass
def messages_to_prompt(self, messages: Sequence[ChatMessage]) -> str:
prompt = self._messages_to_prompt(messages)
logger.debug("Got for messages='%s' the prompt='%s'", messages, prompt)
return prompt
def completion_to_prompt(self, completion: str) -> str:
prompt = self._completion_to_prompt(completion)
logger.debug("Got for completion='%s' the prompt='%s'", completion, prompt)
return prompt
class DefaultPromptStyle(AbstractPromptStyle):
"""Default prompt style that uses the defaults from llama_utils.
It basically passes None to the LLM, indicating it should use
the default functions.
"""
def __init__(self, *args: Any, **kwargs: Any) -> None:
super().__init__(*args, **kwargs)
# Hacky way to override the functions
# Override the functions to be None, and pass None to the LLM.
self.messages_to_prompt = None # type: ignore[method-assign, assignment]
self.completion_to_prompt = None # type: ignore[method-assign, assignment]
def _messages_to_prompt(self, messages: Sequence[ChatMessage]) -> str:
return ""
def _completion_to_prompt(self, completion: str) -> str:
return ""
class Llama2PromptStyle(AbstractPromptStyle):
"""Simple prompt style that uses llama 2 prompt style.
Inspired by llama_index/legacy/llms/llama_utils.py
It transforms the sequence of messages into a prompt that should look like:
```text
<s> [INST] <<SYS>> your system prompt here. <</SYS>>
user message here [/INST] assistant (model) response here </s>
```
"""
BOS, EOS = "<s>", "</s>"
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"
DEFAULT_SYSTEM_PROMPT = """\
You are a helpful, respectful and honest assistant. \
Always answer as helpfully as possible and follow ALL given instructions. \
Do not speculate or make up information. \
Do not reference any given instructions or context. \
"""
def _messages_to_prompt(self, messages: Sequence[ChatMessage]) -> str:
string_messages: list[str] = []
if messages[0].role == MessageRole.SYSTEM:
# pull out the system message (if it exists in messages)
system_message_str = messages[0].content or ""
messages = messages[1:]
else:
system_message_str = self.DEFAULT_SYSTEM_PROMPT
system_message_str = f"{self.B_SYS} {system_message_str.strip()} {self.E_SYS}"
for i in range(0, len(messages), 2):
# first message should always be a user
user_message = messages[i]
assert user_message.role == MessageRole.USER
if i == 0:
# make sure system prompt is included at the start
str_message = f"{self.BOS} {self.B_INST} {system_message_str} "
else:
# end previous user-assistant interaction
string_messages[-1] += f" {self.EOS}"
# no need to include system prompt
str_message = f"{self.BOS} {self.B_INST} "
# include user message content
str_message += f"{user_message.content} {self.E_INST}"
if len(messages) > (i + 1):
# if assistant message exists, add to str_message
assistant_message = messages[i + 1]
assert assistant_message.role == MessageRole.ASSISTANT
str_message += f" {assistant_message.content}"
string_messages.append(str_message)
return "".join(string_messages)
def _completion_to_prompt(self, completion: str) -> str:
system_prompt_str = self.DEFAULT_SYSTEM_PROMPT
return (
f"{self.BOS} {self.B_INST} {self.B_SYS} {system_prompt_str.strip()} {self.E_SYS} "
f"{completion.strip()} {self.E_INST}"
)
class Llama3PromptStyle(AbstractPromptStyle):
r"""Template for Meta's Llama 3.1.
The format follows this structure:
<|begin_of_text|>
<|start_header_id|>system<|end_header_id|>
[System message content]<|eot_id|>
<|start_header_id|>user<|end_header_id|>
[User message content]<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>
[Assistant message content]<|eot_id|>
...
(Repeat for each message, including possible 'ipython' role)
"""
BOS, EOS = "<|begin_of_text|>", "<|end_of_text|>"
B_INST, E_INST = "<|start_header_id|>", "<|end_header_id|>"
EOT = "<|eot_id|>"
B_SYS, E_SYS = "<|start_header_id|>system<|end_header_id|>", "<|eot_id|>"
ASSISTANT_INST = "<|start_header_id|>assistant<|end_header_id|>"
DEFAULT_SYSTEM_PROMPT = """\
You are a helpful, respectful and honest assistant. \
Always answer as helpfully as possible and follow ALL given instructions. \
Do not speculate or make up information. \
Do not reference any given instructions or context. \
"""
def _messages_to_prompt(self, messages: Sequence[ChatMessage]) -> str:
prompt = ""
has_system_message = False
for i, message in enumerate(messages):
if not message or message.content is None:
continue
if message.role == MessageRole.SYSTEM:
prompt += f"{self.B_SYS}\n\n{message.content.strip()}{self.E_SYS}"
has_system_message = True
else:
role_header = f"{self.B_INST}{message.role.value}{self.E_INST}"
prompt += f"{role_header}\n\n{message.content.strip()}{self.EOT}"
# Add assistant header if the last message is not from the assistant
if i == len(messages) - 1 and message.role != MessageRole.ASSISTANT:
prompt += f"{self.ASSISTANT_INST}\n\n"
# Add default system prompt if no system message was provided
if not has_system_message:
prompt = (
f"{self.B_SYS}\n\n{self.DEFAULT_SYSTEM_PROMPT}{self.E_SYS}" + prompt
)
# TODO: Implement tool handling logic
return prompt
def _completion_to_prompt(self, completion: str) -> str:
return (
f"{self.B_SYS}\n\n{self.DEFAULT_SYSTEM_PROMPT}{self.E_SYS}"
f"{self.B_INST}user{self.E_INST}\n\n{completion.strip()}{self.EOT}"
f"{self.ASSISTANT_INST}\n\n"
)
class TagPromptStyle(AbstractPromptStyle):
"""Tag prompt style (used by Vigogne) that uses the prompt style `<|ROLE|>`.
It transforms the sequence of messages into a prompt that should look like:
```text
<|system|>: your system prompt here.
<|user|>: user message here
(possibly with context and question)
<|assistant|>: assistant (model) response here.
```
FIXME: should we add surrounding `<s>` and `</s>` tags, like in llama2?
"""
def _messages_to_prompt(self, messages: Sequence[ChatMessage]) -> str:
"""Format message to prompt with `<|ROLE|>: MSG` style."""
prompt = ""
for message in messages:
role = message.role
content = message.content or ""
message_from_user = f"<|{role.lower()}|>: {content.strip()}"
message_from_user += "\n"
prompt += message_from_user
# append the trailing <|assistant|> tag that triggers the model's completion
prompt += "<|assistant|>: "
return prompt
def _completion_to_prompt(self, completion: str) -> str:
return self._messages_to_prompt(
[ChatMessage(content=completion, role=MessageRole.USER)]
)
class MistralPromptStyle(AbstractPromptStyle):
def _messages_to_prompt(self, messages: Sequence[ChatMessage]) -> str:
inst_buffer = []
text = ""
for message in messages:
if message.role == MessageRole.SYSTEM or message.role == MessageRole.USER:
inst_buffer.append(str(message.content).strip())
elif message.role == MessageRole.ASSISTANT:
text += "<s>[INST] " + "\n".join(inst_buffer) + " [/INST]"
text += " " + str(message.content).strip() + "</s>"
inst_buffer.clear()
else:
raise ValueError(f"Unknown message role {message.role}")
if len(inst_buffer) > 0:
text += "<s>[INST] " + "\n".join(inst_buffer) + " [/INST]"
return text
def _completion_to_prompt(self, completion: str) -> str:
return self._messages_to_prompt(
[ChatMessage(content=completion, role=MessageRole.USER)]
)
class ChatMLPromptStyle(AbstractPromptStyle):
def _messages_to_prompt(self, messages: Sequence[ChatMessage]) -> str:
prompt = "<|im_start|>system\n"
for message in messages:
role = message.role
content = message.content or ""
if role.lower() == "system":
message_from_user = f"{content.strip()}"
prompt += message_from_user
elif role.lower() == "user":
prompt += "<|im_end|>\n<|im_start|>user\n"
message_from_user = f"{content.strip()}<|im_end|>\n"
prompt += message_from_user
prompt += "<|im_start|>assistant\n"
return prompt
def _completion_to_prompt(self, completion: str) -> str:
return self._messages_to_prompt(
[ChatMessage(content=completion, role=MessageRole.USER)]
)
def get_prompt_style(
prompt_style: Literal["default", "llama2", "llama3", "tag", "mistral", "chatml"]
| None
) -> AbstractPromptStyle:
"""Get the prompt style to use from the given string.
:param prompt_style: The prompt style to use.
:return: The prompt style to use.
"""
if prompt_style is None or prompt_style == "default":
return DefaultPromptStyle()
elif prompt_style == "llama2":
return Llama2PromptStyle()
elif prompt_style == "llama3":
return Llama3PromptStyle()
elif prompt_style == "tag":
return TagPromptStyle()
elif prompt_style == "mistral":
return MistralPromptStyle()
elif prompt_style == "chatml":
return ChatMLPromptStyle()
raise ValueError(f"Unknown prompt_style='{prompt_style}'")

View File

@@ -0,0 +1,67 @@
import logging
from injector import inject, singleton
from llama_index.core.storage.docstore import BaseDocumentStore, SimpleDocumentStore
from llama_index.core.storage.index_store import SimpleIndexStore
from llama_index.core.storage.index_store.types import BaseIndexStore
from private_gpt.paths import local_data_path
from private_gpt.settings.settings import Settings
logger = logging.getLogger(__name__)
@singleton
class NodeStoreComponent:
index_store: BaseIndexStore
doc_store: BaseDocumentStore
@inject
def __init__(self, settings: Settings) -> None:
match settings.nodestore.database:
case "simple":
try:
self.index_store = SimpleIndexStore.from_persist_dir(
persist_dir=str(local_data_path)
)
except FileNotFoundError:
logger.debug("Local index store not found, creating a new one")
self.index_store = SimpleIndexStore()
try:
self.doc_store = SimpleDocumentStore.from_persist_dir(
persist_dir=str(local_data_path)
)
except FileNotFoundError:
logger.debug("Local document store not found, creating a new one")
self.doc_store = SimpleDocumentStore()
case "postgres":
try:
from llama_index.core.storage.docstore.postgres_docstore import (
PostgresDocumentStore,
)
from llama_index.core.storage.index_store.postgres_index_store import (
PostgresIndexStore,
)
except ImportError:
raise ImportError(
"Postgres dependencies not found, install with `poetry install --extras storage-nodestore-postgres`"
) from None
if settings.postgres is None:
raise ValueError("Postgres index/doc store settings not found.")
self.index_store = PostgresIndexStore.from_params(
**settings.postgres.model_dump(exclude_none=True)
)
self.doc_store = PostgresDocumentStore.from_params(
**settings.postgres.model_dump(exclude_none=True)
)
case _:
# Should be unreachable
# The settings validator should have caught this
raise ValueError(
f"Database {settings.nodestore.database} not supported"
)

View File

@@ -0,0 +1,103 @@
from collections.abc import Generator
from typing import Any
from llama_index.core.schema import BaseNode, MetadataMode
from llama_index.core.vector_stores.utils import node_to_metadata_dict
from llama_index.vector_stores.chroma import ChromaVectorStore # type: ignore
def chunk_list(
lst: list[BaseNode], max_chunk_size: int
) -> Generator[list[BaseNode], None, None]:
"""Yield successive max_chunk_size-sized chunks from lst.
Args:
lst (List[BaseNode]): list of nodes with embeddings
max_chunk_size (int): max chunk size
Yields:
Generator[List[BaseNode], None, None]: list of nodes with embeddings
"""
for i in range(0, len(lst), max_chunk_size):
yield lst[i : i + max_chunk_size]
class BatchedChromaVectorStore(ChromaVectorStore): # type: ignore
"""Chroma vector store, batching additions to avoid reaching the max batch limit.
In this vector store, embeddings are stored within a ChromaDB collection.
During query time, the index uses ChromaDB to query for the top
k most similar nodes.
Args:
chroma_client (from chromadb.api.API):
API instance
chroma_collection (chromadb.api.models.Collection.Collection):
ChromaDB collection instance
"""
chroma_client: Any | None
def __init__(
self,
chroma_client: Any,
chroma_collection: Any,
host: str | None = None,
port: str | None = None,
ssl: bool = False,
headers: dict[str, str] | None = None,
collection_kwargs: dict[Any, Any] | None = None,
) -> None:
super().__init__(
chroma_collection=chroma_collection,
host=host,
port=port,
ssl=ssl,
headers=headers,
collection_kwargs=collection_kwargs or {},
)
self.chroma_client = chroma_client
def add(self, nodes: list[BaseNode], **add_kwargs: Any) -> list[str]:
"""Add nodes to index, batching the insertion to avoid issues.
Args:
nodes: List[BaseNode]: list of nodes with embeddings
add_kwargs: _
"""
if not self.chroma_client:
raise ValueError("Client not initialized")
if not self._collection:
raise ValueError("Collection not initialized")
max_chunk_size = self.chroma_client.max_batch_size
node_chunks = chunk_list(nodes, max_chunk_size)
all_ids = []
for node_chunk in node_chunks:
embeddings = []
metadatas = []
ids = []
documents = []
for node in node_chunk:
embeddings.append(node.get_embedding())
metadatas.append(
node_to_metadata_dict(
node, remove_text=True, flat_metadata=self.flat_metadata
)
)
ids.append(node.node_id)
documents.append(node.get_content(metadata_mode=MetadataMode.NONE))
self._collection.add(
embeddings=embeddings,
ids=ids,
metadatas=metadatas,
documents=documents,
)
all_ids.extend(ids)
return all_ids
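The batching itself is handled by `chunk_list`, which is easy to verify in isolation (integers used as stand-ins for `BaseNode` objects, for illustration only):

```python
fake_nodes = list(range(5))  # stand-ins for BaseNode objects with embeddings
print(list(chunk_list(fake_nodes, max_chunk_size=2)))
# [[0, 1], [2, 3], [4]]
```

`BatchedChromaVectorStore.add` then inserts each chunk with a separate `collection.add` call, so a single large ingestion never exceeds `chroma_client.max_batch_size`.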

View File

@@ -0,0 +1,217 @@
import logging
import typing
from injector import inject, singleton
from llama_index.core.indices.vector_store import VectorIndexRetriever, VectorStoreIndex
from llama_index.core.vector_stores.types import (
BasePydanticVectorStore,
FilterCondition,
MetadataFilter,
MetadataFilters,
)
from private_gpt.open_ai.extensions.context_filter import ContextFilter
from private_gpt.paths import local_data_path
from private_gpt.settings.settings import Settings
logger = logging.getLogger(__name__)
def _doc_id_metadata_filter(
context_filter: ContextFilter | None,
) -> MetadataFilters:
filters = MetadataFilters(filters=[], condition=FilterCondition.OR)
if context_filter is not None and context_filter.docs_ids is not None:
for doc_id in context_filter.docs_ids:
filters.filters.append(MetadataFilter(key="doc_id", value=doc_id))
return filters
@singleton
class VectorStoreComponent:
settings: Settings
vector_store: BasePydanticVectorStore
@inject
def __init__(self, settings: Settings) -> None:
self.settings = settings
match settings.vectorstore.database:
case "postgres":
try:
from llama_index.vector_stores.postgres import ( # type: ignore
PGVectorStore,
)
except ImportError as e:
raise ImportError(
"Postgres dependencies not found, install with `poetry install --extras vector-stores-postgres`"
) from e
if settings.postgres is None:
raise ValueError(
"Postgres settings not found. Please provide settings."
)
self.vector_store = typing.cast(
BasePydanticVectorStore,
PGVectorStore.from_params(
**settings.postgres.model_dump(exclude_none=True),
table_name="embeddings",
embed_dim=settings.embedding.embed_dim,
),
)
case "chroma":
try:
import chromadb # type: ignore
from chromadb.config import ( # type: ignore
Settings as ChromaSettings,
)
from private_gpt.components.vector_store.batched_chroma import (
BatchedChromaVectorStore,
)
except ImportError as e:
raise ImportError(
"ChromaDB dependencies not found, install with `poetry install --extras vector-stores-chroma`"
) from e
chroma_settings = ChromaSettings(anonymized_telemetry=False)
chroma_client = chromadb.PersistentClient(
path=str((local_data_path / "chroma_db").absolute()),
settings=chroma_settings,
)
chroma_collection = chroma_client.get_or_create_collection(
"make_this_parameterizable_per_api_call"
) # TODO
self.vector_store = typing.cast(
BasePydanticVectorStore,
BatchedChromaVectorStore(
chroma_client=chroma_client, chroma_collection=chroma_collection
),
)
case "qdrant":
try:
from llama_index.vector_stores.qdrant import ( # type: ignore
QdrantVectorStore,
)
from qdrant_client import QdrantClient # type: ignore
except ImportError as e:
raise ImportError(
"Qdrant dependencies not found, install with `poetry install --extras vector-stores-qdrant`"
) from e
if settings.qdrant is None:
logger.info(
"Qdrant config not found. Using default settings."
"Trying to connect to Qdrant at localhost:6333."
)
client = QdrantClient()
else:
client = QdrantClient(
**settings.qdrant.model_dump(exclude_none=True)
)
self.vector_store = typing.cast(
BasePydanticVectorStore,
QdrantVectorStore(
client=client,
collection_name="make_this_parameterizable_per_api_call",
), # TODO
)
case "milvus":
try:
from llama_index.vector_stores.milvus import ( # type: ignore
MilvusVectorStore,
)
except ImportError as e:
raise ImportError(
"Milvus dependencies not found, install with `poetry install --extras vector-stores-milvus`"
) from e
if settings.milvus is None:
logger.info(
"Milvus config not found. Using default settings.\n"
"Trying to connect to Milvus at local_data/private_gpt/milvus/milvus_local.db "
"with collection 'make_this_parameterizable_per_api_call'."
)
self.vector_store = typing.cast(
BasePydanticVectorStore,
MilvusVectorStore(
dim=settings.embedding.embed_dim,
collection_name="make_this_parameterizable_per_api_call",
overwrite=True,
),
)
else:
self.vector_store = typing.cast(
BasePydanticVectorStore,
MilvusVectorStore(
dim=settings.embedding.embed_dim,
uri=settings.milvus.uri,
token=settings.milvus.token,
collection_name=settings.milvus.collection_name,
overwrite=settings.milvus.overwrite,
),
)
case "clickhouse":
try:
from clickhouse_connect import ( # type: ignore
get_client,
)
from llama_index.vector_stores.clickhouse import ( # type: ignore
ClickHouseVectorStore,
)
except ImportError as e:
raise ImportError(
"ClickHouse dependencies not found, install with `poetry install --extras vector-stores-clickhouse`"
) from e
if settings.clickhouse is None:
raise ValueError(
"ClickHouse settings not found. Please provide settings."
)
clickhouse_client = get_client(
host=settings.clickhouse.host,
port=settings.clickhouse.port,
username=settings.clickhouse.username,
password=settings.clickhouse.password,
)
self.vector_store = ClickHouseVectorStore(
clickhouse_client=clickhouse_client
)
case _:
# Should be unreachable
# The settings validator should have caught this
raise ValueError(
f"Vectorstore database {settings.vectorstore.database} not supported"
)
def get_retriever(
self,
index: VectorStoreIndex,
context_filter: ContextFilter | None = None,
similarity_top_k: int = 2,
) -> VectorIndexRetriever:
# This way we support qdrant (using doc_ids) and the rest (using filters)
return VectorIndexRetriever(
index=index,
similarity_top_k=similarity_top_k,
doc_ids=context_filter.docs_ids if context_filter else None,
filters=(
_doc_id_metadata_filter(context_filter)
if self.settings.vectorstore.database != "qdrant"
else None
),
)
def close(self) -> None:
if hasattr(self.vector_store.client, "close"):
self.vector_store.client.close()
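`_doc_id_metadata_filter` simply turns a `ContextFilter` into an OR-combined set of `doc_id` filters; a small sketch of what it builds, using the example document ID from the API schemas:

```python
from private_gpt.open_ai.extensions.context_filter import ContextFilter

context_filter = ContextFilter(docs_ids=["c202d5e6-7b69-4869-81cc-dd574ee8ee11"])
filters = _doc_id_metadata_filter(context_filter)
for f in filters.filters:
    print(f.key, f.value)  # doc_id c202d5e6-7b69-4869-81cc-dd574ee8ee11
print(filters.condition)  # FilterCondition.OR
```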

private_gpt/constants.py
View File

@@ -0,0 +1,3 @@
from pathlib import Path
PROJECT_ROOT_PATH: Path = Path(__file__).parents[1]

private_gpt/di.py
View File

@@ -0,0 +1,19 @@
from injector import Injector
from private_gpt.settings.settings import Settings, unsafe_typed_settings
def create_application_injector() -> Injector:
_injector = Injector(auto_bind=True)
_injector.binder.bind(Settings, to=unsafe_typed_settings)
return _injector
"""
Global injector for the application.
Avoid using this reference, it will make your code harder to test.
Instead, use the `request.state.injector` reference, which is bound to every request.
"""
global_injector: Injector = create_application_injector()

private_gpt/launcher.py
View File

@@ -0,0 +1,69 @@
"""FastAPI app creation, logger configuration and main API routes."""
import logging
from fastapi import Depends, FastAPI, Request
from fastapi.middleware.cors import CORSMiddleware
from injector import Injector
from llama_index.core.callbacks import CallbackManager
from llama_index.core.callbacks.global_handlers import create_global_handler
from llama_index.core.settings import Settings as LlamaIndexSettings
from private_gpt.server.chat.chat_router import chat_router
from private_gpt.server.chunks.chunks_router import chunks_router
from private_gpt.server.completions.completions_router import completions_router
from private_gpt.server.embeddings.embeddings_router import embeddings_router
from private_gpt.server.health.health_router import health_router
from private_gpt.server.ingest.ingest_router import ingest_router
from private_gpt.server.recipes.summarize.summarize_router import summarize_router
from private_gpt.settings.settings import Settings
logger = logging.getLogger(__name__)
def create_app(root_injector: Injector) -> FastAPI:
# Start the API
async def bind_injector_to_request(request: Request) -> None:
request.state.injector = root_injector
app = FastAPI(dependencies=[Depends(bind_injector_to_request)])
app.include_router(completions_router)
app.include_router(chat_router)
app.include_router(chunks_router)
app.include_router(ingest_router)
app.include_router(summarize_router)
app.include_router(embeddings_router)
app.include_router(health_router)
# Add LlamaIndex simple observability
global_handler = create_global_handler("simple")
if global_handler:
LlamaIndexSettings.callback_manager = CallbackManager([global_handler])
settings = root_injector.get(Settings)
if settings.server.cors.enabled:
logger.debug("Setting up CORS middleware")
app.add_middleware(
CORSMiddleware,
allow_credentials=settings.server.cors.allow_credentials,
allow_origins=settings.server.cors.allow_origins,
allow_origin_regex=settings.server.cors.allow_origin_regex,
allow_methods=settings.server.cors.allow_methods,
allow_headers=settings.server.cors.allow_headers,
)
if settings.ui.enabled:
logger.debug("Importing the UI module")
try:
from private_gpt.ui.ui import PrivateGptUi
except ImportError as e:
raise ImportError(
"UI dependencies not found, install with `poetry install --extras ui`"
) from e
ui = root_injector.get(PrivateGptUi)
ui.mount_in_app(app, settings.ui.path)
return app

private_gpt/main.py
View File

@@ -0,0 +1,6 @@
"""FastAPI app creation, logger configuration and main API routes."""
from private_gpt.di import global_injector
from private_gpt.launcher import create_app
app = create_app(global_injector)

View File

@@ -0,0 +1 @@
"""OpenAI compatibility utilities."""

View File

@@ -0,0 +1 @@
"""OpenAI API extensions."""

View File

@@ -0,0 +1,7 @@
from pydantic import BaseModel, Field
class ContextFilter(BaseModel):
docs_ids: list[str] | None = Field(
examples=[["c202d5e6-7b69-4869-81cc-dd574ee8ee11"]]
)

View File

@@ -0,0 +1,122 @@
import time
import uuid
from collections.abc import Iterator
from typing import Literal
from llama_index.core.llms import ChatResponse, CompletionResponse
from pydantic import BaseModel, Field
from private_gpt.server.chunks.chunks_service import Chunk
class OpenAIDelta(BaseModel):
"""A piece of completion that needs to be concatenated to get the full message."""
content: str | None
class OpenAIMessage(BaseModel):
"""Inference result, with the source of the message.
Role could be the assistant or system
(providing a default response, not AI generated).
"""
role: Literal["assistant", "system", "user"] = Field(default="user")
content: str | None
class OpenAIChoice(BaseModel):
"""Response from AI.
Either the delta or the message will be present, but never both.
Sources used will be returned in case context retrieval was enabled.
"""
finish_reason: str | None = Field(examples=["stop"])
delta: OpenAIDelta | None = None
message: OpenAIMessage | None = None
sources: list[Chunk] | None = None
index: int = 0
class OpenAICompletion(BaseModel):
"""Clone of OpenAI Completion model.
For more information see: https://platform.openai.com/docs/api-reference/chat/object
"""
id: str
object: Literal["completion", "completion.chunk"] = Field(default="completion")
created: int = Field(..., examples=[1623340000])
model: Literal["private-gpt"]
choices: list[OpenAIChoice]
@classmethod
def from_text(
cls,
text: str | None,
finish_reason: str | None = None,
sources: list[Chunk] | None = None,
) -> "OpenAICompletion":
return OpenAICompletion(
id=str(uuid.uuid4()),
object="completion",
created=int(time.time()),
model="private-gpt",
choices=[
OpenAIChoice(
message=OpenAIMessage(role="assistant", content=text),
finish_reason=finish_reason,
sources=sources,
)
],
)
@classmethod
def json_from_delta(
cls,
*,
text: str | None,
finish_reason: str | None = None,
sources: list[Chunk] | None = None,
) -> str:
chunk = OpenAICompletion(
id=str(uuid.uuid4()),
object="completion.chunk",
created=int(time.time()),
model="private-gpt",
choices=[
OpenAIChoice(
delta=OpenAIDelta(content=text),
finish_reason=finish_reason,
sources=sources,
)
],
)
return chunk.model_dump_json()
def to_openai_response(
response: str | ChatResponse, sources: list[Chunk] | None = None
) -> OpenAICompletion:
if isinstance(response, ChatResponse):
return OpenAICompletion.from_text(response.delta, finish_reason="stop")
else:
return OpenAICompletion.from_text(
response, finish_reason="stop", sources=sources
)
def to_openai_sse_stream(
response_generator: Iterator[str | CompletionResponse | ChatResponse],
sources: list[Chunk] | None = None,
) -> Iterator[str]:
for response in response_generator:
if isinstance(response, CompletionResponse | ChatResponse):
yield f"data: {OpenAICompletion.json_from_delta(text=response.delta)}\n\n"
else:
yield f"data: {OpenAICompletion.json_from_delta(text=response, sources=sources)}\n\n"
yield f"data: {OpenAICompletion.json_from_delta(text='', finish_reason='stop')}\n\n"
yield "data: [DONE]\n\n"

private_gpt/paths.py
View File

@@ -0,0 +1,18 @@
from pathlib import Path
from private_gpt.constants import PROJECT_ROOT_PATH
from private_gpt.settings.settings import settings
def _absolute_or_from_project_root(path: str) -> Path:
if path.startswith("/"):
return Path(path)
return PROJECT_ROOT_PATH / path
models_path: Path = PROJECT_ROOT_PATH / "models"
models_cache_path: Path = models_path / "cache"
docs_path: Path = PROJECT_ROOT_PATH / "docs"
local_data_path: Path = _absolute_or_from_project_root(
settings().data.local_data_folder
)

View File

@@ -0,0 +1 @@
"""private-gpt server."""

View File

View File

@@ -0,0 +1,115 @@
from fastapi import APIRouter, Depends, Request
from llama_index.core.llms import ChatMessage, MessageRole
from pydantic import BaseModel
from starlette.responses import StreamingResponse
from private_gpt.open_ai.extensions.context_filter import ContextFilter
from private_gpt.open_ai.openai_models import (
OpenAICompletion,
OpenAIMessage,
to_openai_response,
to_openai_sse_stream,
)
from private_gpt.server.chat.chat_service import ChatService
from private_gpt.server.utils.auth import authenticated
chat_router = APIRouter(prefix="/v1", dependencies=[Depends(authenticated)])
class ChatBody(BaseModel):
messages: list[OpenAIMessage]
use_context: bool = False
context_filter: ContextFilter | None = None
include_sources: bool = True
stream: bool = False
model_config = {
"json_schema_extra": {
"examples": [
{
"messages": [
{
"role": "system",
"content": "You are a rapper. Always answer with a rap.",
},
{
"role": "user",
"content": "How do you fry an egg?",
},
],
"stream": False,
"use_context": True,
"include_sources": True,
"context_filter": {
"docs_ids": ["c202d5e6-7b69-4869-81cc-dd574ee8ee11"]
},
}
]
}
}
@chat_router.post(
"/chat/completions",
response_model=None,
responses={200: {"model": OpenAICompletion}},
tags=["Contextual Completions"],
openapi_extra={
"x-fern-streaming": {
"stream-condition": "stream",
"response": {"$ref": "#/components/schemas/OpenAICompletion"},
"response-stream": {"$ref": "#/components/schemas/OpenAICompletion"},
}
},
)
def chat_completion(
request: Request, body: ChatBody
) -> OpenAICompletion | StreamingResponse:
"""Given a list of messages comprising a conversation, return a response.
Optionally include an initial `role: system` message to influence the way
the LLM answers.
If `use_context` is set to `true`, the model will use context coming
from the ingested documents to create the response. The documents being used can
be filtered using the `context_filter` and passing the document IDs to be used.
Ingested document IDs can be found using the `/ingest/list` endpoint. If you want
all ingested documents to be used, remove `context_filter` altogether.
When using `'include_sources': true`, the API will return the source Chunks used
to create the response, which come from the context provided.
When using `'stream': true`, the API will return data chunks following [OpenAI's
streaming model](https://platform.openai.com/docs/api-reference/chat/streaming):
```
{"id":"12345","object":"completion.chunk","created":1694268190,
"model":"private-gpt","choices":[{"index":0,"delta":{"content":"Hello"},
"finish_reason":null}]}
```
"""
service = request.state.injector.get(ChatService)
all_messages = [
ChatMessage(content=m.content, role=MessageRole(m.role)) for m in body.messages
]
if body.stream:
completion_gen = service.stream_chat(
messages=all_messages,
use_context=body.use_context,
context_filter=body.context_filter,
)
return StreamingResponse(
to_openai_sse_stream(
completion_gen.response,
completion_gen.sources if body.include_sources else None,
),
media_type="text/event-stream",
)
else:
completion = service.chat(
messages=all_messages,
use_context=body.use_context,
context_filter=body.context_filter,
)
return to_openai_response(
completion.response, completion.sources if body.include_sources else None
)
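For reference, a minimal client call against this route might look like the sketch below, assuming the API is reachable at `http://localhost:8001` (the project's usual default) and that the `requests` package is available; adjust URL and auth to your deployment.

```python
import requests  # third-party HTTP client, assumed available

response = requests.post(
    "http://localhost:8001/v1/chat/completions",
    json={
        "messages": [
            {"role": "system", "content": "You are a rapper. Always answer with a rap."},
            {"role": "user", "content": "How do you fry an egg?"},
        ],
        "use_context": False,
        "stream": False,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```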

View File

@@ -0,0 +1,210 @@
from dataclasses import dataclass
from injector import inject, singleton
from llama_index.core.chat_engine import ContextChatEngine, SimpleChatEngine
from llama_index.core.chat_engine.types import (
BaseChatEngine,
)
from llama_index.core.indices import VectorStoreIndex
from llama_index.core.indices.postprocessor import MetadataReplacementPostProcessor
from llama_index.core.llms import ChatMessage, MessageRole
from llama_index.core.postprocessor import (
SentenceTransformerRerank,
SimilarityPostprocessor,
)
from llama_index.core.storage import StorageContext
from llama_index.core.types import TokenGen
from pydantic import BaseModel
from private_gpt.components.embedding.embedding_component import EmbeddingComponent
from private_gpt.components.llm.llm_component import LLMComponent
from private_gpt.components.node_store.node_store_component import NodeStoreComponent
from private_gpt.components.vector_store.vector_store_component import (
VectorStoreComponent,
)
from private_gpt.open_ai.extensions.context_filter import ContextFilter
from private_gpt.server.chunks.chunks_service import Chunk
from private_gpt.settings.settings import Settings
class Completion(BaseModel):
response: str
sources: list[Chunk] | None = None
class CompletionGen(BaseModel):
response: TokenGen
sources: list[Chunk] | None = None
@dataclass
class ChatEngineInput:
system_message: ChatMessage | None = None
last_message: ChatMessage | None = None
chat_history: list[ChatMessage] | None = None
@classmethod
def from_messages(cls, messages: list[ChatMessage]) -> "ChatEngineInput":
# Detect if there is a system message, extract the last message and chat history
system_message = (
messages[0]
if len(messages) > 0 and messages[0].role == MessageRole.SYSTEM
else None
)
last_message = (
messages[-1]
if len(messages) > 0 and messages[-1].role == MessageRole.USER
else None
)
# Remove from messages list the system message and last message,
# if they exist. The rest is the chat history.
if system_message:
messages.pop(0)
if last_message:
messages.pop(-1)
chat_history = messages if len(messages) > 0 else None
return cls(
system_message=system_message,
last_message=last_message,
chat_history=chat_history,
)
@singleton
class ChatService:
settings: Settings
@inject
def __init__(
self,
settings: Settings,
llm_component: LLMComponent,
vector_store_component: VectorStoreComponent,
embedding_component: EmbeddingComponent,
node_store_component: NodeStoreComponent,
) -> None:
self.settings = settings
self.llm_component = llm_component
self.embedding_component = embedding_component
self.vector_store_component = vector_store_component
self.storage_context = StorageContext.from_defaults(
vector_store=vector_store_component.vector_store,
docstore=node_store_component.doc_store,
index_store=node_store_component.index_store,
)
self.index = VectorStoreIndex.from_vector_store(
vector_store_component.vector_store,
storage_context=self.storage_context,
llm=llm_component.llm,
embed_model=embedding_component.embedding_model,
show_progress=True,
)
def _chat_engine(
self,
system_prompt: str | None = None,
use_context: bool = False,
context_filter: ContextFilter | None = None,
) -> BaseChatEngine:
settings = self.settings
if use_context:
vector_index_retriever = self.vector_store_component.get_retriever(
index=self.index,
context_filter=context_filter,
similarity_top_k=self.settings.rag.similarity_top_k,
)
node_postprocessors = [
MetadataReplacementPostProcessor(target_metadata_key="window"),
SimilarityPostprocessor(
similarity_cutoff=settings.rag.similarity_value
),
]
if settings.rag.rerank.enabled:
rerank_postprocessor = SentenceTransformerRerank(
model=settings.rag.rerank.model, top_n=settings.rag.rerank.top_n
)
node_postprocessors.append(rerank_postprocessor)
return ContextChatEngine.from_defaults(
system_prompt=system_prompt,
retriever=vector_index_retriever,
llm=self.llm_component.llm, # Has no effect at the moment
node_postprocessors=node_postprocessors,
)
else:
return SimpleChatEngine.from_defaults(
system_prompt=system_prompt,
llm=self.llm_component.llm,
)
def stream_chat(
self,
messages: list[ChatMessage],
use_context: bool = False,
context_filter: ContextFilter | None = None,
) -> CompletionGen:
chat_engine_input = ChatEngineInput.from_messages(messages)
last_message = (
chat_engine_input.last_message.content
if chat_engine_input.last_message
else None
)
system_prompt = (
chat_engine_input.system_message.content
if chat_engine_input.system_message
else None
)
chat_history = (
chat_engine_input.chat_history if chat_engine_input.chat_history else None
)
chat_engine = self._chat_engine(
system_prompt=system_prompt,
use_context=use_context,
context_filter=context_filter,
)
streaming_response = chat_engine.stream_chat(
message=last_message if last_message is not None else "",
chat_history=chat_history,
)
sources = [Chunk.from_node(node) for node in streaming_response.source_nodes]
completion_gen = CompletionGen(
response=streaming_response.response_gen, sources=sources
)
return completion_gen
def chat(
self,
messages: list[ChatMessage],
use_context: bool = False,
context_filter: ContextFilter | None = None,
) -> Completion:
chat_engine_input = ChatEngineInput.from_messages(messages)
last_message = (
chat_engine_input.last_message.content
if chat_engine_input.last_message
else None
)
system_prompt = (
chat_engine_input.system_message.content
if chat_engine_input.system_message
else None
)
chat_history = (
chat_engine_input.chat_history if chat_engine_input.chat_history else None
)
chat_engine = self._chat_engine(
system_prompt=system_prompt,
use_context=use_context,
context_filter=context_filter,
)
wrapped_response = chat_engine.chat(
message=last_message if last_message is not None else "",
chat_history=chat_history,
)
sources = [Chunk.from_node(node) for node in wrapped_response.source_nodes]
completion = Completion(response=wrapped_response.response, sources=sources)
return completion
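`ChatEngineInput.from_messages` splits an OpenAI-style message list into the pieces the chat engine expects; a quick sketch of the split, assuming `llama-index-core` is installed:

```python
from llama_index.core.llms import ChatMessage, MessageRole

messages = [
    ChatMessage(role=MessageRole.SYSTEM, content="Answer briefly."),
    ChatMessage(role=MessageRole.USER, content="Hi"),
    ChatMessage(role=MessageRole.ASSISTANT, content="Hello!"),
    ChatMessage(role=MessageRole.USER, content="How do I fry an egg?"),
]

engine_input = ChatEngineInput.from_messages(messages)
print(engine_input.system_message.content)              # Answer briefly.
print(engine_input.last_message.content)                # How do I fry an egg?
print([m.content for m in engine_input.chat_history])   # ['Hi', 'Hello!']
```

Note that `from_messages` pops the system and last user messages from the list it is given, so the caller's list is mutated in place.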

View File

View File

@@ -0,0 +1,55 @@
from typing import Literal
from fastapi import APIRouter, Depends, Request
from pydantic import BaseModel, Field
from private_gpt.open_ai.extensions.context_filter import ContextFilter
from private_gpt.server.chunks.chunks_service import Chunk, ChunksService
from private_gpt.server.utils.auth import authenticated
chunks_router = APIRouter(prefix="/v1", dependencies=[Depends(authenticated)])
class ChunksBody(BaseModel):
text: str = Field(examples=["Q3 2023 sales"])
context_filter: ContextFilter | None = None
limit: int = 10
prev_next_chunks: int = Field(default=0, examples=[2])
class ChunksResponse(BaseModel):
object: Literal["list"]
model: Literal["private-gpt"]
data: list[Chunk]
@chunks_router.post("/chunks", tags=["Context Chunks"])
def chunks_retrieval(request: Request, body: ChunksBody) -> ChunksResponse:
"""Given a `text`, returns the most relevant chunks from the ingested documents.
The returned information can be used to generate prompts that can be
passed to `/completions` or `/chat/completions` APIs. Note: it is usually a very
fast API, because only the Embeddings model is involved, not the LLM. The
returned information contains the relevant chunk `text` together with the source
`document` it is coming from. It also contains a score that can be used to
compare different results.
The max number of chunks to be returned is set using the `limit` param.
Previous and next chunks (pieces of text that appear right before or after in the
document) can be fetched by using the `prev_next_chunks` field.
The documents being used can be filtered using the `context_filter` and passing
the document IDs to be used. Ingested document IDs can be found using the
`/ingest/list` endpoint. If you want all ingested documents to be used,
remove `context_filter` altogether.
"""
service = request.state.injector.get(ChunksService)
results = service.retrieve_relevant(
body.text, body.context_filter, body.limit, body.prev_next_chunks
)
return ChunksResponse(
object="list",
model="private-gpt",
data=results,
)
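As with the chat route, a minimal client call is sketched below, assuming the server is reachable at `http://localhost:8001` and that `requests` is available:

```python
import requests  # third-party HTTP client, assumed available

response = requests.post(
    "http://localhost:8001/v1/chunks",
    json={"text": "Q3 2023 sales", "limit": 4, "prev_next_chunks": 1},
    timeout=60,
)
response.raise_for_status()
for chunk in response.json()["data"]:
    print(round(chunk["score"], 3), chunk["text"][:80])
```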

View File

@@ -0,0 +1,125 @@
from typing import TYPE_CHECKING, Literal

from injector import inject, singleton
from llama_index.core.indices import VectorStoreIndex
from llama_index.core.schema import NodeWithScore
from llama_index.core.storage import StorageContext
from pydantic import BaseModel, Field

from private_gpt.components.embedding.embedding_component import EmbeddingComponent
from private_gpt.components.llm.llm_component import LLMComponent
from private_gpt.components.node_store.node_store_component import NodeStoreComponent
from private_gpt.components.vector_store.vector_store_component import (
    VectorStoreComponent,
)
from private_gpt.open_ai.extensions.context_filter import ContextFilter
from private_gpt.server.ingest.model import IngestedDoc

if TYPE_CHECKING:
    from llama_index.core.schema import RelatedNodeInfo


class Chunk(BaseModel):
    object: Literal["context.chunk"]
    score: float = Field(examples=[0.023])
    document: IngestedDoc
    text: str = Field(examples=["Outbound sales increased 20%, driven by new leads."])
    previous_texts: list[str] | None = Field(
        default=None,
        examples=[["SALES REPORT 2023", "Inbound didn't show major changes."]],
    )
    next_texts: list[str] | None = Field(
        default=None,
        examples=[
            [
                "New leads came from Google Ads campaign.",
                "The campaign was run by the Marketing Department",
            ]
        ],
    )

    @classmethod
    def from_node(cls: type["Chunk"], node: NodeWithScore) -> "Chunk":
        doc_id = node.node.ref_doc_id if node.node.ref_doc_id is not None else "-"
        return cls(
            object="context.chunk",
            score=node.score or 0.0,
            document=IngestedDoc(
                object="ingest.document",
                doc_id=doc_id,
                doc_metadata=node.metadata,
            ),
            text=node.get_content(),
        )


@singleton
class ChunksService:
    @inject
    def __init__(
        self,
        llm_component: LLMComponent,
        vector_store_component: VectorStoreComponent,
        embedding_component: EmbeddingComponent,
        node_store_component: NodeStoreComponent,
    ) -> None:
        self.vector_store_component = vector_store_component
        self.llm_component = llm_component
        self.embedding_component = embedding_component
        self.storage_context = StorageContext.from_defaults(
            vector_store=vector_store_component.vector_store,
            docstore=node_store_component.doc_store,
            index_store=node_store_component.index_store,
        )

    def _get_sibling_nodes_text(
        self, node_with_score: NodeWithScore, related_number: int, forward: bool = True
    ) -> list[str]:
        # Follow next/prev relationships in the doc store to gather up to
        # `related_number` neighbouring chunks of the given node.
        explored_nodes_texts = []
        current_node = node_with_score.node
        for _ in range(related_number):
            explored_node_info: RelatedNodeInfo | None = (
                current_node.next_node if forward else current_node.prev_node
            )
            if explored_node_info is None:
                break
            explored_node = self.storage_context.docstore.get_node(
                explored_node_info.node_id
            )
            explored_nodes_texts.append(explored_node.get_content())
            current_node = explored_node
        return explored_nodes_texts

    def retrieve_relevant(
        self,
        text: str,
        context_filter: ContextFilter | None = None,
        limit: int = 10,
        prev_next_chunks: int = 0,
    ) -> list[Chunk]:
        index = VectorStoreIndex.from_vector_store(
            self.vector_store_component.vector_store,
            storage_context=self.storage_context,
            llm=self.llm_component.llm,
            embed_model=self.embedding_component.embedding_model,
            show_progress=True,
        )
        vector_index_retriever = self.vector_store_component.get_retriever(
            index=index, context_filter=context_filter, similarity_top_k=limit
        )
        nodes = vector_index_retriever.retrieve(text)
        # Sort by relevance score, highest first.
        nodes.sort(key=lambda n: n.score or 0.0, reverse=True)

        retrieved_nodes = []
        for node in nodes:
            chunk = Chunk.from_node(node)
            chunk.previous_texts = self._get_sibling_nodes_text(
                node, prev_next_chunks, False
            )
            chunk.next_texts = self._get_sibling_nodes_text(node, prev_next_chunks)
            retrieved_nodes.append(chunk)
        return retrieved_nodes

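To show how the pieces fit together outside FastAPI, here is a hedged sketch that resolves `ChunksService` through the dependency-injection container and queries it directly. The module path `private_gpt.server.chunks.chunks_service` and the `global_injector` import from `private_gpt.di` are assumptions based on the surrounding codebase, not on this diff.

```python
# Hedged sketch: the import paths below are assumptions about the repository layout.
from private_gpt.di import global_injector
from private_gpt.server.chunks.chunks_service import ChunksService

chunks_service = global_injector.get(ChunksService)

chunks = chunks_service.retrieve_relevant(
    text="outbound sales",
    limit=3,             # becomes similarity_top_k on the retriever
    prev_next_chunks=1,  # fetch one neighbouring chunk in each direction
)

for chunk in chunks:
    print(chunk.score, chunk.document.doc_id)
    print(chunk.previous_texts, chunk.text, chunk.next_texts)
```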

@@ -0,0 +1 @@
"""Deprecated Openai compatibility endpoint."""


@@ -0,0 +1,92 @@
from fastapi import APIRouter, Depends, Request
from pydantic import BaseModel
from starlette.responses import StreamingResponse

from private_gpt.open_ai.extensions.context_filter import ContextFilter
from private_gpt.open_ai.openai_models import (
    OpenAICompletion,
    OpenAIMessage,
)
from private_gpt.server.chat.chat_router import ChatBody, chat_completion
from private_gpt.server.utils.auth import authenticated

completions_router = APIRouter(prefix="/v1", dependencies=[Depends(authenticated)])


class CompletionsBody(BaseModel):
    prompt: str
    system_prompt: str | None = None
    use_context: bool = False
    context_filter: ContextFilter | None = None
    include_sources: bool = True
    stream: bool = False

    model_config = {
        "json_schema_extra": {
            "examples": [
                {
                    "prompt": "How do you fry an egg?",
                    "system_prompt": "You are a rapper. Always answer with a rap.",
                    "stream": False,
                    "use_context": False,
                    "include_sources": False,
                }
            ]
        }
    }


@completions_router.post(
    "/completions",
    response_model=None,
    summary="Completion",
    responses={200: {"model": OpenAICompletion}},
    tags=["Contextual Completions"],
    openapi_extra={
        "x-fern-streaming": {
            "stream-condition": "stream",
            "response": {"$ref": "#/components/schemas/OpenAICompletion"},
            "response-stream": {"$ref": "#/components/schemas/OpenAICompletion"},
        }
    },
)
def prompt_completion(
    request: Request, body: CompletionsBody
) -> OpenAICompletion | StreamingResponse:
    """We recommend most users use our Chat completions API.

    Given a prompt, the model will return one predicted completion.

    Optionally include a `system_prompt` to influence the way the LLM answers.

    If `use_context` is set to `true`, the model will use context coming from the
    ingested documents to create the response. The documents being used can be
    filtered using the `context_filter` and passing the document IDs to be used.
    Ingested document IDs can be found using the `/ingest/list` endpoint. If you
    want all ingested documents to be used, remove `context_filter` altogether.

    When using `'include_sources': true`, the API will return the source Chunks used
    to create the response, which come from the context provided.

    When using `'stream': true`, the API will return data chunks following [OpenAI's
    streaming model](https://platform.openai.com/docs/api-reference/chat/streaming):
    ```
    {"id":"12345","object":"completion.chunk","created":1694268190,
    "model":"private-gpt","choices":[{"index":0,"delta":{"content":"Hello"},
    "finish_reason":null}]}
    ```
    """
    messages = [OpenAIMessage(content=body.prompt, role="user")]
    # If system prompt is passed, create a fake message with the system prompt.
    if body.system_prompt:
        messages.insert(0, OpenAIMessage(content=body.system_prompt, role="system"))

    # Completions are served by re-using the chat completions pipeline.
    chat_body = ChatBody(
        messages=messages,
        use_context=body.use_context,
        stream=body.stream,
        include_sources=body.include_sources,
        context_filter=body.context_filter,
    )
    return chat_completion(request, chat_body)

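As a usage note, the sketch below exercises the endpoint both with and without streaming. The `/v1/completions` path follows from the router prefix and route above; the host and port are an assumed default for a local instance, and the streaming loop assumes the server emits `data: ...` server-sent-event lines ending with `data: [DONE]`, matching the OpenAI streaming model referenced in the docstring.

```python
# Sketch: the /v1/completions path is confirmed by the router above; the host, port,
# and SSE framing ("data: ..." lines) are assumptions about a default local setup.
import json
import requests

url = "http://localhost:8001/v1/completions"
body = {
    "prompt": "How do you fry an egg?",
    "system_prompt": "You are a rapper. Always answer with a rap.",
    "use_context": False,
    "stream": False,
}

# Non-streaming: a single OpenAICompletion-shaped JSON object comes back.
completion = requests.post(url, json=body, timeout=120).json()
print(completion["choices"][0])

# Streaming: deltas arrive chunk by chunk, as in the docstring example.
with requests.post(url, json={**body, "stream": True}, stream=True, timeout=120) as resp:
    for line in resp.iter_lines():
        if line.startswith(b"data: ") and line != b"data: [DONE]":
            chunk = json.loads(line[len(b"data: "):])
            print(chunk["choices"][0]["delta"].get("content", ""), end="", flush=True)
```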
Some files were not shown because too many files have changed in this diff.