Commit Graph

58 Commits

Author SHA1 Message Date
Javier Martinez
23704d23ad
feat: add new cuda profile 2024-08-05 17:48:14 +02:00
Javier Martinez
f09f6dd255
fix: add built image from DockerHub (#2042)
* chore: update docker-compose with profiles

* docs: add quick start doc

* chore: generate docker release when new version is released

* chore: add dockerhub image in docker-compose

* docs: update quickstart with local/remote images

* chore: update docker tag

* chore: refactor dockerfile names

* chore: update docker-compose names

* docs: update llamacpp naming

* fix: naming

* docs: fix llamacpp command
2024-08-05 17:15:38 +02:00
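As a rough illustration of the profile-based layout this change describes (the service names, image, and Dockerfile names below are placeholders, not the exact ones in the repo):

```yaml
# Hypothetical docker-compose.yaml excerpt using Compose profiles.
services:
  private-gpt-prebuilt:
    image: some-dockerhub-org/private-gpt:latest   # prebuilt image pulled from DockerHub
    profiles: ["remote"]                           # enabled with: docker compose --profile remote up
    ports:
      - "8001:8001"
  private-gpt-llamacpp:
    build:
      dockerfile: Dockerfile.llamacpp              # placeholder Dockerfile name (built locally)
    profiles: ["local"]
```

Picking a profile with `docker compose --profile <name> up` then selects between the prebuilt DockerHub image and a locally built one.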
Javier Martinez
dae0727a1b
fix(deploy): improve Docker-Compose and quickstart on Docker (#2037)
* chore: update docker-compose with profiles

* docs: add quick start doc
2024-08-05 16:28:19 +02:00
Javier Martinez
50b3027a24
docs: update docs and capture (#2029)
* docs: update Readme

* style: refactor image

* docs: change important to tip
2024-08-01 10:01:22 +02:00
Javier Martinez
8119842ae6
feat(recipe): add our first recipe Summarize (#2028)
* feat: add summary recipe

* test: add summary tests

* docs: move all recipes docs

* docs: add recipes and summarize doc

* docs: update openapi reference

* refactor: split method into two methods (summary)

* feat: add initial summarize ui

* feat: add mode explanation

* fix: mypy

* feat: allow configuring the async property in summarize

* refactor: move modes to enum and update mode explanations

* docs: fix url

* docs: remove list-llm pages

* docs: remove double header

* fix: summary description
2024-07-31 16:53:27 +02:00
Javier Martinez
40638a18a5
fix: unify embedding models (#2027)
* feat: unify embedding model to nomic

* docs: add embedding dimensions mismatch

* docs: fix fern
2024-07-31 14:35:46 +02:00
Javier Martinez
9027d695c1
feat: make llama3.1 as default (#2022)
* feat: change ollama default model to llama3.1

* chore: bump versions

* feat: Change default model in local mode to llama3.1

* chore: make sure the latest poetry version is used

* fix: mypy

* fix: do not add BOS (with the latest llamacpp-python version)
2024-07-31 14:35:36 +02:00
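In settings.yaml terms, the default-model switch amounts to something like the following; the `ollama` key names are assumptions based on the commit messages rather than an exact copy of the shipped config:

```yaml
llm:
  mode: ollama
ollama:
  llm_model: llama3.1               # new default LLM
  api_base: http://localhost:11434  # assumed default Ollama endpoint
```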
Javier Martinez
e54a8fe043
fix: prevent ingestion of local files (by default) (#2010)
* feat: prevent local ingestion (by default) and add a whitelist

* docs: add local ingestion warning

* docs: add missing comment

* fix: update exception error

* fix: black
2024-07-31 14:33:46 +02:00
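A minimal sketch of how the ingestion guard and whitelist could be expressed in settings.yaml, assuming key names along these lines:

```yaml
data:
  local_ingestion:
    enabled: false            # local file ingestion is off by default
    allow_ingest_from:        # explicit whitelist of folders that may still be ingested
      - local_data/allowed
```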
Javier Martinez
20bad17c98
feat(llm): autopull ollama models (#2019)
* chore: update ollama (llm)

* feat: allow autopulling of ollama models

* fix: mypy

* chore: always install the ollama client

* refactor: move check-connection and pull-ollama methods to utils

* docs: update ollama config with autopulling info
2024-07-29 13:25:42 +02:00
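The autopull behaviour presumably boils down to a single settings flag; the key name below is an assumption:

```yaml
ollama:
  autopull_models: true   # assumed flag: pull missing models from the Ollama registry on startup
```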
Iván Martínez
05a986231c
Add proper param to demo urls (#2007) 2024-07-22 14:44:03 +02:00
Javier Martinez
b62669784b
docs: update welcome page (#2004) 2024-07-18 14:42:39 +02:00
Jackson
43cc31f740
feat(vectordb): Milvus vector db Integration (#1996)
* integrate Milvus into Private GPT

* adjust milvus settings

* update doc info and reformat

* adjust milvus initialization

* adjust import error

* minor update

* adjust format

* adjust the db storing path

* update doc
2024-07-18 10:55:45 +02:00
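An illustrative settings.yaml fragment for the Milvus backend; the section and key names are assumptions, and the URI/collection values are placeholders:

```yaml
vectorstore:
  database: milvus
milvus:
  uri: local_data/private_gpt/milvus/milvus_local.db  # placeholder local storage path
  collection_name: private_gpt                        # placeholder collection name
```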
Javier Martinez
4523a30c8f
feat(docs): update documentation and fix preview-docs (#2000)
* docs: add missing configurations

* docs: replace HF embeddings with ollama

* docs: add disclaimer about Gradio UI

* docs: improve readability in concepts

* docs: reorder `Fully Local Setups`

* docs: improve setup instructions

* docs: avoid duplicate documentation and use a table to show the different options

* docs: rename privateGpt to PrivateGPT

* docs: update ui image

* docs: remove useless header

* docs: convert ingestion disclaimers to alerts

* docs: add UI alternatives

* docs: reference UI alternatives in disclaimers

* docs: fix table

* chore: update doc preview version

* chore: add permissions

* chore: remove useless line

* docs: fixes

...
2024-07-18 10:06:51 +02:00
Javier Martinez
01b7ccd064
fix(config): make tokenizer optional and include a troubleshooting doc (#1998)
* docs: add troubleshooting

* fix: pass HF token to setup script and prevent to download tokenizer when it is empty

* fix: improve log and disable specific tokenizer by default

* chore: change HF_TOKEN environment to be aligned with default config

* fix: mypy
2024-07-17 10:06:27 +02:00
Javier Martinez
15f73dbc48
docs: update repo links, citations (#1990)
* docs: update project links

...

* docs: update citation
2024-07-09 10:03:57 +02:00
fern
187bc9320e
(feat): add github button (#1989)
Co-authored-by: chdeskur <chdeskur@gmail.com>
2024-07-09 08:48:47 +02:00
Marco Braga
dde02245bc
fix(docs): Fix concepts.mdx referencing to installation page (#1779)
* Fix/update concepts.mdx reference to the installation page

The link for `/installation` is broken on the "Main Concepts" page.

The correct path would be `./installation` or maybe `/installation/getting-started/installation`.

* fix: docs

---------

Co-authored-by: Javier Martinez <javiermartinezalvarez98@gmail.com>
2024-07-08 16:19:50 +02:00
Mart
067a5f144c
feat(docs): Fix setup docu (#1926)
* Update settings.mdx

* docs: add cmd

---------

Co-authored-by: Javier Martinez <javiermartinezalvarez98@gmail.com>
2024-07-08 16:19:16 +02:00
Proger666
2612928839
feat(vectorstore): Add clickhouse support as vector store (#1883)
* Added ClickHouse vector store support

* port fix

* updated lock file

* fix: mypy

* fix: mypy

---------

Co-authored-by: Valery Denisov <valerydenisov@double.cloud>
Co-authored-by: Javier Martinez <javiermartinezalvarez98@gmail.com>
2024-07-08 16:18:22 +02:00
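A hypothetical settings.yaml fragment for the ClickHouse vector store; the key names and connection values are illustrative only:

```yaml
vectorstore:
  database: clickhouse
clickhouse:
  host: localhost
  port: 8443          # placeholder port; adjust to your ClickHouse deployment
  username: default
  password: ""
```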
uw4
fc13368bc7
feat(llm): Support for Google Gemini LLMs and Embeddings (#1965)
* Support for Google Gemini LLMs and Embeddings

Initial support for Gemini; enables usage of Google LLMs and embedding models (see settings-gemini.yaml)

Install via
poetry install --extras "llms-gemini embeddings-gemini"

Notes:
* had to bump llama-index-core to later version that supports Gemini
* poetry --no-update did not work: Gemini/llama_index seem to require more (transitive) dependency updates to make it work...

* fix: crash when gemini is not selected

* docs: add gemini llm

---------

Co-authored-by: Javier Martinez <javiermartinezalvarez98@gmail.com>
2024-07-08 11:47:36 +02:00
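Based on the commit's mention of settings-gemini.yaml and the gemini extras, the configuration could look roughly like this; the key and model names are assumptions:

```yaml
llm:
  mode: gemini
embedding:
  mode: gemini
gemini:
  api_key: ${GOOGLE_API_KEY:}            # assumed environment-variable expansion
  model: models/gemini-pro               # placeholder model identifiers
  embedding_model: models/embedding-001
```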
Shengsheng Huang
19a7c065ef
feat(docs): update doc for ipex-llm (#1968) 2024-07-08 09:42:44 +02:00
Fran García
d13029a046
feat(docs): add privategpt-ts sdk (#1924) 2024-05-10 14:13:15 +02:00
Daniel Gallego Vico
c1802e7cf0
fix(docs): Update installation.mdx (#1866)
Update repo url
2024-04-19 17:10:58 +02:00
Иван
8a836e4651
feat(docs): Add guide Llama-CPP Linux AMD GPU support (#1782) 2024-04-02 16:55:05 +02:00
machatschek
83adc12a8e
feat(RAG): Introduce SentenceTransformer Reranker (#1810) 2024-04-02 10:29:51 +02:00
Iván Martínez
572518143a
feat(docs): Feature/upgrade docs (#1741)
* Upgrade fern version

* Add info about SDKs
2024-03-19 21:26:53 +01:00
Brett England
134fc54d7d
feat(ingest): Created a faster ingestion mode - pipeline (#1750)
* Unify pgvector and postgres connection settings

* Remove local changes

* Update file pgvector->postgres

* postgresql should be postgres

* Adding pipeline ingestion mode

* Disable Hugging Face parallelism. Continue on file-to-doc transform failure

* Semaphore to limit docq async workers. ETA reporting
2024-03-19 21:24:46 +01:00
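A minimal sketch of selecting the new pipeline mode in settings.yaml, assuming an `embedding.ingest_mode` key and a worker-count setting for the semaphore-limited async workers:

```yaml
embedding:
  ingest_mode: pipeline   # assumed key for the new, faster ingestion mode
  count_workers: 4        # assumed key limiting the async workers
```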
Otto L
1efac6a3fe
feat(llm - embed): Add support for Azure OpenAI (#1698)
* Add support for Azure OpenAI

* fix: wrong default api_version

Should be dashes instead of underscores.
see: https://learn.microsoft.com/en-us/azure/ai-services/openai/reference

* fix: code styling

applied "make check" changes

* refactor: extend documentation

* mention azopenai as available option and extras
* add recommended section
* include settings-azopenai.yaml configuration file

* fix: documentation
2024-03-15 16:49:50 +01:00
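A sketch of what settings-azopenai.yaml might contain, with key and deployment names as assumptions; note the dashed api_version format the fix above refers to:

```yaml
llm:
  mode: azopenai
embedding:
  mode: azopenai
azopenai:
  api_key: ${AZ_OPENAI_API_KEY:}                      # assumed env-var expansion
  azure_endpoint: https://<your-resource>.openai.azure.com/
  api_version: "2023-05-15"                           # dashes, not underscores
  llm_deployment_name: gpt-35-turbo                   # placeholder deployment names
  embedding_deployment_name: text-embedding-ada-002
```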
Brett England
258d02d87c
fix(docs): Minor documentation amendment (#1739)
* Unify pgvector and postgres connection settings

* Remove local changes

* Update file pgvector->postgres

* postgresql should be postgres
2024-03-15 16:36:32 +01:00
Brett England
63de7e4930
feat: unify settings for vector and nodestore connections to PostgreSQL (#1730)
* Unify pgvector and postgres connection settings

* Remove local changes

* Update file pgvector->postgres
2024-03-15 09:55:17 +01:00
Brett England
68b3a34b03
feat(nodestore): add Postgres for the doc and index store (#1706)
* Adding Postgres for the doc and index store

* Adding documentation. Rename postgres database local->simple. Postgres storage dependencies

* Update documentation for postgres storage

* Renaming feature to nodestore

* update docstore -> nodestore in doc

* missed some docstore changes in doc

* Updated poetry.lock

* Formatting updates to pass ruff/black checks

* Correction to unreachable code!

* Format adjustment to pass black test

* Adjust extra inclusion name for vector pg

* extra dep change for pg vector

* storage-postgres -> storage-nodestore-postgres

* Hash change on poetry lock
2024-03-14 17:12:33 +01:00
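Taken together with the connection-settings unification above, the Postgres-backed stores could be configured roughly as follows; the key names are assumptions:

```yaml
nodestore:
  database: postgres        # doc/index store backed by PostgreSQL
vectorstore:
  database: postgres        # vector store sharing the same connection settings
postgres:
  host: localhost
  port: 5432
  database: postgres
  user: postgres
  password: postgres
  schema_name: private_gpt  # assumed key for schema selection
```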
Andrew Jiang
84ad16af80
feat(docs): upgrade fern (#1596) 2024-03-11 23:02:56 +01:00
Iván Martínez
45f05711eb
feat: Upgrade to LlamaIndex to 0.10 (#1663)
* Extract optional dependencies

* Separate local mode into llms-llama-cpp and embeddings-huggingface for clarity

* Support Ollama embeddings

* Upgrade to llamaindex 0.10.14. Remove legacy use of ServiceContext in ContextChatEngine

* Fix vector retriever filters
2024-03-06 17:51:30 +01:00
TQ
cd40e3982b
feat(Vector): support pgvector (#1624) 2024-02-20 15:29:26 +01:00
Ygal Blum
6bbec79583
feat(llm): Add support for Ollama LLM (#1526) 2024-02-09 15:50:50 +01:00
Iván Martínez
24fae660e6
feat: Add stream information to generate SDKs (#1569) 2024-02-02 16:14:22 +01:00
CognitiveTech
e326126d0d
feat: add mistral + chatml prompts (#1426) 2024-01-16 22:51:14 +01:00
Guido Schulz
0a89d76cc5
fix(docs): Update quickstart doc and set version in pyproject.toml to 0.2.0 2023-12-26 13:09:31 +01:00
Matthew Hill
2d27a9f956
feat(llm): Add openailike llm mode (#1447)
This mode behaves the same as the openai mode, except that it allows setting custom models not
supported by OpenAI. It can be used with any tool that serves models from an OpenAI-compatible API.

Implements #1424
2023-12-26 10:26:08 +01:00
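An illustrative configuration for the openailike mode, pointing PrivateGPT at any OpenAI-compatible server; the endpoint, key, and model values are placeholders:

```yaml
llm:
  mode: openailike
openai:
  api_base: http://localhost:8000/v1   # any OpenAI-compatible server
  api_key: EMPTY                       # placeholder; many compatible servers ignore the key
  model: my-custom-model               # a model name OpenAI itself does not offer
```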
Iván Martínez
6eeb95ec7f
feat(API): Ingest plain text (#1417)
* Add ingest/text route to ingest plain text

* Add new ingest text test and adapt ingest/file ones

* Include new API in docs

* Remove duplicated logic
2023-12-18 21:47:05 +01:00
Iván Martínez
8ec7cf49f4
feat(settings): Update default model to TheBloke/Mistral-7B-Instruct-v0.2-GGUF (#1415)
* Update LlamaCPP dependency

* Default to TheBloke/Mistral-7B-Instruct-v0.2-GGUF

* Fix API docs
2023-12-17 16:11:08 +01:00
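A sketch of the resulting default in the local (llama.cpp) settings; the section and key names, and the exact GGUF file, are assumptions:

```yaml
local:
  llm_hf_repo_id: TheBloke/Mistral-7B-Instruct-v0.2-GGUF
  llm_hf_model_file: mistral-7b-instruct-v0.2.Q4_K_M.gguf  # assumed quantization file
```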
Eliott Bouhana
4e496e970a
docs: remove misleading comment about pgpt working with python 3.12 (#1394)
I was misled into believing I could install using python 3.12 whereas the pyproject.toml explicitly states otherwise. This PR only removes this comment to make sure other people are not also trapped 😄
2023-12-15 21:35:02 +01:00
Federico Grandi
1d28ae2915
docs: fix minor capitalization typo (#1392) 2023-12-12 20:31:38 +01:00
3ly-13
145f3ec9f4
feat(ui): Allows User to Set System Prompt via "Additional Options" in Chat Interface (#1353) 2023-12-10 19:45:14 +01:00
3ly-13
a072a40a7c
Allow setting OpenAI model in settings (#1386)
feat(settings): Allow setting openai model to be used. Default to GPT 3.5
2023-12-09 20:13:00 +01:00
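The OpenAI model override likely reduces to a single key; a hedged sketch:

```yaml
llm:
  mode: openai
openai:
  api_key: ${OPENAI_API_KEY:}   # assumed env-var expansion
  model: gpt-3.5-turbo          # default per this change; override to use another model
```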
Iván Martínez
f235c50be9
Delete old docs (#1384) 2023-12-08 22:39:23 +01:00
EEmlan
9302620eac
Adding German-speaking model to documentation (#1374) 2023-12-08 11:26:25 +01:00
lopagela
56af625d71
Fix the parallel ingestion mode, and make it available through conf (#1336)
* Fix the parallel ingestion mode, and make it available through conf

Also updated the documentation to show how to configure the ingest mode.

* PR feedback: redirect to documentation
2023-11-30 11:41:55 +01:00
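A minimal sketch of enabling the repaired parallel mode through configuration, again assuming an `embedding.ingest_mode` key:

```yaml
embedding:
  ingest_mode: parallel   # assumed key; the default is presumably a simple/sequential mode
  count_workers: 2
```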
Francisco García Sierra
b7ca7d35a0
Update ingest api docs with Windows support (#1289) 2023-11-29 20:56:37 +01:00
ishaandatta
28d03fdda8
Adding working combination of LLM and Embedding Model to recipes (#1315)
Co-authored-by: ishaandatta <ishaandatta50@gmail.com>
2023-11-29 20:54:22 +01:00