mirror of https://github.com/imartinez/privateGPT.git synced 2025-05-07 07:47:15 +00:00
Commit Graph

319 Commits

Author SHA1 Message Date
Javier Martinez
b1acf9dc2c
fix: publish image name () 2024-08-07 17:39:32 +02:00
Javier Martinez
4ca6d0cb55
fix: add numpy issue to troubleshooting ()
* docs: add numpy issue to troubleshooting

* fix: troubleshooting link

...
2024-08-07 12:16:03 +02:00
Javier Martinez
b16abbefe4
fix: update matplotlib to 3.9.1-post1 to fix win install
* chore: block matplotlib to fix installation on Windows machines

* chore: remove workaround, just update poetry.lock

* fix: update matplotlib to the latest version
2024-08-07 11:26:42 +02:00
github-actions[bot]
ca2b8da69c
chore(main): release 0.6.1 ()
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2024-08-05 17:17:34 +02:00
Javier Martinez
f09f6dd255
fix: add built image from DockerHub ()
* chore: update docker-compose with profiles

* docs: add quick start doc

* chore: generate docker release when new version is released

* chore: add dockerhub image in docker-compose

* docs: update quickstart with local/remote images

* chore: update docker tag

* chore: refactor dockerfile names

* chore: update docker-compose names

* docs: update llamacpp naming

* fix: naming

* docs: fix llamacpp command
2024-08-05 17:15:38 +02:00
Liam Dowd
1c665f7900
fix: Adding azopenai to model list ()
Fixing the error I encountered while using the azopenai mode
2024-08-05 16:30:10 +02:00
Javier Martinez
1d4c14d7a3
fix(deploy): generate docker release when new version is released () 2024-08-05 16:28:19 +02:00
Javier Martinez
dae0727a1b
fix(deploy): improve Docker-Compose and quickstart on Docker ()
* chore: update docker-compose with profiles

* docs: add quick start doc
2024-08-05 16:28:19 +02:00
github-actions[bot]
6674b46fea
chore(main): release 0.6.0 ()
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2024-08-02 11:28:22 +02:00
Javier Martinez
e44a7f5773
chore: bump version () 2024-08-02 11:26:03 +02:00
Javier Martinez
cf61bf780f
feat(llm): add progress bar when ollama is pulling models ()
* fix: add ollama progress bar when pulling models

* feat: add ollama queue

* fix: mypy
2024-08-01 19:14:26 +02:00
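
The entry above adds a progress bar while Ollama pulls a model. A minimal sketch of reading streaming pull progress from the Ollama Python client (the host, model name, and dict-style chunk access are assumptions about the client version in use, not the project's actual code):

```python
# Sketch only: stream pull progress from a local Ollama server.
from ollama import Client

client = Client(host="http://localhost:11434")  # assumed default host

for chunk in client.pull("llama3.1", stream=True):
    # Older clients yield dicts; newer ones expose the same fields as attributes.
    status = chunk.get("status", "")
    total = chunk.get("total")
    completed = chunk.get("completed")
    if total and completed:
        print(f"{status}: {completed / total:.0%}")
    else:
        print(status)
```
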
Javier Martinez
50b3027a24
docs: update docs and capture ()
* docs: update Readme

* style: refactor image

* docs: change important to tip
2024-08-01 10:01:22 +02:00
Javier Martinez
54659588b5
fix: nomic embeddings ()
* fix: allow configuring trust_remote_code

based on: https://github.com/zylon-ai/private-gpt/issues/1893#issuecomment-2118629391

* fix: nomic hf embeddings
2024-08-01 09:43:30 +02:00
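
For context on the fix above: the Nomic embedding model ships custom code, so loading it through Hugging Face requires `trust_remote_code`. A sketch using llama-index's HuggingFace wrapper (the model name is illustrative; the commit makes the flag configurable in settings rather than hard-coding it):

```python
# Sketch only, not the project's code path.
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

embed_model = HuggingFaceEmbedding(
    model_name="nomic-ai/nomic-embed-text-v1.5",  # assumed model
    trust_remote_code=True,  # required because the model defines custom code
)
print(len(embed_model.get_text_embedding("hello world")))
```
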
Javier Martinez
8119842ae6
feat(recipe): add our first recipe Summarize ()
* feat: add summary recipe

* test: add summary tests

* docs: move all recipes docs

* docs: add recipes and summarize doc

* docs: update openapi reference

* refactor: split method into two methods (summary)

* feat: add initial summarize ui

* feat: add mode explanation

* fix: mypy

* feat: allow configuring the async property in summarize

* refactor: move modes to enum and update mode explanations

* docs: fix url

* docs: remove list-llm pages

* docs: remove double header

* fix: summary description
2024-07-31 16:53:27 +02:00
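
A hypothetical client-side sketch of calling the Summarize recipe added above. The endpoint path and request fields are assumptions, not the documented API; the generated OpenAPI reference mentioned in the commit is the authoritative schema:

```python
# Hypothetical request; URL, path and fields ("text", "stream") are assumed.
import requests

resp = requests.post(
    "http://localhost:8001/v1/summarize",
    json={"text": "Long document to condense...", "stream": False},
    timeout=300,
)
resp.raise_for_status()
print(resp.json())
```
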
Javier Martinez
40638a18a5
fix: unify embedding models ()
* feat: unify embedding model to nomic

* docs: add embedding dimensions mismatch

* docs: fix fern
2024-07-31 14:35:46 +02:00
Javier Martinez
9027d695c1
feat: make llama3.1 as default ()
* feat: change ollama default model to llama3.1

* chore: bump versions

* feat: Change default model in local mode to llama3.1

* chore: make sure the latest poetry version is used

* fix: mypy

* fix: do not add BOS (with the latest llamacpp-python version)
2024-07-31 14:35:36 +02:00
Javier Martinez
e54a8fe043
fix: prevent ingesting local files (by default) ()
* feat: prevent local ingestion (by default) and add a white-list

* docs: add local ingestion warning

* docs: add missing comment

* fix: update exception error

* fix: black
2024-07-31 14:33:46 +02:00
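
A hypothetical sketch of the white-list idea described above: local ingestion is rejected unless the path falls under an explicitly allowed folder. The names (`allowed_local_folders`, `validate_local_path`) are illustrative only, not the project's settings keys:

```python
# Hypothetical sketch of a local-ingestion white-list check.
from pathlib import Path

allowed_local_folders: list[str] = ["/data/ingest"]  # empty list would deny all

def validate_local_path(path: str) -> Path:
    resolved = Path(path).resolve()
    for folder in allowed_local_folders:
        if resolved.is_relative_to(Path(folder).resolve()):
            return resolved
    raise ValueError(f"Local ingestion of {resolved} is not allowed")
```
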
Javier Martinez
1020cd5328
fix: light mode () 2024-07-31 12:59:31 +02:00
Quentin McGaw
65c5a1708b
chore(docker): dockerfiles improvements and fixes ()
* `UID` and `GID` build arguments for `worker` user

* `POETRY_EXTRAS` build argument with default values

* Copy `Makefile` for `make ingest` command

* Do NOT copy markdown files
I doubt anyone reads a markdown file within a Docker image

* Fix PYTHONPATH value

* Set home directory to `/home/worker` when creating user

* Combine `ENV` instructions together

* Define environment variables with their defaults
- For documentation purposes
- Reflect defaults set in settings-docker.yml

* `PGPT_EMBEDDING_MODE` to define embedding mode

* Remove ineffective `python3 -m pipx ensurepath`

* Use `&&` instead of `;` to chain commands to detect failure better

* Add `--no-root` flag to poetry install commands

* Set PGPT_PROFILES to docker

* chore: remove envs

* chore: update to use ollama in docker-compose

* chore: don't copy makefile

* chore: don't copy fern

* fix: tiktoken cache

* fix: docker compose port

* fix: ffmpy dependency ()

* fix: ffmpy dependency

* fix: pin ffmpy to a commit SHA

* feat(llm): autopull ollama models ()

* chore: update ollama (llm)

* feat: allow autopulling ollama models

* fix: mypy

* chore: always install ollama client

* refactor: move the ollama connection-check and pull methods to utils

* docs: update ollama config with autopulling info

...

* chore: autopull ollama models

* chore: add GID/UID comment

...

---------

Co-authored-by: Javier Martinez <javiermartinezalvarez98@gmail.com>
2024-07-30 17:59:38 +02:00
Robert Hirsch
d080969407
added llama3 prompt ()
* added llama3 prompt

* more fixes to pass tests; changed type VectorStore -> BasePydanticVectorStore, see https://github.com/run-llama/llama_index/blob/main/CHANGELOG.md#2024-05-14

* fix: new llama3 prompt

---------

Co-authored-by: Javier Martinez <javiermartinezalvarez98@gmail.com>
2024-07-29 17:28:00 +02:00
Javier Martinez
d4375d078f
fix(ui): gradio bug fixes ()
* fix: when two user messages were sent

* fix: add source divider

* fix: add favicon

* fix: add zylon link

* refactor: update label
2024-07-29 16:48:16 +02:00
Javier Martinez
20bad17c98
feat(llm): autopull ollama models ()
* chore: update ollama (llm)

* feat: allow autopulling ollama models

* fix: mypy

* chore: always install ollama client

* refactor: move the ollama connection-check and pull methods to utils

* docs: update ollama config with autopulling info
2024-07-29 13:25:42 +02:00
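
A sketch of the autopull behaviour described above: check that the Ollama server is reachable and pull a model only if it is not already available locally. Function names and host are assumptions, not the project's utils:

```python
# Sketch only: reachability check plus pull-if-missing.
from ollama import Client

def check_connection(client: Client) -> bool:
    # Server is considered reachable if listing local models succeeds.
    try:
        client.list()
        return True
    except Exception:
        return False

def ensure_model(client: Client, model: str) -> None:
    try:
        client.show(model)  # raises if the model is not present locally
    except Exception:
        client.pull(model)

client = Client(host="http://localhost:11434")
if check_connection(client):
    ensure_model(client, "llama3.1")
```
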
Javier Martinez
dabf556dae
fix: ffmpy dependency ()
* fix: ffmpy dependency

* fix: pin ffmpy to a commit SHA
2024-07-29 11:56:57 +02:00
Iván Martínez
05a986231c
Add proper param to demo urls () 2024-07-22 14:44:03 +02:00
Javier Martinez
b62669784b
docs: update welcome page () 2024-07-18 14:42:39 +02:00
Javier Martinez
2c78bb2958
docs: add PR and issue templates ()
* chore: add pull request template

* chore: add issue templates

* chore: require more information in bugs
2024-07-18 12:56:10 +02:00
Iván Martínez
90d211c5cd
Update README.md ()
* Update README.md

Remove the outdated contact form and point to Zylon website for those looking for a ready-to-use enterprise solution built on top of PrivateGPT

* Update README.md

Update text to address the comments

* Update README.md

Improve text
2024-07-18 12:11:24 +02:00
Jackson
43cc31f740
feat(vectordb): Milvus vector db Integration ()
* integrate Milvus into Private GPT

* adjust milvus settings

* update doc info and reformat

* adjust milvus initialization

* adjust import error

* minor update

* adjust format

* adjust the db storing path

* update doc
2024-07-18 10:55:45 +02:00
Javier Martinez
4523a30c8f
feat(docs): update documentation and fix preview-docs ()
* docs: add missing configurations

* docs: replace HF embeddings with ollama

* docs: add disclaimer about Gradio UI

* docs: improve readability in concepts

* docs: reorder `Fully Local Setups`

* docs: improve setup instructions

* docs: prevent duplicate documentation and use a table to show different options

* docs: rename privateGpt to PrivateGPT

* docs: update ui image

* docs: remove useless header

* docs: convert ingestion disclaimers to alerts

* docs: add UI alternatives

* docs: reference UI alternatives in disclaimers

* docs: fix table

* chore: update doc preview version

* chore: add permissions

* chore: remove useless line

* docs: fixes

...
2024-07-18 10:06:51 +02:00
Javier Martinez
01b7ccd064
fix(config): make tokenizer optional and include a troubleshooting doc ()
* docs: add troubleshooting

* fix: pass HF token to setup script and prevent downloading the tokenizer when it is empty

* fix: improve log and disable specific tokenizer by default

* chore: change HF_TOKEN environment to be aligned with default config

* fix: mypy
2024-07-17 10:06:27 +02:00
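
A sketch, assuming the `transformers` library, of the behaviour described above: pass an HF token when set, skip downloading when no tokenizer is configured, and fall back to a default if the requested one cannot be fetched. Names and the fallback model are illustrative:

```python
# Sketch only: optional tokenizer with HF token and a fallback.
import os
from transformers import AutoTokenizer

def load_tokenizer(name: str | None, default: str = "gpt2"):
    token = os.environ.get("HF_TOKEN") or None
    if not name:
        return AutoTokenizer.from_pretrained(default)
    try:
        return AutoTokenizer.from_pretrained(name, token=token)
    except Exception:
        # e.g. a gated repo without a valid token; keep going with the default
        return AutoTokenizer.from_pretrained(default)
```
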
Javier Martinez
15f73dbc48
docs: update repo links, citations ()
* docs: update project links

...

* docs: update citation
2024-07-09 10:03:57 +02:00
fern
187bc9320e
(feat): add github button ()
Co-authored-by: chdeskur <chdeskur@gmail.com>
2024-07-09 08:48:47 +02:00
Marco Braga
dde02245bc
fix(docs): Fix concepts.mdx reference to the installation page ()
* Fix/update concepts.mdx reference to the installation page

The link for `/installation` is broken in the "Main Concepts" page.

The correct path would be `./installation` or maybe `/installation/getting-started/installation`

* fix: docs

---------

Co-authored-by: Javier Martinez <javiermartinezalvarez98@gmail.com>
2024-07-08 16:19:50 +02:00
Mart
067a5f144c
feat(docs): Fix setup docs ()
* Update settings.mdx

* docs: add cmd

---------

Co-authored-by: Javier Martinez <javiermartinezalvarez98@gmail.com>
2024-07-08 16:19:16 +02:00
Proger666
2612928839
feat(vectorstore): Add clickhouse support as vector store ()
* Added ClickHouse vector store support

* port fix

* updated lock file

* fix: mypy

* fix: mypy

---------

Co-authored-by: Valery Denisov <valerydenisov@double.cloud>
Co-authored-by: Javier Martinez <javiermartinezalvarez98@gmail.com>
2024-07-08 16:18:22 +02:00
uw4
fc13368bc7
feat(llm): Support for Google Gemini LLMs and Embeddings ()
* Support for Google Gemini LLMs and Embeddings

Initial support for Gemini, enables usage of Google LLMs and embedding models (see settings-gemini.yaml)

Install via
poetry install --extras "llms-gemini embeddings-gemini"

Notes:
* had to bump llama-index-core to a later version that supports Gemini
* poetry --no-update did not work: Gemini/llama_index seem to require more (transitive) updates to make it work...

* fix: crash when gemini is not selected

* docs: add gemini llm

---------

Co-authored-by: Javier Martinez <javiermartinezalvarez98@gmail.com>
2024-07-08 11:47:36 +02:00
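
A sketch of using Gemini through llama-index after installing the extras shown above. Package and class names follow llama-index's Gemini integrations; the model names and environment variable are assumptions, not values from settings-gemini.yaml:

```python
# Sketch only: Gemini LLM and embeddings via llama-index.
import os
from llama_index.llms.gemini import Gemini
from llama_index.embeddings.gemini import GeminiEmbedding

llm = Gemini(model="models/gemini-pro", api_key=os.environ["GOOGLE_API_KEY"])
embed_model = GeminiEmbedding(model_name="models/embedding-001")

print(llm.complete("Say hello in one word.").text)
```
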
Shengsheng Huang
19a7c065ef
feat(docs): update doc for ipex-llm () 2024-07-08 09:42:44 +02:00
Javier Martinez
b687dc8524
feat: bump dependencies () 2024-07-05 16:31:13 +02:00
Pablo Orgaz
c7212ac7cc
fix(LLM): mistral ignoring assistant messages ()
* fix: mistral ignoring assistant messages

* fix: typing

* fix: fix tests
2024-05-30 15:41:16 +02:00
Yevhenii Semendiak
3b3e96ad6c
Allow parameterizing OpenAI embeddings component (api_base, key, model) ()
* Allow parameterizing OpenAI embeddings component (api_base, key, model)

* Update settings

* Update description
2024-05-17 09:52:50 +02:00
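
A sketch of the parameterization described above, using llama-index's OpenAI embedding component; the values are placeholders and the mapping from PrivateGPT settings to these arguments is assumed:

```python
# Sketch only: api_base, key and model passed explicitly.
from llama_index.embeddings.openai import OpenAIEmbedding

embed_model = OpenAIEmbedding(
    api_base="http://localhost:8080/v1",  # e.g. an OpenAI-compatible server
    api_key="sk-...",
    model="text-embedding-3-small",
)
```
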
jcbonnet-fwd
45df99feb7
Add timeout parameter for better support of openailike LLM tools on local computers (like LM Studio). ()
feat(llm): Improve settings of the OpenAILike LLM
2024-05-10 16:44:08 +02:00
Fran García
966af4771d
fix(settings): enable cors by default so it will work when using ts sdk (spa) () 2024-05-10 14:13:46 +02:00
Fran García
d13029a046
feat(docs): add privategpt-ts sdk () 2024-05-10 14:13:15 +02:00
Patrick Peng
9d0d614706
fix: Replacing unsafe eval() with json.loads() () 2024-04-30 09:58:19 +02:00
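
Why the change above matters: `eval()` executes arbitrary Python, while `json.loads()` only parses data. A minimal before/after sketch (the payload is illustrative, not the code path that was fixed):

```python
import json

payload = '{"question": "What is PrivateGPT?", "top_k": 2}'

# Unsafe: a crafted payload such as '__import__("os").system("...")' would run.
# data = eval(payload)

# Safe: malformed or malicious input raises json.JSONDecodeError instead.
data = json.loads(payload)
print(data["question"])
```
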
icsy7867
e21bf20c10
feat: prompt_style applied to all LLMs + extra LLM params. ()
* Moved prompt_style to the main LLM setting since all LLMs from llama_index can utilize it. Also included temperature, context window size, max_tokens, and max_new_tokens in the openailike settings to help keep them consistent with the other implementations.

* Removed prompt_style from llamacpp entirely

* Fixed settings-local.yaml to include prompt_style in the LLM settings instead of llamacpp.
2024-04-30 09:53:10 +02:00
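
A sketch of wiring the extra parameters mentioned above into an OpenAI-compatible local server (e.g. LM Studio) through llama-index's OpenAILike class; the concrete values are placeholders, and how PrivateGPT maps its settings onto this class may differ:

```python
# Sketch only: extra LLM parameters on an OpenAI-compatible backend.
from llama_index.llms.openai_like import OpenAILike

llm = OpenAILike(
    api_base="http://localhost:1234/v1",  # assumed LM Studio default
    api_key="not-needed",
    model="local-model",
    temperature=0.1,
    context_window=3900,
    max_tokens=512,
    is_chat_model=True,
)
```
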
Daniel Gallego Vico
c1802e7cf0
fix(docs): Update installation.mdx ()
Update repo url
2024-04-19 17:10:58 +02:00
Marco Repetto
2a432bf9c5
fix: make embedding_api_base match api_base when on docker () 2024-04-19 15:42:19 +02:00
dividebysandwich
947e737f30
fix: "no such group" error in Dockerfile, added docx2txt and cryptography deps ()
* Fixed "no such group" error in Dockerfile, added docx2txt to poetry so docx parsing works out of the box for docker containers

* added cryptography dependency for pdf parsing
2024-04-19 15:40:00 +02:00
imartinez
49ef729abc Allow passing HF access token to download tokenizer. Fallback to default tokenizer. 2024-04-19 15:38:25 +02:00
Pablo Orgaz
347be643f7
fix(llm): special tokens and leading space () 2024-04-04 14:37:29 +02:00