Commit Graph

344 Commits

Author SHA1 Message Date
Somashekar B R
16d1f6000f fixed mypy: ignored tool.mypy-llama_index.embeddings.fireworks 2024-09-25 00:40:50 +05:30
Somashekar B R
b2ffe5b32d fixed mypy 2024-09-25 00:37:25 +05:30
Somashekar B R
0ff7a066cc fixed mypy: ignored tool.mypy-llama_index.embeddings.fireworks 2024-09-25 00:27:09 +05:30
Somashekar B R
03e88094ca fixed mypy: ignored llama-index-embeddings-fireworks 2024-09-25 00:19:59 +05:30
Somashekar B R
6a460605de fixed mypy: ignored mypy-llama-index-embeddings-fireworks 2024-09-25 00:16:03 +05:30
Somashekar B R
b807e50895 fixed mypy in private_gpt for llama-index 2024-09-24 23:53:27 +05:30
Somashekar B R
80f15a1568 fixed ruff check 2024-09-24 23:42:00 +05:30
Somashekar B R
cecec30ca8 fixed test black error 2024-09-24 23:37:03 +05:30
Somashekar B R
9c3590e555 Added embedding model option for Fireworks; added documentation for Fireworks 2024-09-24 23:27:40 +05:30
Somashekar B R
519c48b70b Merge branch 'main' of github.com:somashekhar161/private-gpt into feat/fireworks-integration 2024-09-24 22:38:45 +05:30
J
fa3c30661d
fix: Add default mode option to settings (#2078)
* Add default mode option to settings

* Revise default_mode to Literal (enum) and add to settings.yaml

* Revise to pass make check/test

* Default mode: RAG

---------

Co-authored-by: Jason <jason@sowinsight.solutions>
2024-09-24 08:33:02 +02:00
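The default-mode change above hinges on typing the setting as a `Literal` so invalid values are rejected when settings are loaded. A minimal sketch, assuming hypothetical class, field, and mode names rather than the actual private-gpt settings code:

```python
from typing import Literal

from pydantic import BaseModel, Field


class UISettings(BaseModel):
    # Hypothetical settings model: the mode names and default shown here are
    # illustrative, not copied from the real settings.yaml.
    default_mode: Literal["RAG", "Search", "Basic", "Summarize"] = Field(
        default="RAG",
        description="Mode preselected in the UI when the app starts.",
    )
```

With a `Literal` field, a typo such as `default_mode: RGA` fails validation when the YAML is loaded into the model instead of silently falling back.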
Liam Dowd
f9182b3a86
feat: Adding MistralAI mode (#2065)
* Adding MistralAI mode

* Update embedding_component.py

* Update ui.py

* Update settings.py

* Update embedding_component.py

* Update settings.py

* Update settings.py

* Update settings-mistral.yaml

* Update llm_component.py

* Update settings-mistral.yaml

* Update settings.py

* Update settings.py

* Update ui.py

* Update embedding_component.py

* Delete settings-mistral.yaml

---------

Co-authored-by: SkiingIsFun123 <101684827+SkiingIsFun123@users.noreply.github.com>
Co-authored-by: Javier Martinez <javiermartinezalvarez98@gmail.com>
2024-09-24 08:31:30 +02:00
Javier Martinez
8c12c6830b
fix: docker permissions (#2059)
* fix: missing depends_on

* chore: update copy permissions

* chore: update entrypoint

* Revert "chore: update entrypoint"

This reverts commit f73a36af2f.

* Revert "chore: update copy permissions"

This reverts commit fabc3f66bb.

* style: fix docker warning

* fix: multiple fixes

* fix: user permissions when writing to the local_data folder
2024-09-24 08:30:58 +02:00
Somashekar B R
4d2254648c Added dockerfile and docker-compose for fireworks 2024-09-21 19:26:10 +05:30
Somashekar B R
241637f0c4 FEAT: Added Fireworks integration 2024-09-20 15:04:01 +05:30
Somashekar B R
c5c1f9b3e2 fixed UI height overflow 2024-09-20 14:50:19 +05:30
Somashekar B R
2d5865e773 FEAT: Added fixed UI mode name 2024-09-20 14:49:02 +05:30
Somashekar B R
a38675b844 FEAT: Added Fireworks Integration 2024-09-20 14:15:34 +05:30
Somashekar B R
f1ad995124 FEAT: Added Fireworks Integration 2024-09-20 14:13:09 +05:30
Javier Martinez
77461b96cf
feat: add retry connection to ollama (#2084)
* feat: add retry connection to ollama

When Ollama is running in docker-compose, Traefik is sometimes not ready to route the request, and the call fails

* fix: mypy
2024-09-16 16:43:05 +02:00
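The commit above retries the Ollama connection because Traefik may not yet be routing when the service starts. A minimal sketch of that retry idea, with hypothetical names and no relation to the actual private-gpt implementation:

```python
import time

import requests


def wait_for_ollama(base_url: str = "http://localhost:11434",
                    retries: int = 5, delay: float = 2.0) -> None:
    """Poll the Ollama HTTP endpoint until it answers or retries run out."""
    for attempt in range(1, retries + 1):
        try:
            # The Ollama server answers a plain GET / with "Ollama is running".
            response = requests.get(base_url, timeout=5)
            response.raise_for_status()
            return
        except requests.RequestException:
            if attempt == retries:
                raise
            time.sleep(delay)
```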
Trivikram Kamat
42628596b2
ci: bump actions/checkout to v4 (#2077) 2024-09-09 08:53:13 +02:00
Artur Martins
7603b3627d
fix: Rectify ffmpy poetry config; update version from 0.3.2 to 0.4.0 (#2062)
* Fix: Rectify ffmpy 0.3.2 poetry config

* keep optional set to false for ffmpy

* Updating ffmpy to version 0.4.0

* Remove comment about a fix
2024-08-21 10:39:58 +02:00
Javier Martinez
89477ea9d3
fix: naming image and ollama-cpu (#2056) 2024-08-12 08:23:16 +02:00
github-actions[bot]
22904ca8ad
chore(main): release 0.6.2 (#2049)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2024-08-08 18:16:41 +02:00
Javier Martinez
7fefe408b4
fix: auto-update version (#2052) 2024-08-08 16:50:42 +02:00
Javier Martinez
b1acf9dc2c
fix: publish image name (#2043) 2024-08-07 17:39:32 +02:00
Javier Martinez
4ca6d0cb55
fix: add numpy issue to troubleshooting (#2048)
* docs: add numpy issue to troubleshooting

* fix: troubleshooting link

...
2024-08-07 12:16:03 +02:00
Javier Martinez
b16abbefe4
fix: update matplotlib to 3.9.1-post1 to fix Windows install
* chore: pin matplotlib to fix installation on Windows machines

* chore: remove workaround, just update poetry.lock

* fix: update matplotlib to last version
2024-08-07 11:26:42 +02:00
github-actions[bot]
ca2b8da69c
chore(main): release 0.6.1 (#2041)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2024-08-05 17:17:34 +02:00
Javier Martinez
f09f6dd255
fix: add built image from DockerHub (#2042)
* chore: update docker-compose with profiles

* docs: add quick start doc

* chore: generate docker release when new version is released

* chore: add dockerhub image in docker-compose

* docs: update quickstart with local/remote images

* chore: update docker tag

* chore: refactor dockerfile names

* chore: update docker-compose names

* docs: update llamacpp naming

* fix: naming

* docs: fix llamacpp command
2024-08-05 17:15:38 +02:00
Liam Dowd
1c665f7900
fix: Adding azopenai to model list (#2035)
Fixing the error I encountered while using the azopenai mode
2024-08-05 16:30:10 +02:00
Javier Martinez
1d4c14d7a3
fix(deploy): generate docker release when new version is released (#2038) 2024-08-05 16:28:19 +02:00
Javier Martinez
dae0727a1b
fix(deploy): improve Docker-Compose and quickstart on Docker (#2037)
* chore: update docker-compose with profiles

* docs: add quick start doc
2024-08-05 16:28:19 +02:00
github-actions[bot]
6674b46fea
chore(main): release 0.6.0 (#1834)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2024-08-02 11:28:22 +02:00
Javier Martinez
e44a7f5773
chore: bump version (#2033) 2024-08-02 11:26:03 +02:00
Javier Martinez
cf61bf780f
feat(llm): add progress bar when ollama is pulling models (#2031)
* fix: add ollama progress bar when pulling models

* feat: add ollama queue

* fix: mypy
2024-08-01 19:14:26 +02:00
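A rough illustration of the progress-bar idea from the commit above, driving `tqdm` from the `ollama` client's streaming pull; the dict-like progress fields (`digest`, `total`, `completed`) are an assumption about 2024-era client versions:

```python
from ollama import Client
from tqdm import tqdm


def pull_with_progress(model: str, host: str = "http://localhost:11434") -> None:
    """Stream an Ollama pull and render each layer's download progress."""
    client = Client(host=host)
    bars: dict[str, tqdm] = {}
    for part in client.pull(model, stream=True):
        # Assumes each streamed message behaves like a dict; status-only
        # messages carry no byte counts and are skipped.
        digest = part.get("digest")
        total = part.get("total")
        if not digest or not total:
            continue
        bar = bars.setdefault(
            digest, tqdm(total=total, desc=digest[:12], unit="B", unit_scale=True)
        )
        bar.n = part.get("completed", 0)
        bar.refresh()
```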
Javier Martinez
50b3027a24
docs: update docs and capture (#2029)
* docs: update Readme

* style: refactor image

* docs: change important to tip
2024-08-01 10:01:22 +02:00
Javier Martinez
54659588b5
fix: nomic embeddings (#2030)
* fix: allow configuring trust_remote_code

based on: https://github.com/zylon-ai/private-gpt/issues/1893#issuecomment-2118629391

* fix: nomic hf embeddings
2024-08-01 09:43:30 +02:00
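For context on the commit above: nomic embedding models ship custom modeling code on Hugging Face, so loading them needs an explicit opt-in. A sketch assuming a llama-index version whose `HuggingFaceEmbedding` exposes a `trust_remote_code` parameter:

```python
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

# Nomic's embedding models run custom code from the Hub, so they require an
# explicit opt-in. The parameter name is an assumption about the installed
# llama-index version, not a confirmed private-gpt snippet.
embedding = HuggingFaceEmbedding(
    model_name="nomic-ai/nomic-embed-text-v1.5",
    trust_remote_code=True,
)
```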
Javier Martinez
8119842ae6
feat(recipe): add our first recipe Summarize (#2028)
* feat: add summary recipe

* test: add summary tests

* docs: move all recipes docs

* docs: add recipes and summarize doc

* docs: update openapi reference

* refactor: split method into two methods (summary)

* feat: add initial summarize ui

* feat: add mode explanation

* fix: mypy

* feat: allow configuring the async property in summarize

* refactor: move modes to enum and update mode explanations

* docs: fix url

* docs: remove list-llm pages

* docs: remove double header

* fix: summary description
2024-07-31 16:53:27 +02:00
Javier Martinez
40638a18a5
fix: unify embedding models (#2027)
* feat: unify embedding model to nomic

* docs: add embedding dimensions mismatch

* docs: fix fern
2024-07-31 14:35:46 +02:00
Javier Martinez
9027d695c1
feat: make llama3.1 the default (#2022)
* feat: change ollama default model to llama3.1

* chore: bump versions

* feat: Change default model in local mode to llama3.1

* chore: make sure the latest poetry version is used

* fix: mypy

* fix: do not add BOS (with the latest llamacpp-python version)
2024-07-31 14:35:36 +02:00
Javier Martinez
e54a8fe043
fix: prevent ingesting local files (by default) (#2010)
* feat: prevent local ingestion (by default) and add a whitelist

* docs: add local ingestion warning

* docs: add missing comment

* fix: update exception error

* fix: black
2024-07-31 14:33:46 +02:00
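A toy sketch of the whitelist idea behind the commit above: local paths are rejected unless they fall under an explicitly allowed root. Names and the settings shape are assumptions, not the actual private-gpt code:

```python
from pathlib import Path


def validate_local_path(path_str: str, allowed_roots: list[str]) -> Path:
    """Reject local file ingestion unless the path is under an allowed root."""
    path = Path(path_str).resolve()
    for root in allowed_roots:
        if path.is_relative_to(Path(root).resolve()):
            return path
    raise ValueError(
        f"Local ingestion of {path} is disabled; add its folder to the whitelist."
    )
```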
Javier Martinez
1020cd5328
fix: light mode (#2025) 2024-07-31 12:59:31 +02:00
Quentin McGaw
65c5a1708b
chore(docker): dockerfiles improvements and fixes (#1792)
* `UID` and `GID` build arguments for `worker` user

* `POETRY_EXTRAS` build argument with default values

* Copy `Makefile` for `make ingest` command

* Do NOT copy markdown files
I doubt anyone reads a markdown file within a Docker image

* Fix PYTHONPATH value

* Set home directory to `/home/worker` when creating user

* Combine `ENV` instructions together

* Define environment variables with their defaults
- For documentation purposes
- Reflect defaults set in settings-docker.yml

* `PGPT_EMBEDDING_MODE` to define embedding mode

* Remove ineffective `python3 -m pipx ensurepath`

* Use `&&` instead of `;` to chain commands to detect failure better

* Add `--no-root` flag to poetry install commands

* Set PGPT_PROFILES to docker

* chore: remove envs

* chore: update to use ollama in docker-compose

* chore: don't copy makefile

* chore: don't copy fern

* fix: tiktoken cache

* fix: docker compose port

* fix: ffmpy dependency (#2020)

* fix: ffmpy dependency

* fix: pin ffmpy to a commit SHA

* feat(llm): autopull ollama models (#2019)

* chore: update ollama (llm)

* feat: allow autopulling ollama models

* fix: mypy

* chore: always install the ollama client

* refactor: move check-connection and pull-ollama methods to utils

* docs: update ollama config with autopulling info

...

* chore: autopull ollama models

* chore: add GID/UID comment

...

---------

Co-authored-by: Javier Martinez <javiermartinezalvarez98@gmail.com>
2024-07-30 17:59:38 +02:00
Robert Hirsch
d080969407
added llama3 prompt (#1962)
* added llama3 prompt

* more fixes to pass tests; changed type VectorStore -> BasePydanticVectorStore, see https://github.com/run-llama/llama_index/blob/main/CHANGELOG.md#2024-05-14

* fix: new llama3 prompt

---------

Co-authored-by: Javier Martinez <javiermartinezalvarez98@gmail.com>
2024-07-29 17:28:00 +02:00
Javier Martinez
d4375d078f
fix(ui): gradio bug fixes (#2021)
* fix: when two user messages were sent

* fix: add source divider

* fix: add favicon

* fix: add zylon link

* refactor: update label
2024-07-29 16:48:16 +02:00
Javier Martinez
20bad17c98
feat(llm): autopull ollama models (#2019)
* chore: update ollama (llm)

* feat: allow autopulling ollama models

* fix: mypy

* chore: always install the ollama client

* refactor: move check-connection and pull-ollama methods to utils

* docs: update ollama config with autopulling info
2024-07-29 13:25:42 +02:00
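The autopull feature above reduces to two steps: verify the Ollama server is reachable, then pull any model it does not already have. A minimal sketch using the `ollama` client; the response shape of `list()` is an assumption about the installed client version:

```python
from ollama import Client


def check_connection(client: Client) -> bool:
    """Return True if the Ollama server is reachable."""
    try:
        client.list()
        return True
    except Exception:
        return False


def pull_model_if_missing(client: Client, model: str) -> None:
    """Pull `model` only when the server does not already have it.

    Assumes `client.list()` returns a dict-like payload with a `models` list
    whose entries expose a `name` field, as in 2024-era client versions.
    """
    installed = {m.get("name", "") for m in client.list().get("models", [])}
    if not any(name.startswith(model) for name in installed):
        client.pull(model)
```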
Javier Martinez
dabf556dae
fix: ffmpy dependency (#2020)
* fix: ffmpy dependency

* fix: pin ffmpy to a commit SHA
2024-07-29 11:56:57 +02:00
Iván Martínez
05a986231c
Add proper param to demo urls (#2007) 2024-07-22 14:44:03 +02:00
Javier Martinez
b62669784b
docs: update welcome page (#2004) 2024-07-18 14:42:39 +02:00