Commit Graph

384 Commits

Author SHA1 Message Date
Nikhil Shrestha
f19be9183c Merge remote-tracking branch 'origin/Global-docker' into Global-docker 2024-05-05 07:55:57 +05:45
Nikhil Shrestha
978be3b5dd added checked, unchecked static folders 2024-05-05 07:55:39 +05:45
Saurab-Shrestha
7cfbad8b59 Add configuration for nginx 2024-05-05 07:17:16 +05:45
Saurab-Shrestha
ba71a39971 Fixed bug regarding pagination 2024-05-04 19:27:24 +05:45
Nikhil Shrestha
3c56af31cf fixed lock file 2024-05-04 17:23:41 +05:45
Nikhil Shrestha
1ea0663a3b Merge branch 'main' into Global-docker
# Conflicts:
#	poetry.lock
#	pyproject.toml
#	settings-local.yaml
#	settings.yaml
2024-05-04 17:16:25 +05:45
Saurab Shrestha
7d1f75fcd5 delete sql-dump file 2024-05-04 10:21:45 +05:45
Saurab-Shrestha
4472add3c2 Updated docker compose compatible for GPU 2024-05-02 17:32:29 +05:45
Saurab-Shrestha
1963190d16 Updated the llm component 2024-05-02 10:58:03 +05:45
Saurab-Shrestha
bc343206cc Updated docker settings 2024-04-30 17:45:51 +05:45
Patrick Peng
9d0d614706 fix: Replacing unsafe eval() with json.loads() (#1890) 2024-04-30 09:58:19 +02:00
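The commit above swaps `eval()` for `json.loads()` when parsing untrusted input. A minimal illustration of why that matters (a sketch of the general pattern, not the actual patch):

```python
import json

# Input that arrives from outside the process, e.g. a request body.
untrusted = '{"collection": "docs", "limit": 5}'

# json.loads parses JSON literals only; it cannot execute code.
params = json.loads(untrusted)

# eval() on attacker-controlled text would run arbitrary Python,
# e.g. '__import__("os").system("...")' -- which is why it is unsafe here.
```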
icsy7867
e21bf20c10 feat: prompt_style applied to all LLMs + extra LLM params. (#1835)
* Updated prompt_style to be moved to the main LLM setting since all LLMs from llama_index can utilize this.  I also included temperature, context window size, max_tokens, max_new_tokens into the openailike to help ensure the settings are consistent from the other implementations.

* Removed prompt_style from llamacpp entirely

* Fixed settings-local.yaml to include prompt_style in the LLM settings instead of llamacpp.
2024-04-30 09:53:10 +02:00
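Per the commit notes above, `prompt_style` (plus temperature, context window, and max-token params) moved from the `llamacpp` block into the main LLM settings. A hedged sketch of what the resulting `settings-local.yaml` section could look like (key names follow the commit message; exact values and schema are assumptions):

```yaml
llm:
  prompt_style: "chatml"   # was previously under llamacpp
  temperature: 0.1         # illustrative values only
  context_window: 3900
  max_new_tokens: 512
```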
Saurab-Shrestha
f9a454861d Updated docker compose 2024-04-29 20:08:06 +05:45
Saurab-Shrestha
3f99b0996f Merged with dev 2024-04-28 11:29:26 +05:45
Saurab-Shrestha
c7aac53cd9 Added new docker files 2024-04-28 11:25:38 +05:45
Saurab-Shrestha
1d6fc7144a Added llama3 prompt 2024-04-24 17:15:13 +05:45
Saurab-Shrestha
3282d52bf2 Added fastapi-pagination and audit log download 2024-04-23 17:48:13 +05:45
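The commit above adds fastapi-pagination. A self-contained sketch of the offset/limit arithmetic such a page model relies on (the function and field names are illustrative, not the library's actual API):

```python
from typing import Any, Sequence


def paginate(items: Sequence[Any], page: int, size: int) -> dict:
    """Offset/limit pagination in the style of a Page response model."""
    total = len(items)
    start = (page - 1) * size
    return {
        "items": list(items[start : start + size]),
        "total": total,
        "page": page,
        "size": size,
        "pages": -(-total // size) if size else 0,  # ceiling division
    }


# Third page of 25 items holds only the last 5.
result = paginate(list(range(25)), page=3, size=10)
```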
Saurab-Shrestha
97317b82e0 Update with filter for audit 2024-04-21 16:52:45 +05:45
Daniel Gallego Vico
c1802e7cf0 fix(docs): Update installation.mdx (#1866)
Update repo url
2024-04-19 17:10:58 +02:00
Marco Repetto
2a432bf9c5 fix: make embedding_api_base match api_base when on docker (#1859) 2024-04-19 15:42:19 +02:00
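A hedged sketch of the idea in #1859 (key names and URLs are assumptions, not the real settings schema): when running in docker, the embedding endpoint should default to the same base URL as the LLM so the two cannot drift apart.

```yaml
llm:
  api_base: http://ollama:11434
embedding:
  api_base: http://ollama:11434   # falls back to the llm api_base when unset
```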
dividebysandwich
947e737f30 fix: "no such group" error in Dockerfile, added docx2txt and cryptography deps (#1841)
* Fixed "no such group" error in Dockerfile, added docx2txt to poetry so docx parsing works out of the box for docker containers

* added cryptography dependency for pdf parsing
2024-04-19 15:40:00 +02:00
imartinez
49ef729abc Allow passing HF access token to download tokenizer. Fallback to default tokenizer. 2024-04-19 15:38:25 +02:00
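The commit above passes an HF access token when downloading a tokenizer and falls back to a default tokenizer on failure. A minimal sketch of that try/fallback pattern (the `download` callable stands in for something like `AutoTokenizer.from_pretrained`; all names here are illustrative, not PrivateGPT's code):

```python
DEFAULT_TOKENIZER = "default"


def load_tokenizer(name, hf_token, download):
    """Try to fetch the model's tokenizer, passing the HF access token if
    one is set; fall back to the default tokenizer if the fetch fails
    (e.g. a gated repo without credentials)."""
    try:
        return download(name, token=hf_token)
    except Exception:
        return download(DEFAULT_TOKENIZER, token=None)


def fake_download(name, token=None):
    # Stand-in downloader: the gated repo rejects anonymous access.
    if name == "gated/model" and token is None:
        raise RuntimeError("401: access token required")
    return f"tokenizer:{name}"


# Without a token the gated fetch fails and we fall back to the default.
tok = load_tokenizer("gated/model", hf_token=None, download=fake_download)
```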
Saurab-Shrestha
8bc7fb0039 Event listener for updating total documents 2024-04-17 17:09:50 +05:45
Saurab-Shrestha
bf135b1692 Added build history 2024-04-10 15:13:06 +05:45
Saurab-Shrestha
44d94e145e Updated the title for chat history 2024-04-09 17:34:51 +05:45
Saurab-Shrestha
a59782e24e solved merged conflict 2024-04-09 15:49:05 +05:45
Saurab-Shrestha
aab5e50f7c Merge branch 'temporary_branch' of https://github.com/QuickfoxConsulting/privateGPT into temporary_branch 2024-04-09 15:48:44 +05:45
Saurab-Shrestha
7dab3edebf Bug fixes on chat history filter plus removed system prompt 2024-04-08 17:59:01 +05:45
Saurab-Shrestha
fb64e15802 Bug fixes for chat history 2024-04-07 17:54:36 +05:45
Pablo Orgaz
347be643f7
fix(llm): special tokens and leading space (#1831) 2024-04-04 14:37:29 +02:00
Saurab-Shrestha
ee0e1cd839 Updated chat history and items id with uuid 2024-04-04 12:02:12 +05:45
Nikhil Shrestha
b23dae5b18 rerank fixes 2024-04-04 11:49:28 +05:45
Saurab-Shrestha
4bc9dd7870 Added chat history and chat item 2024-04-03 17:58:27 +05:45
Nikhil Shrestha
28e418124b updated poetry.lock 2024-04-03 17:52:39 +05:45
Nikhil Shrestha
9f929cf4f3 Merge branch 'main' into dev
# Conflicts:
#	docker-compose.yaml
#	poetry.lock
#	pyproject.toml
#	settings.yaml
2024-04-03 17:46:42 +05:45
imartinez
08c4ab175e Fix version in poetry 2024-04-03 10:59:35 +02:00
Saurab-Shrestha
355271be93 Added table for chat history 2024-04-03 14:11:08 +05:45
imartinez
f469b4619d Add required Ollama setting 2024-04-02 18:27:57 +02:00
github-actions[bot]
94ef38cbba chore(main): release 0.5.0 (#1708)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2024-04-02 17:45:15 +02:00
Иван
8a836e4651 feat(docs): Add guide Llama-CPP Linux AMD GPU support (#1782) 2024-04-02 16:55:05 +02:00
Ingrid Stevens
f0b174c097 feat(ui): Add Model Information to ChatInterface label 2024-04-02 16:52:27 +02:00
igeni
bac818add5 feat(code): improve concat of strings in ui (#1785) 2024-04-02 16:42:40 +02:00
Brett England
ea153fb92f feat(scripts): Wipe qdrant and obtain db Stats command (#1783) 2024-04-02 16:41:42 +02:00
Robin Boone
b3b0140e24 feat(llm): Ollama LLM-Embeddings decouple + longer keep_alive settings (#1800) 2024-04-02 16:23:10 +02:00
machatschek
83adc12a8e feat(RAG): Introduce SentenceTransformer Reranker (#1810) 2024-04-02 10:29:51 +02:00
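The reranker commit above scores retrieved passages against the query and keeps only the best. A library-free sketch of that cross-encoder-style reranking step (`score_fn` stands in for something like `sentence_transformers.CrossEncoder.predict`; this is an assumption-laden sketch, not PrivateGPT's code):

```python
def rerank(query, documents, score_fn, top_n=2):
    """Score each (query, doc) pair and return the top_n highest-scoring
    documents, best first."""
    scored = [(score_fn(query, doc), doc) for doc in documents]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored[:top_n]]


def overlap_score(query, doc):
    # Toy stand-in for a cross-encoder: count shared words.
    return len(set(query.split()) & set(doc.split()))


docs = ["cats sleep a lot", "dogs bark at night", "cats chase mice"]
top = rerank("do cats chase mice", docs, overlap_score, top_n=2)
```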
Marco Repetto
f83abff8bc feat(docker): set default Docker to use Ollama (#1812) 2024-04-01 13:08:48 +02:00
Saurab-Shrestha
542ed0ef4e updated with new models OpenHermes and BAAI/bge-large embedding model 2024-03-31 15:36:52 +05:45
icsy7867
087cb0b7b7 feat(rag): expose similarity_top_k and similarity_score to settings (#1771)
* Added RAG settings to settings.py, vector_store and chat_service to add similarity_top_k and similarity_score

* Updated settings in vector and chat service per Ivans request

* Updated code for mypy
2024-03-20 22:25:26 +01:00
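The commit above exposes `similarity_top_k` and `similarity_score` as settings. A minimal sketch of how those two knobs typically interact in retrieval (names match the settings; the function itself is illustrative, not the real vector-store code):

```python
def apply_rag_settings(nodes, similarity_top_k=2, similarity_score=None):
    """nodes: (text, score) pairs in any order. Keep the similarity_top_k
    most similar nodes, then optionally drop anything below the
    similarity_score cutoff."""
    ranked = sorted(nodes, key=lambda n: n[1], reverse=True)[:similarity_top_k]
    if similarity_score is not None:
        ranked = [n for n in ranked if n[1] >= similarity_score]
    return ranked


nodes = [("a", 0.9), ("b", 0.4), ("c", 0.7)]
kept = apply_rag_settings(nodes, similarity_top_k=2, similarity_score=0.5)
```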
Marco Repetto
774e256052 fix: Fixed docker-compose (#1758)
* Fixed docker-compose

* Update docker-compose.yaml
2024-03-20 21:36:45 +01:00
Iván Martínez
6f6c785dac feat(llm): Ollama timeout setting (#1773)
* added request_timeout to ollama, default set to 30.0 in settings.yaml and settings-ollama.yaml

* Update settings-ollama.yaml

* Update settings.yaml

* updated settings.py and tidied up settings-ollama-yaml

* feat(UI): Faster startup and document listing (#1763)

* fix(ingest): update script label (#1770)

huggingface -> Hugging Face

* Fix lint errors

---------

Co-authored-by: Stephen Gresham <steve@gresham.id.au>
Co-authored-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
2024-03-20 21:33:46 +01:00
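Per the commit notes above, `request_timeout` was added to the Ollama settings with a default of 30.0 in both `settings.yaml` and `settings-ollama.yaml`. A sketch of the resulting fragment (the key placement is an assumption based on the commit message):

```yaml
ollama:
  request_timeout: 30.0   # seconds; raise this for slow models or hardware
```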