mirror of https://github.com/imartinez/privateGPT.git synced 2025-05-05 06:48:09 +00:00
Commit Graph

13 Commits

Author SHA1 Message Date
Javier Martinez
4ca6d0cb55
fix: add numpy issue to troubleshooting ()
* docs: add numpy issue to troubleshooting

* fix: troubleshooting link

...
2024-08-07 12:16:03 +02:00
Javier Martinez
9027d695c1
feat: make llama3.1 the default ()
* feat: change ollama default model to llama3.1

* chore: bump versions

* feat: Change default model in local mode to llama3.1

* chore: make sure last poetry version is used

* fix: mypy

* fix: do not add BOS (with last llamacpp-python version)
2024-07-31 14:35:36 +02:00
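A minimal sketch of what the new llama3.1 default could look like when wired through LlamaIndex's Ollama integration; the model name comes from the commit above, while the import path, base URL, and timeout are assumptions rather than PrivateGPT's actual settings code.

```python
# Sketch only: using llama3.1 as the default Ollama model via LlamaIndex.
# Import path and parameters are assumptions based on the
# llama-index-llms-ollama package, not PrivateGPT's own code.
from llama_index.llms.ollama import Ollama

llm = Ollama(
    model="llama3.1",                   # new default model from this commit
    base_url="http://localhost:11434",  # assumed default Ollama endpoint
    request_timeout=120.0,              # generous timeout for local inference
)

print(llm.complete("Say hello in one short sentence."))
```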
Javier Martinez
20bad17c98
feat(llm): autopull ollama models ()
* chore: update ollama (llm)

* feat: allow to autopull ollama models

* fix: mypy

* chore: always install the ollama client

* refactor: check connection and pull ollama method to utils

* docs: update ollama config with autopulling info
2024-07-29 13:25:42 +02:00
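A rough sketch of the check-connection-and-pull utility described above, written against the official `ollama` Python client; the helper name, host, and model list are illustrative, and PrivateGPT's actual utility may differ.

```python
# Sketch of "check connection and autopull" using the official `ollama`
# Python client. Helper name, host, and model list are illustrative
# assumptions, not PrivateGPT's actual utils code.
import ollama


def ensure_model(client: ollama.Client, model: str) -> None:
    """Pull the model only if it is not already available locally."""
    try:
        client.show(model)        # succeeds when the model is already local
    except ollama.ResponseError:  # missing model -> pull it (blocking)
        print(f"Pulling missing model: {model}")
        client.pull(model)


if __name__ == "__main__":
    # Raises a connection error if the Ollama server is unreachable.
    client = ollama.Client(host="http://localhost:11434")
    for name in ("llama3.1", "nomic-embed-text"):
        ensure_model(client, name)
```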
Jackson
43cc31f740
feat(vectordb): Milvus vector db Integration ()
* integrate Milvus into PrivateGPT

* adjust milvus settings

* update doc info and reformat

* adjust milvus initialization

* adjust import error

* minor update

* adjust format

* adjust the db storing path

* update doc
2024-07-18 10:55:45 +02:00
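To illustrate the integration above, here is a hedged sketch of a Milvus-backed vector store built through LlamaIndex; the URI, collection name, and embedding dimension are placeholder assumptions, since PrivateGPT reads these from its settings and stores the database under its local data path.

```python
# Illustrative sketch of a Milvus vector store via LlamaIndex.
# URI, collection name, and dimension are placeholders; PrivateGPT
# configures these through its settings files instead.
from llama_index.core import StorageContext
from llama_index.vector_stores.milvus import MilvusVectorStore

vector_store = MilvusVectorStore(
    uri="./local_data/milvus_demo.db",  # local Milvus Lite file (assumed path)
    collection_name="private_gpt",      # placeholder collection name
    dim=384,                            # must match the embedding model's output size
    overwrite=False,                    # keep previously ingested documents
)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
# An index would then be built on top, e.g.:
#   VectorStoreIndex.from_documents(documents, storage_context=storage_context)
```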
Javier Martinez
4523a30c8f
feat(docs): update documentation and fix preview-docs ()
* docs: add missing configurations

* docs: replace HF embeddings with ollama

* docs: add disclaimer about Gradio UI

* docs: improve readability in concepts

* docs: reorder `Fully Local Setups`

* docs: improve setup instructions

* docs: prevent duplicate documentation and use a table to show the different options

* docs: rename privateGpt to PrivateGPT

* docs: update ui image

* docs: remove useless header

* docs: convert ingestion disclaimers to alerts

* docs: add UI alternatives

* docs: reference UI alternatives in disclaimers

* docs: fix table

* chore: update doc preview version

* chore: add permissions

* chore: remove useless line

* docs: fixes

...
2024-07-18 10:06:51 +02:00
Javier Martinez
01b7ccd064
fix(config): make tokenizer optional and include a troubleshooting doc ()
* docs: add troubleshooting

* fix: pass HF token to setup script and prevent downloading the tokenizer when it is empty

* fix: improve log and disable specific tokenizer by default

* chore: change HF_TOKEN environment variable to align with the default config

* fix: mypy
2024-07-17 10:06:27 +02:00
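A small sketch of the behavior this commit describes: the tokenizer becomes optional, is only downloaded when configured, and the HF token is passed through the environment. The environment variable names and fallback are assumptions, not the project's exact code.

```python
# Sketch: skip the Hugging Face tokenizer download when none is configured,
# and only pass a token when HF_TOKEN is set. Variable names are assumptions.
import os

from transformers import AutoTokenizer

tokenizer_name = os.environ.get("PGPT_TOKENIZER", "")  # empty -> disabled (assumed)
hf_token = os.environ.get("HF_TOKEN") or None          # aligned with the default config

if tokenizer_name:
    tokenizer = AutoTokenizer.from_pretrained(tokenizer_name, token=hf_token)
else:
    tokenizer = None  # fall back to the LLM's default tokenizer
    print("No tokenizer configured; skipping download.")
```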
Daniel Gallego Vico
c1802e7cf0
fix(docs): Update installation.mdx ()
Update repo url
2024-04-19 17:10:58 +02:00
Иван
8a836e4651
feat(docs): Add guide Llama-CPP Linux AMD GPU support () 2024-04-02 16:55:05 +02:00
Otto L
1efac6a3fe
feat(llm - embed): Add support for Azure OpenAI ()
* Add support for Azure OpenAI

* fix: wrong default api_version

Should be dashes instead of underscores.
see: https://learn.microsoft.com/en-us/azure/ai-services/openai/reference

* fix: code styling

applied "make check" changes

* refactor: extend documentation

* mention azopenai as an available option and its extras
* add recommended section
* include settings-azopenai.yaml configuration file

* fix: documentation
2024-03-15 16:49:50 +01:00
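As a hedged illustration of the Azure OpenAI support added above, the sketch below wires LlamaIndex's Azure OpenAI LLM and embedding classes; deployment names and the endpoint are placeholders, and the api_version deliberately uses dashes, matching the fix in this commit.

```python
# Sketch of an Azure OpenAI setup through LlamaIndex. Deployment names,
# endpoint, and api_version are placeholders; note the dashes in
# api_version, as the fix in this commit points out.
import os

from llama_index.embeddings.azure_openai import AzureOpenAIEmbedding
from llama_index.llms.azure_openai import AzureOpenAI

llm = AzureOpenAI(
    engine="my-gpt-35-deployment",              # Azure deployment name (placeholder)
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    azure_endpoint="https://my-resource.openai.azure.com/",
    api_version="2023-05-15",                   # dashes, not underscores
)
embed_model = AzureOpenAIEmbedding(
    deployment_name="my-embedding-deployment",  # placeholder
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    azure_endpoint="https://my-resource.openai.azure.com/",
    api_version="2023-05-15",
)
```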
Iván Martínez
45f05711eb
feat: Upgrade LlamaIndex to 0.10 ()
* Extract optional dependencies

* Separate local mode into llms-llama-cpp and embeddings-huggingface for clarity

* Support Ollama embeddings

* Upgrade to llamaindex 0.10.14. Remove legacy use of ServiceContext in ContextChatEngine

* Fix vector retriever filters
2024-03-06 17:51:30 +01:00
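A brief sketch of the LlamaIndex 0.10 style this upgrade moves to: the global `Settings` object replaces the legacy `ServiceContext`, and Ollama embeddings become an option. The model names and URL below are illustrative assumptions.

```python
# Sketch of LlamaIndex 0.10 usage: the global Settings object replaces the
# legacy ServiceContext. Model names and URL are illustrative assumptions.
from llama_index.core import Settings
from llama_index.embeddings.ollama import OllamaEmbedding
from llama_index.llms.ollama import Ollama

Settings.llm = Ollama(model="mistral", base_url="http://localhost:11434")
Settings.embed_model = OllamaEmbedding(
    model_name="nomic-embed-text",
    base_url="http://localhost:11434",
)
# Chat engines and retrievers now pick these up implicitly instead of
# receiving a ServiceContext.
```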
Eliott Bouhana
4e496e970a
docs: remove misleading comment about pgpt working with python 3.12 ()
I was misled into believing I could install using python 3.12 whereas the pyproject.toml explicitly states otherwise. This PR only removes this comment to make sure other people are not also trapped 😄
2023-12-15 21:35:02 +01:00
Gianni Acquisto
9c192ddd73
Added max_new_tokens as a config option to the llm yaml block ()
* added max_new_tokens as a configuration option to the llm block in settings

* Update fern/docs/pages/manual/settings.mdx

Co-authored-by: lopagela <lpglm@orange.fr>

* Update private_gpt/settings/settings.py

Add default value for max_new_tokens = 256

Co-authored-by: lopagela <lpglm@orange.fr>

* Addressed location of docs comment

* reformatting from running 'make check'

* remove default config value from settings.yaml

---------

Co-authored-by: lopagela <lpglm@orange.fr>
2023-11-26 19:17:29 +01:00
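The entry above adds `max_new_tokens` to the `llm` settings block with a default of 256; a minimal sketch of such a pydantic-based settings field follows, with the class name and description approximated rather than copied from the project.

```python
# Sketch of a max_new_tokens settings field with a default of 256, as the
# commit above describes. Class name and description text are approximations
# of PrivateGPT's pydantic-based settings, not verbatim code.
from pydantic import BaseModel, Field


class LLMSettings(BaseModel):
    max_new_tokens: int = Field(
        256,
        description="Maximum number of tokens the LLM is allowed to generate.",
    )


# The YAML override would live under the `llm:` block, e.g.
#   llm:
#     max_new_tokens: 512
print(LLMSettings().max_new_tokens)  # -> 256
```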
lopagela
36f69eed0f
Refactor documentation architecture ()
* Refactor documentation architecture

Split into several `tab` entries and sections

* Fix Fern's docs.yml after PR review

Thank you Danny!

Co-authored-by: dannysheridan <danny@buildwithfern.com>

* Re-add quickstart in the overview tab

It went missing after a refactoring of the doc architecture

* Documentation writing

* Adapt Makefile to fern documentation

* Do not create overlapping page names in fern documentation

This was causing 500 errors. Thank you to @dsinghvi for the troubleshooting and the help!

* Add a readme to help understand how the fern documentation works and how to add new pages

* Rework the welcome view

Redirects users directly to the installation guide, with links for people who are not familiar with documentation browsing.

* Simplify the quickstart guide

* PR feedback on installation guide

A lot of refactoring can still be done there

* PR feedback on ingestion

* PR feedback on ingestion splitting

* Rename section on LLM

* Fix missing word in list of LLMs

---------

Co-authored-by: dannysheridan <danny@buildwithfern.com>
2023-11-19 18:46:09 +01:00