privateGPT/private_gpt
Javier Martinez 01b7ccd064
fix(config): make tokenizer optional and include a troubleshooting doc (#1998)
* docs: add troubleshooting

* fix: pass HF token to setup script and prevent downloading the tokenizer when it is empty

* fix: improve log and disable specific tokenizer by default

* chore: change HF_TOKEN environment to be aligned with default config

* fix: mypy
2024-07-17 10:06:27 +02:00
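
The change summarized above makes the tokenizer setting optional. As a rough sketch of that pattern (not the actual PrivateGPT implementation; the function name load_optional_tokenizer, its tokenizer_name parameter, and the use of transformers.AutoTokenizer are assumptions for illustration), the idea is to download a Hugging Face tokenizer only when one is configured, pass the HF_TOKEN environment variable through, and fall back to the default tokenizer with a log message when the setting is empty or the download fails:

# Illustrative sketch only, not the project's actual code: load a Hugging Face
# tokenizer only when one is configured, pass HF_TOKEN if present, and fall
# back to the default tokenizer instead of failing.
import logging
import os

from transformers import AutoTokenizer  # assumed dependency for this sketch

logger = logging.getLogger(__name__)


def load_optional_tokenizer(tokenizer_name: str | None):
    """Return a tokenizer when `tokenizer_name` is set, otherwise None."""
    if not tokenizer_name:
        logger.info("No tokenizer configured; the default tokenizer will be used.")
        return None
    try:
        return AutoTokenizer.from_pretrained(
            tokenizer_name,
            token=os.environ.get("HF_TOKEN"),  # aligned with the HF_TOKEN env var
        )
    except Exception:  # e.g. a gated model and no valid token
        logger.warning(
            "Could not download tokenizer %s; falling back to the default.",
            tokenizer_name,
        )
        return None
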
components fix(config): make tokenizer optional and include a troubleshooting doc (#1998) 2024-07-17 10:06:27 +02:00
open_ai feat: Upgrade to LlamaIndex 0.10 (#1663) 2024-03-06 17:51:30 +01:00
server feat(RAG): Introduce SentenceTransformer Reranker (#1810) 2024-04-02 10:29:51 +02:00
settings feat(vectorstore): Add clickhouse support as vector store (#1883) 2024-07-08 16:18:22 +02:00
ui feat(llm): Support for Google Gemini LLMs and Embeddings (#1965) 2024-07-08 11:47:36 +02:00
utils feat(ingest): Created a faster ingestion mode - pipeline (#1750) 2024-03-19 21:24:46 +01:00
__init__.py feat(local): tiktoken cache within repo for offline (#1467) 2024-03-11 22:55:13 +01:00
__main__.py fix: Remove global state (#1216) 2023-11-12 22:20:36 +01:00
constants.py Next version of PrivateGPT (#1077) 2023-10-19 16:04:35 +02:00
di.py fix: Remove global state (#1216) 2023-11-12 22:20:36 +01:00
launcher.py feat(llm): adds several settings for llamacpp and ollama (#1703) 2024-03-11 22:51:05 +01:00
main.py feat: Upgrade to LlamaIndex 0.10 (#1663) 2024-03-06 17:51:30 +01:00
paths.py fix: Remove global state (#1216) 2023-11-12 22:20:36 +01:00