Mirror of https://github.com/imartinez/privateGPT.git, synced 2025-04-27 19:28:38 +00:00
* Extract optional dependencies
* Separate local mode into llms-llama-cpp and embeddings-huggingface for clarity
* Support Ollama embeddings
* Upgrade to llamaindex 0.10.14. Remove legacy use of ServiceContext in ContextChatEngine
* Fix vector retriever filters
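Among the changes above, "Support Ollama embeddings" adds an Ollama-backed embedding mode alongside the HuggingFace one used in the file below. A minimal sketch of what such a profile might look like, assuming an `ollama` settings section with `embedding_model` and `api_base` keys (the key names and model shown are assumptions, not taken from this page):

embedding:
  mode: ollama  # assumed mode value enabled by the "Support Ollama embeddings" change

ollama:
  embedding_model: nomic-embed-text  # assumed key and model; check the project's settings.yaml
  api_base: http://localhost:11434   # Ollama's default local endpoint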
10 lines · 191 B · YAML
server:
  env_name: ${APP_ENV:mock}

# This configuration allows you to use GPU for creating embeddings while avoiding loading the LLM into vRAM

llm:
  mode: mock

embedding:
  mode: huggingface
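With `mode: huggingface`, the embedding model itself is configured in a separate `huggingface` section of the settings, which is not shown on this page. A hedged sketch only, assuming the `embedding_hf_model_name` key and the small BGE model the project defaulted to around this release (both are assumptions):

huggingface:
  embedding_hf_model_name: BAAI/bge-small-en-v1.5  # assumed key and default model

Assuming this file is saved as settings-mock.yaml (the file name is an assumption), the profile would typically be selected through the project's PGPT_PROFILES mechanism, e.g. PGPT_PROFILES=mock.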