* feat: change ollama default model to llama3.1
* chore: bump versions
* feat: Change default model in local mode to llama3.1
* chore: make sure last poetry version is used
* fix: mypy
* fix: do not add BOS (with last llamacpp-python version)
* Extract optional dependencies
* Separate local mode into llms-llama-cpp and embeddings-huggingface for clarity
* Support Ollama embeddings
* Upgrade to llamaindex 0.10.14. Remove legacy use of ServiceContext in ContextChatEngine
* Fix vector retriever filters
As discussed on Discord, the decision has been made to remove the default system prompts, to better separate API and UI usage.
A concurrent PR (#1353) enables dynamically setting a system prompt in the UI.
Therefore, if UI users want to use a custom system prompt, they can specify one directly in the UI.
API users who want a custom prompt can pass it directly in the messages they send to the API.
In light of the two use cases above, it becomes clear that a default system_prompt does not need to exist.
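For API consumers, a minimal sketch of passing a custom system prompt in the messages themselves (the endpoint path and payload shape are assumptions modeled on the OpenAI-style response format and the local port 8001 mentioned elsewhere in this log):

```python
import requests

# Hypothetical request: endpoint path and payload shape are assumptions
# based on the OpenAI chat completions format used by the API.
response = requests.post(
    "http://localhost:8001/v1/chat/completions",
    json={
        "messages": [
            # The caller supplies the system prompt explicitly; the server
            # no longer injects a default one.
            {"role": "system", "content": "You are a terse, factual assistant."},
            {"role": "user", "content": "Summarize the ingested documents."},
        ]
    },
)
print(response.json())
```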
* Add simple Basic auth
To enable basic authentication, set `server.auth.enabled` to true.
The static string defined in `server.auth.secret` must then be sent in
the `Authorization` header of every request.
The health check endpoint remains accessible regardless of the API
auth configuration.
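A minimal sketch of calling the API with auth enabled (the secret value and the protected endpoint are hypothetical; the source only states that the static `server.auth.secret` string goes in the `Authorization` header and that the health check stays open):

```python
import requests

SECRET = "my-static-secret"  # hypothetical value of server.auth.secret

# Authenticated call: the static secret string is sent as-is in the
# Authorization header.
r = requests.get(
    "http://localhost:8001/v1/ingest/list",  # hypothetical protected endpoint
    headers={"Authorization": SECRET},
)
print(r.status_code)

# The health check endpoint needs no credentials, regardless of the
# auth configuration.
print(requests.get("http://localhost:8001/health").json())
```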
* Fix linting and type check
* Fighting with mypy being too restrictive
Had to disable mypy in the `auth` module, as we are not using the same
signature for the `authenticated` method.
mypy was complaining that the signatures of `authenticated` must be
identical, no matter which logical branch we are in.
Given that FastAPI is flexible about method signatures (it will
inject the dependencies in the method call), this mypy warning is
actually preventing us from doing something legitimate.
mypy doc: https://mypy.readthedocs.io/en/stable/common_issues.html
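To illustrate the pattern mypy rejects, a hedged sketch (the setting names, secret check, and function bodies are assumptions; the point is the conditional redefinition of `authenticated` with differing signatures):

```python
from fastapi import Header, HTTPException

AUTH_ENABLED = True  # hypothetical stand-in for server.auth.enabled
AUTH_SECRET = "my-static-secret"  # hypothetical stand-in for server.auth.secret

if not AUTH_ENABLED:

    def authenticated() -> bool:
        # Auth disabled: every request is accepted.
        return True

else:

    def authenticated(  # type: ignore
        authorization: str = Header(default=""),
    ) -> bool:
        # FastAPI injects the Authorization header at call time, so the
        # differing signature is fine at runtime; mypy, however, requires
        # conditional variants to share one signature, hence the ignore.
        if authorization != AUTH_SECRET:
            raise HTTPException(status_code=401, detail="Not authenticated")
        return True
```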
* Write tests to verify that the simple auth is working
* Dockerize private-gpt
* Use port 8001 for local development
* Add setup script
* Add CUDA Dockerfile
* Create README.md
* Make the API use OpenAI response format
* Truncate prompt
* refactor: add models and __pycache__ to .gitignore
* Better naming
* Update readme
* Move models ignore to its folder
* Add scaffolding
* Apply formatting
* Fix tests
* Working sagemaker custom llm
* Fix linting
* Fix linting
* Enable streaming
* Allow all 3.11 python versions
* Use llama 2 prompt format and fix completion
* Restructure (#3)
Co-authored-by: Pablo Orgaz <pablo@Pablos-MacBook-Pro.local>
* Fix Dockerfile
* Use a specific build stage
* Cleanup
* Add FastAPI skeleton
* Cleanup openai package
* Fix DI and tests
* Split tests and tests with coverage
* Remove old scaffolding
* Add settings logic (#4)
* Add settings logic
* Add settings for sagemaker
---------
Co-authored-by: Pablo Orgaz <pablo@Pablos-MacBook-Pro.local>
* Local LLM (#5)
* Add settings logic
* Add settings for sagemaker
* Add settings-local-example.yaml
* Delete terraform files
* Refactor tests to use fixtures
* Join deltas
* Add local model support
---------
Co-authored-by: Pablo Orgaz <pablo@Pablos-MacBook-Pro.local>
* Update README.md
* Fix tests
* Version bump
* Enable simple llamaindex observability (#6)
* Enable simple llamaindex observability
* Improve code through linting
* Update README.md
* Move to async (#7)
* Migrate implementation to use asyncio
* Formatting
* Cleanup
* Linting
---------
Co-authored-by: Pablo Orgaz <pablo@Pablos-MacBook-Pro.local>
* Query Docs and gradio UI
* Remove unnecessary files
* Git ignore chromadb folder
* Async migration + DI Cleanup
* Fix tests
* Add integration test
* Use fastapi responses
* Retrieval service with partial implementation
* Cleanup
* Run formatter
* Fix types
* Fetch nodes asynchronously
* Install local dependencies in tests
* Install ui dependencies in tests
* Install dependencies for llama-cpp
* Fix sudo
* Attempt to fix cuda issues
* Attempt to fix cuda issues
* Try to reclaim some space from ubuntu machine
* Retrieval with context
* Fix lint and imports
* Fix mypy
* Make retrieval API a POST
* Make Completions body a dataclass
* Fix LLM chat message order
* Add Query Chunks to Gradio UI
* Improve rag query prompt
* Rollback CI Changes
* Move to sync code
* Using Llamaindex abstraction for query retrieval
* Fix types
* Default to CONDENSED chat mode for contextualized chat
* Rename route function
* Add Chat endpoint
* Remove webhooks
* Add IntelliJ run config to gitignore
* .gitignore applied
* Sync chat completion
* Refactor total
* Typo in context_files.py
* Add embeddings component and service
* Remove wrong dataclass from IngestService
* Filter by context file id implementation
* Fix typing
* Implement context_filter and separate from the bool use_context in the API
* Change chunks API to avoid a conceptual clash with the context concept
* Deprecate completions and fix tests
* Remove remaining dataclasses
* Use embedding component in ingest service
* Fix ingestion to have multipart and local upload
* Fix ingestion API
* Add chunk tests
* Add configurable paths
* Cleaning up
* Add more docs
* IngestResponse includes a list of IngestedDocs
* Use IngestedDoc in the Chunk document reference
* Rename ingest routes to ingest_router.py
* Fix test working directory for intellij
* Set testpaths for pytest
* Remove unused as_chat_engine
* Add .fleet ide to gitignore
* Make LLM and Embedding model configurable
* Fix imports and checks
* Let local_data folder exist empty in the repository
* Don't use certain metadata in LLM
* Remove long lines
* Fix windows installation
* Typos
* Update poetry.lock
* Add TODO for linux
* Script and first version of docs
* No Jekyll build
* Fix relative url to openapi json
* Change default docs values
* Move chromadb dependency to the general group
* Fix tests to use separate local_data
* Create CNAME
* Update CNAME
* Fix openapi.json relative path
* PrivateGPT logo
* WIP OpenAPI documentation metadata
* Add ingest script (#11)
* Add ingest script
* Fix broken name refactor
* Add ingest docs and Makefile script
* Linting
* Move transformers to main dependency
* Move torch to main dependencies
* Don't load HuggingFaceEmbedding in tests
* Fix lint
---------
Co-authored-by: Pablo Orgaz <pablo@Pablos-MacBook-Pro.local>
* Rename file to camel_case
* Commit settings-local.yaml
* Move documentation to public docs
* Fix docker image for linux
* Installation and Running the Server documentation
* Move back to docs folder, as it is the only supported by github pages
* Delete CNAME
* Create CNAME
* Delete CNAME
* Create CNAME
* Improved API documentation
* Fix lint
* Completions documentation
* Updated openapi scheme
* Ingestion API doc
* Minor doc changes
* Updated openapi scheme
* Chunks API documentation
* Embeddings and Health API, and homogeneous responses
* Revamp README with new skeleton of content
* More docs
* PrivateGPT logo
* Improve UI
* Update ingestion docs
* Update README with new sections
* Use context window in the retriever
* Gradio Documentation
* Add logo to UI
* Include Contributing and Community sections to README
* Update links to resources in the README
* Small README.md updates
* Wrap lines of README.md
* Don't put health under /v1
* Add copy button to Chat
* Architecture documentation
* Updated openapi.json
* Updated openapi.json
* Updated openapi.json
* Change UI label
* Update documentation
* Add releases link to README.md
* Gradio avatar and stop debug
* Readme update
* Clean old files
* Remove unused terraform checks
* Update twitter link.
* Disable minimum coverage
* Clean install message in README.md
---------
Co-authored-by: Pablo Orgaz <pablo@Pablos-MacBook-Pro.local>
Co-authored-by: Iván Martínez <ivanmartit@gmail.com>
Co-authored-by: RubenGuerrero <ruben.guerrero@boopos.com>
Co-authored-by: Daniel Gallego Vico <daniel.gallego@bq.com>