* Adding Postgres for the doc and index store
* Adding documentation. Rename postgres database local->simple. Postgres storage dependencies
* Update documentation for postgres storage
* Renaming feature to nodestore
* update docstore -> nodestore in doc
* missed some docstore changes in doc
* Updated poetry.lock
* Formatting updates to pass ruff/black checks
* Correction to unreachable code!
* Format adjustment to pass black test
* Adjust extra inclusion name for vector pg
* extra dep change for pg vector
* storage-postgres -> storage-nodestore-postgres
* Hash change on poetry lock
* Extract optional dependencies
* Separate local mode into llms-llama-cpp and embeddings-huggingface for clarity
* Support Ollama embeddings
* Upgrade to llamaindex 0.10.14. Remove legacy use of ServiceContext in ContextChatEngine
* Fix vector retriever filters
* Update llama_index to 0.9.3
Had to change some imports because of a breaking change during the llama_index update to 0.9.0
* Update poetry.lock after update of llama_index
* Configure simple builtin logging
Changed the 2 existing `print` calls in the `private_gpt` code base into actual Python logging and stopped using loguru (the dependency will be dropped in a later commit).
Tried to use the `key=value` logging convention in logs (to indicate what the dynamic values represent, and what is dynamic vs not).
Used the `%s` log style, so that string formatting is pushed inside the logger, giving the logger the ability to determine whether the string needs to be formatted at all (i.e. strings from debug logs are not formatted if the log level is not debug).
The (basic) builtin log configuration has been placed in `private_gpt/__init__.py` in order to initialize the logging system before any other code in the `private_gpt` package runs (ensuring any initialization logs are formatted as we want).
Disabled `uvicorn`'s custom logging format, so that uvicorn logs are output in our format.
A more concise format could be used if we wanted to, for example:
```
COMPACT_LOG_FORMAT = '%(asctime)s.%(msecs)03d [%(levelname)s] %(name)s - %(message)s'
```
Python documentation and cookbook on logging for reference:
* https://docs.python.org/3/library/logging.html
* https://docs.python.org/3/howto/logging.html
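The setup described above can be sketched as follows; this is a minimal illustration, not the exact configuration in `private_gpt/__init__.py`, and the `document_id` value is hypothetical:

```python
import logging

# Basic builtin configuration, using a format similar to the
# COMPACT_LOG_FORMAT shown above.
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s.%(msecs)03d [%(levelname)s] %(name)s - %(message)s",
    datefmt="%H:%M:%S",
)

logger = logging.getLogger(__name__)

# `key=value` convention marks what is dynamic vs static in the message.
document_id = "doc-42"  # hypothetical value for illustration
logger.info("ingestion finished document_id=%s", document_id)

# `%s` style pushes formatting inside the logger: if the level is INFO,
# this debug call never formats the string at all.
logger.debug("retrieved nodes count=%s", 3)
```

The key point of the `%s` style is that `logger.debug("... %s", x)` defers string interpolation until the logger has decided the record will actually be emitted, whereas an f-string would pay the formatting cost unconditionally.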
* Removing loguru from the dependencies
Result of `poetry remove loguru`
* PR feedback: using `logger` variable name instead of `log`
---------
Co-authored-by: Louis Melchior <louis@jaris.io>
* Dockerize private-gpt
* Use port 8001 for local development
* Add setup script
* Add CUDA Dockerfile
* Create README.md
* Make the API use OpenAI response format
* Truncate prompt
* refactor: add models and __pycache__ to .gitignore
* Better naming
* Update readme
* Move models ignore to its folder
* Add scaffolding
* Apply formatting
* Fix tests
* Working sagemaker custom llm
* Fix linting
* Fix linting
* Enable streaming
* Allow all 3.11 python versions
* Use llama 2 prompt format and fix completion
* Restructure (#3)
Co-authored-by: Pablo Orgaz <pablo@Pablos-MacBook-Pro.local>
* Fix Dockerfile
* Use a specific build stage
* Cleanup
* Add FastAPI skeleton
* Cleanup openai package
* Fix DI and tests
* Split tests and tests with coverage
* Remove old scaffolding
* Add settings logic (#4)
* Add settings logic
* Add settings for sagemaker
---------
Co-authored-by: Pablo Orgaz <pablo@Pablos-MacBook-Pro.local>
* Local LLM (#5)
* Add settings logic
* Add settings for sagemaker
* Add settings-local-example.yaml
* Delete terraform files
* Refactor tests to use fixtures
* Join deltas
* Add local model support
---------
Co-authored-by: Pablo Orgaz <pablo@Pablos-MacBook-Pro.local>
* Update README.md
* Fix tests
* Version bump
* Enable simple llamaindex observability (#6)
* Enable simple llamaindex observability
* Improve code through linting
* Update README.md
* Move to async (#7)
* Migrate implementation to use asyncio
* Formatting
* Cleanup
* Linting
---------
Co-authored-by: Pablo Orgaz <pablo@Pablos-MacBook-Pro.local>
* Query Docs and gradio UI
* Remove unnecessary files
* Git ignore chromadb folder
* Async migration + DI Cleanup
* Fix tests
* Add integration test
* Use fastapi responses
* Retrieval service with partial implementation
* Cleanup
* Run formatter
* Fix types
* Fetch nodes asynchronously
* Install local dependencies in tests
* Install ui dependencies in tests
* Install dependencies for llama-cpp
* Fix sudo
* Attempt to fix cuda issues
* Attempt to fix cuda issues
* Try to reclaim some space from ubuntu machine
* Retrieval with context
* Fix lint and imports
* Fix mypy
* Make retrieval API a POST
* Make Completions body a dataclass
* Fix LLM chat message order
* Add Query Chunks to Gradio UI
* Improve rag query prompt
* Rollback CI Changes
* Move to sync code
* Using Llamaindex abstraction for query retrieval
* Fix types
* Default to CONDENSED chat mode for contextualized chat
* Rename route function
* Add Chat endpoint
* Remove webhooks
* Add IntelliJ run config to gitignore
* .gitignore applied
* Sync chat completion
* Refactor total
* Typo in context_files.py
* Add embeddings component and service
* Remove wrong dataclass from IngestService
* Filter by context file id implementation
* Fix typing
* Implement context_filter and separate from the bool use_context in the API
* Change chunks API to avoid a conceptual clash with the context concept
* Deprecate completions and fix tests
* Remove remaining dataclasses
* Use embedding component in ingest service
* Fix ingestion to have multipart and local upload
* Fix ingestion API
* Add chunk tests
* Add configurable paths
* Cleaning up
* Add more docs
* IngestResponse includes a list of IngestedDocs
* Use IngestedDoc in the Chunk document reference
* Rename ingest routes to ingest_router.py
* Fix test working directory for intellij
* Set testpaths for pytest
* Remove unused as_chat_engine
* Add .fleet ide to gitignore
* Make LLM and Embedding model configurable
* Fix imports and checks
* Let local_data folder exist empty in the repository
* Don't use certain metadata in LLM
* Remove long lines
* Fix windows installation
* Typos
* Update poetry.lock
* Add TODO for linux
* Script and first version of docs
* No Jekyll build
* Fix relative url to openapi json
* Change default docs values
* Move chromadb dependency to the general group
* Fix tests to use separate local_data
* Create CNAME
* Update CNAME
* Fix openapi.json relative path
* PrivateGPT logo
* WIP OpenAPI documentation metadata
* Add ingest script (#11)
* Add ingest script
* Fix broken name refactor
* Add ingest docs and Makefile script
* Linting
* Move transformers to main dependency
* Move torch to main dependencies
* Don't load HuggingFaceEmbedding in tests
* Fix lint
---------
Co-authored-by: Pablo Orgaz <pablo@Pablos-MacBook-Pro.local>
* Rename file to camel_case
* Commit settings-local.yaml
* Move documentation to public docs
* Fix docker image for linux
* Installation and Running the Server documentation
* Move back to docs folder, as it is the only supported by github pages
* Delete CNAME
* Create CNAME
* Delete CNAME
* Create CNAME
* Improved API documentation
* Fix lint
* Completions documentation
* Updated openapi scheme
* Ingestion API doc
* Minor doc changes
* Updated openapi scheme
* Chunks API documentation
* Embeddings and Health API, and homogeneous responses
* Revamp README with new skeleton of content
* More docs
* PrivateGPT logo
* Improve UI
* Update ingestion docu
* Update README with new sections
* Use context window in the retriever
* Gradio Documentation
* Add logo to UI
* Include Contributing and Community sections to README
* Update links to resources in the README
* Small README.md updates
* Wrap lines of README.md
* Don't put health under /v1
* Add copy button to Chat
* Architecture documentation
* Updated openapi.json
* Updated openapi.json
* Updated openapi.json
* Change UI label
* Update documentation
* Add releases link to README.md
* Gradio avatar and stop debug
* Readme update
* Clean old files
* Remove unused terraform checks
* Update twitter link.
* Disable minimum coverage
* Clean install message in README.md
---------
Co-authored-by: Pablo Orgaz <pablo@Pablos-MacBook-Pro.local>
Co-authored-by: Iván Martínez <ivanmartit@gmail.com>
Co-authored-by: RubenGuerrero <ruben.guerrero@boopos.com>
Co-authored-by: Daniel Gallego Vico <daniel.gallego@bq.com>