### IMPORTANT: THIS IMAGE CAN ONLY BE RUN ON LINUX DOCKER
### You will run into a segfault on macOS
FROM python:3.11.6-slim-bookworm as base

# Install poetry
RUN pip install pipx
RUN python3 -m pipx ensurepath
RUN pipx install poetry
ENV PATH="/root/.local/bin:$PATH"
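# pipx installs the Poetry executable under /root/.local/bin (the build stages
# run as root), hence the PATH extension above.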

# Dependencies to build llama-cpp and wget
RUN apt update && apt install -y \
    libopenblas-dev \
    ninja-build \
    build-essential \
    pkg-config \
    wget
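# build-essential, ninja-build and pkg-config provide the toolchain used to
# compile llama-cpp-python from source below; libopenblas-dev supplies the
# OpenBLAS backend it links against.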

# https://python-poetry.org/docs/configuration/#virtualenvsin-project
ENV POETRY_VIRTUALENVS_IN_PROJECT=true
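# With in-project virtualenvs, Poetry creates .venv inside the WORKDIR, which
# lets the app stage copy the environment wholesale and the ENTRYPOINT invoke
# .venv/bin/python directly.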

FROM base as dependencies
WORKDIR /home/worker/app
COPY pyproject.toml poetry.lock ./

RUN poetry install --with local
RUN poetry install --with ui
RUN CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" \
    poetry run pip install --force-reinstall --no-cache-dir llama-cpp-python
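# Reinstalling with CMAKE_ARGS set recompiles llama-cpp-python from source so
# that it links against OpenBLAS instead of using the plain prebuilt wheel.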

FROM base as app

ENV PYTHONUNBUFFERED=1
ENV PORT=8080
ENV PGPT_PROFILES=docker
EXPOSE 8080
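# PGPT_PROFILES=docker presumably makes private-gpt load settings-docker.yaml
# on top of the default settings.yaml, following the project's
# settings-<profile>.yaml convention.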

# Prepare a non-root user
RUN adduser --system worker
WORKDIR /home/worker/app

# Copy everything, including the virtual environment
COPY --chown=worker --from=dependencies /home/worker/app .
COPY --chown=worker . .
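# The first COPY brings in the pre-built .venv from the dependencies stage;
# the second overlays the project source on top of it.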

USER worker
ENTRYPOINT .venv/bin/python -m private_gpt
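
# A minimal sketch of building and running this image; the tag and host port
# mapping are illustrative, not defined by this file:
#
#   docker build -t private-gpt .
#   docker run -p 8080:8080 private-gpt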