feat: Upgrade LlamaIndex to 0.10 (#1663)

* Extract optional dependencies

* Separate local mode into llms-llama-cpp and embeddings-huggingface for clarity

* Support Ollama embeddings

* Upgrade to llama-index 0.10.14. Remove legacy use of ServiceContext in ContextChatEngine (see the sketch after this list)

* Fix vector retriever filters
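
For context, here is a minimal, hypothetical sketch of what the ServiceContext removal and the Ollama embeddings support look like on LlamaIndex 0.10: the global `Settings` object (or explicit constructor arguments) replaces `ServiceContext`, and embeddings come from the separate `llama-index-embeddings-ollama` integration package. The model names and document below are illustrative placeholders, not the actual PrivateGPT wiring.

```python
# Hypothetical LlamaIndex 0.10 sketch, not the exact PrivateGPT code.
from llama_index.core import Document, Settings, VectorStoreIndex
from llama_index.core.chat_engine import ContextChatEngine
from llama_index.embeddings.ollama import OllamaEmbedding  # llama-index-embeddings-ollama
from llama_index.llms.ollama import Ollama  # llama-index-llms-ollama

# 0.10 style: configure shared components via Settings instead of a ServiceContext.
Settings.llm = Ollama(model="mistral", base_url="http://localhost:11434")
Settings.embed_model = OllamaEmbedding(
    model_name="nomic-embed-text", base_url="http://localhost:11434"
)

# A toy document so the example is self-contained.
documents = [Document(text="PrivateGPT now runs on LlamaIndex 0.10.")]
index = VectorStoreIndex.from_documents(documents)

# ContextChatEngine no longer takes a ServiceContext; pass the LLM explicitly or rely on Settings.
chat_engine = ContextChatEngine.from_defaults(
    retriever=index.as_retriever(similarity_top_k=2),
    llm=Settings.llm,
    system_prompt="Answer only from the provided context.",
)
print(chat_engine.chat("Which LlamaIndex version does PrivateGPT use now?"))
```

Running this sketch assumes a local Ollama server with the referenced models already pulled (e.g. `ollama pull mistral` and `ollama pull nomic-embed-text`).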
Commit: 45f05711eb (parent: 12f3a39e8a)
Author: Iván Martínez
Date: 2024-03-06 17:51:30 +01:00
Committed by: GitHub
43 changed files with 1474 additions and 1396 deletions


@@ -1,21 +0,0 @@
## Local Installation steps
The steps in [Installation](/installation) section are better explained and cover more
setup scenarios (macOS, Windows, Linux).
But if you like one-liners, have python3.11 installed, and you are running a UNIX (macOS or Linux)
system, you can get up and running on CPU in a few lines:
```bash
git clone https://github.com/imartinez/privateGPT && cd privateGPT && \
python3.11 -m venv .venv && source .venv/bin/activate && \
pip install --upgrade pip poetry && poetry install --with ui,local && ./scripts/setup
# Launch the privateGPT API server **and** the gradio UI
poetry run python3.11 -m private_gpt
# In another terminal, create a new browser window on your private GPT!
open http://127.0.0.1:8001/
```
Is the above not working, or is it too slow? Do **you want to run it on GPU(s)**?
Please check the more detailed [installation guide](/installation).


@@ -1,20 +1,19 @@
## Introduction 👋
PrivateGPT provides an **API** containing all the building blocks required to
build **private, context-aware AI applications**.
The API follows and extends the OpenAI API standard, and supports both normal and streaming responses.
That means that, if you can use the OpenAI API in one of your tools, you can use your own PrivateGPT API instead,
with no code changes, **and for free** if you are running privateGPT in `local` mode.
Looking for the installation quickstart? [Quickstart installation guide for Linux and macOS](/overview/welcome/quickstart).
Do you want to install it on Windows? Or do you want to take full advantage of your hardware for better performance?
The installation guide will help you in the [Installation section](/installation).
with no code changes, **and for free** if you are running privateGPT in a `local` setup.
Get started by understanding the [Main Concepts and Installation](/installation) and then dive into the [API Reference](/api-reference).
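
Because the API follows the OpenAI standard, an existing OpenAI client can usually be pointed at a running PrivateGPT instance just by changing the base URL. The snippet below is an illustrative sketch, assuming the default local server on port 8001 and the `/v1` routes; the `api_key` and `model` values are placeholders rather than values the local server is known to validate.

```python
# Illustrative sketch: reuse the official OpenAI client against a local PrivateGPT server.
# Assumes PrivateGPT is listening on http://localhost:8001 and exposes the /v1 routes.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8001/v1",
    api_key="not-needed",  # placeholder: a local PrivateGPT setup does not require a real key
)

response = client.chat.completions.create(
    model="private-gpt",  # placeholder name; the local server decides which model actually answers
    messages=[{"role": "user", "content": "Summarize the ingested documents in one sentence."}],
    stream=False,
)
print(response.choices[0].message.content)
```

Streaming works the same way on the client side: pass `stream=True` and iterate over the returned chunks.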
## Frequently Visited Resources
<Cards>
<Card
title="Main Concepts"
icon="fa-solid fa-lines-leaning"
href="/installation"
/>
<Card
title="API Reference"
icon="fa-solid fa-code"
@@ -32,6 +31,9 @@ The installation guide will help you in the [Installation section](/installation)
/>
</Cards>
<br />
<Callout intent = "info">
A working **Gradio UI client** is provided to test the API, together with a set of useful tools such as a bulk
model download script, an ingestion script, a documents folder watch, etc.