PrivateGPT provides an **API** containing all the building blocks required to
build **private, context-aware AI applications**.

The API follows and extends the OpenAI API standard, and supports both normal and streaming responses.

That means that, if you can use the OpenAI API in one of your tools, you can use your own PrivateGPT API instead,
with no code changes, **and for free** if you are running PrivateGPT in a `local` setup.
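As a minimal sketch of that drop-in compatibility, the snippet below points the official `openai` Python client at a locally running PrivateGPT server instead of OpenAI. The base URL and port (`8001`), the placeholder API key, and the model name are assumptions: adjust them to match your own configuration.

```python
from openai import OpenAI

# Point the standard OpenAI client at a local PrivateGPT server.
# URL and port are assumptions; change them to match your setup.
client = OpenAI(
    base_url="http://localhost:8001/v1",
    api_key="not-needed",  # placeholder; a local server does not validate the key
)

# The same request you would send to OpenAI, streamed token by token.
stream = client.chat.completions.create(
    model="private-gpt",  # illustrative name; the local server answers with its configured model
    messages=[{"role": "user", "content": "Summarize the documents I have ingested."}],
    stream=True,
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```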
Get started by understanding the [Main Concepts and Installation](/installation) and then dive into the [API Reference](/api-reference).

## Frequently Visited Resources
<Cards>
    <Card
        title="Main Concepts"
        icon="fa-solid fa-lines-leaning"
        href="/installation"
    />
    <Card
        title="API Reference"
        icon="fa-solid fa-code"
        href="/api-reference"
    />
    <Card
        title="Twitter"
        icon="fa-brands fa-twitter"
        href="https://twitter.com/PrivateGPT_AI"
    />
    <Card
        title="Discord Server"
        icon="fa-brands fa-discord"
        href="https://discord.gg/bK6mRVpErU"
    />
</Cards>
<br />
<Callout intent="info">
A working **Gradio UI client** is provided to test the API, together with a set of useful tools, such as a bulk
model download script, an ingestion script, a documents folder watch, etc.
</Callout>