# Frequently Asked Questions
## Models
### Which language models are supported?
We support models with a `llama.cpp` implementation which have been uploaded to [HuggingFace](https://huggingface.co/).
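For example, here is a minimal sketch using the Python SDK to load such a model by its GGUF filename (the filename below is illustrative; GPT4All downloads the file on first use if it is not already present):

```python
from gpt4all import GPT4All

# Illustrative filename: any GGUF model with a llama.cpp implementation works.
model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")
print(model.generate("Why is the sky blue?", max_tokens=200))
```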
### Which embedding models are supported?
We support SBert and Nomic Embed Text v1 & v1.5.
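For example, a minimal sketch with the Python SDK's `Embed4All` class (it uses the default SBert embedder unless you pass the filename of another supported embedding model, such as Nomic Embed Text):

```python
from gpt4all import Embed4All

# Default embedder; pass a model filename to choose a different one.
embedder = Embed4All()
vector = embedder.embed("The quick brown fox jumps over the lazy dog")
print(len(vector))  # dimensionality of the embedding
```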
## Software
### What software do I need?
All you need is to [install GPT4All](../index.md) onto your Windows, Mac, or Linux computer.
### Which SDK languages are supported?
Our SDK is in Python for usability; it consists of light bindings around [`llama.cpp`](https://github.com/ggerganov/llama.cpp) implementations that we contribute to for efficiency and accessibility on everyday computers.
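A brief sketch of the Python SDK in use (the model filename is illustrative):

```python
from gpt4all import GPT4All

model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")

# chat_session() keeps conversation context between prompts.
with model.chat_session():
    print(model.generate("Name three uses of local LLMs.", max_tokens=200))
    print(model.generate("Expand on the second one.", max_tokens=200))
```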
### Is there an API?
Yes, you can run your model in server mode with our [OpenAI-compatible API](https://platform.openai.com/docs/api-reference/completions), which you can configure in [settings](../gpt4all_desktop/settings.md#application-settings).
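For example, once the local API server is enabled in settings, you can point an OpenAI client at it. This is a sketch only: the port shown is the usual default (adjust it to match your settings), and the model name is illustrative.

```python
from openai import OpenAI

# GPT4All serves an OpenAI-compatible API locally; 4891 is the usual default port.
client = OpenAI(base_url="http://localhost:4891/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="Llama 3 8B Instruct",  # illustrative; use a model you have installed
    messages=[{"role": "user", "content": "What is GPT4All?"}],
    max_tokens=100,
)
print(response.choices[0].message.content)
```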
### Can I monitor a GPT4All deployment?
Yes, GPT4All [integrates](../gpt4all_python/monitoring.md) with [OpenLIT](https://github.com/openlit/openlit) so you can deploy LLMs with user interactions and hardware usage automatically monitored for full observability.
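A minimal sketch of that integration, assuming the `openlit` package is installed and you have an OTLP collector to receive telemetry (the endpoint and model filename below are illustrative):

```python
import openlit
from gpt4all import GPT4All

# Initialize OpenLIT before using the model; otlp_endpoint points at your
# collector (the address below is an illustrative local default).
openlit.init(otlp_endpoint="http://127.0.0.1:4318")

model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")  # illustrative model
model.generate("Hello, world!", max_tokens=50)
```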
### Is there a command line interface (CLI)?
[Yes](https://github.com/nomic-ai/gpt4all/tree/main/gpt4all-bindings/cli), we provide a lightweight CLI built on the Python client. We welcome further contributions!
## Hardware
### What hardware do I need?
GPT4All can run on CPU, Metal (Apple Silicon M1+), and GPU.
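For example, the Python SDK exposes a `device` parameter for choosing where inference runs; this is a sketch only, and actual GPU use depends on your hardware and drivers (the model filename is illustrative):

```python
from gpt4all import GPT4All

# device="gpu" asks the bindings to use an available GPU (Metal on Apple
# Silicon, otherwise a supported GPU backend); "cpu" forces CPU inference.
model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf", device="gpu")
print(model.generate("Hello!", max_tokens=32))
```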
### What are the system requirements?
Your CPU needs to support [AVX or AVX2 instructions](https://en.wikipedia.org/wiki/Advanced_Vector_Extensions) and you need enough RAM to load a model into memory. If the