Commit Graph

257 Commits

Author SHA1 Message Date
Ingrid Stevens
ab79a93fc8
Updates ollama to "config_settings.ollama.llm_model" 2024-03-22 10:38:57 +01:00
Ingrid Stevens
b81bfce770
Merge branch 'main' into update-ui-include-model-info-#1647 2024-03-16 14:51:46 +01:00
Otto L
1efac6a3fe
feat(llm - embed): Add support for Azure OpenAI (#1698)
* Add support for Azure OpenAI

* fix: wrong default api_version

Should be dashes instead of underscores.
see: https://learn.microsoft.com/en-us/azure/ai-services/openai/reference

* fix: code styling

applied "make check" changes

* refactor: extend documentation

* mention azopenai as available option and extras
* add recommended section
* include settings-azopenai.yaml configuration file

* fix: documentation
2024-03-15 16:49:50 +01:00
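A minimal sketch of the api_version gotcha the fix above refers to, using the standard `openai` Python client rather than this project's actual wiring (endpoint and deployment names are placeholders):

```python
from openai import AzureOpenAI  # pip install "openai>=1.0"

client = AzureOpenAI(
    azure_endpoint="https://my-resource.openai.azure.com",  # hypothetical resource
    api_version="2023-05-15",  # dashes, not underscores ("2023_05_15" is rejected)
    api_key="...",
)
reply = client.chat.completions.create(
    model="my-gpt35-deployment",  # the Azure deployment name, not the bare model name
    messages=[{"role": "user", "content": "Hello"}],
)
print(reply.choices[0].message.content)
```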
Brett England
258d02d87c
fix(docs): Minor documentation amendment (#1739)
* Unify pgvector and postgres connection settings

* Remove local changes

* Update file pgvector->postgres

* postgresql should be postgres
2024-03-15 16:36:32 +01:00
Brett England
63de7e4930
feat: unify settings for vector and nodestore connections to PostgreSQL (#1730)
* Unify pgvector and postgres connection settings

* Remove local changes

* Update file pgvector->postgres
2024-03-15 09:55:17 +01:00
Brett England
68b3a34b03
feat(nodestore): add Postgres for the doc and index store (#1706)
* Adding Postgres for the doc and index store

* Adding documentation.  Rename postgres database local->simple.  Postgres storage dependencies

* Update documentation for postgres storage

* Renaming feature to nodestore

* update docstore -> nodestore in doc

* missed some docstore changes in doc

* Updated poetry.lock

* Formatting updates to pass ruff/black checks

* Correction to unreachable code!

* Format adjustment to pass black test

* Adjust extra inclusion name for vector pg

* extra dep change for pg vector

* storage-postgres -> storage-nodestore-postgres

* Hash change on poetry lock
2024-03-14 17:12:33 +01:00
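A sketch of what pointing both stores at Postgres might look like under the llama-index 0.10 package layout; module paths and parameters are assumptions, not taken from this commit's diff:

```python
from llama_index.storage.docstore.postgres import PostgresDocumentStore
from llama_index.storage.index_store.postgres import PostgresIndexStore

conn = dict(host="localhost", port="5432", database="postgres",
            user="postgres", password="...")  # placeholder credentials
doc_store = PostgresDocumentStore.from_params(**conn)  # document (node) store
index_store = PostgresIndexStore.from_params(**conn)   # index metadata store
```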
Iván Martínez
d17c34e81a
fix(settings): set default tokenizer to avoid running make setup fail (#1709) 2024-03-13 09:53:40 +01:00
Ingrid Stevens
b12d7f8b63 return None rather than raising an error 2024-03-12 10:07:48 +01:00
Ingrid Stevens
8459025260 changes local to llamacpp 2024-03-12 10:03:28 +01:00
Andrew Jiang
84ad16af80
feat(docs): upgrade fern (#1596) 2024-03-11 23:02:56 +01:00
Arun Yadav
821bca32e9
feat(local): tiktoken cache within repo for offline (#1467) 2024-03-11 22:55:13 +01:00
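The general technique, sketched: point tiktoken at a cache directory kept inside the repo so encodings resolve without network access. The environment variable is tiktoken's own; the path is an assumption:

```python
import os

os.environ["TIKTOKEN_CACHE_DIR"] = "./tiktoken_cache"  # must be set before first use

import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # loads from the local cache if present
print(len(enc.encode("offline tokenization")))
```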
icsy7867
02dc83e8e9
feat(llm): adds several settings for llamacpp and ollama (#1703) 2024-03-11 22:51:05 +01:00
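To illustrate the kind of knobs such settings expose, here is llama-cpp-python used directly; the values and model path are examples, not project defaults:

```python
from llama_cpp import Llama

llm = Llama(model_path="models/mistral-7b-instruct.Q4_K_M.gguf")  # hypothetical path
out = llm(
    "Q: Name the planets. A:",
    temperature=0.7, top_k=40, top_p=0.9, repeat_penalty=1.1,  # typical sampling settings
    max_tokens=128,
)
print(out["choices"][0]["text"])
```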
Hoffelhas
410bf7a71f
feat(ui): maintain score order when curating sources (#1643)
* Update ui.py

Changed 'curated_sources' away from a plain list in order to maintain score order when returning the curated sources.

* Maintain score order after curating sources
2024-03-11 22:27:30 +01:00
icsy7867
290b9fb084
feat(ui): add sources check to not repeat identical sources (#1705) 2024-03-11 22:24:18 +01:00
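Taken together, the two source-curation fixes above amount to something like the following sketch; the identity key and field names are assumptions:

```python
def curate_sources(nodes):
    """Drop duplicate sources while preserving descending score order."""
    seen, curated = set(), []
    for node in sorted(nodes, key=lambda n: n.score or 0.0, reverse=True):
        key = (node.metadata.get("file_name"), node.metadata.get("page_label"))
        if key not in seen:  # skip identical sources
            seen.add(key)
            curated.append(node)
    return curated
```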
github-actions[bot]
1b03b369c0
chore(main): release 0.4.0 (#1628)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2024-03-06 17:53:35 +01:00
Iván Martínez
45f05711eb
feat: Upgrade to LlamaIndex to 0.10 (#1663)
* Extract optional dependencies

* Separate local mode into llms-llama-cpp and embeddings-huggingface for clarity

* Support Ollama embeddings

* Upgrade to llamaindex 0.10.14. Remove legacy use of ServiceContext in ContextChatEngine

* Fix vector retriever filters
2024-03-06 17:51:30 +01:00
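The "Support Ollama embeddings" bullet above, sketched with the 0.10 namespaces; the model name and URL are placeholders:

```python
from llama_index.embeddings.ollama import OllamaEmbedding

embed_model = OllamaEmbedding(model_name="nomic-embed-text",
                              base_url="http://localhost:11434")
vector = embed_model.get_text_embedding("hello world")
print(len(vector))  # dimensionality depends on the embedding model
```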
Ingrid Stevens
133c1da13a refines get_model_label()
removes reliance on PGPT_PROFILES;
Instead, uses settings().llm.mode.
Possible options: "local", "openai", "openailike", "sagemaker", "mock", "ollama".
2024-02-28 17:01:46 +01:00
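A plausible reconstruction of the refined helper, based only on the messages in this log (the openai field name is an assumption):

```python
from private_gpt.settings.settings import settings

def get_model_label() -> str | None:
    """Derive the UI label from settings().llm.mode instead of PGPT_PROFILES."""
    mode = settings().llm.mode
    if mode == "ollama":
        return settings().ollama.llm_model  # field name per the topmost commit
    if mode == "openai":
        return settings().openai.model      # assumed field name
    return None
```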
Ingrid Stevens
5620248aae
Update ui.py
Related to Issue: Add Model Information to ChatInterface label in private_gpt/ui/ui.py #1647

Introduces a new function `get_model_label` that dynamically determines the model label based on the PGPT_PROFILES environment variable. The function returns the model label if PGPT_PROFILES is set to either "ollama" or "vllm", and None otherwise.

The get_model_label function is then used to set the label text for the chatbot interface, which includes the LLM mode and the model label (if available). This change allows the UI to display the correct model label based on the user's configuration.

Please review the changes and let me know if you have any feedback or suggestions. Thank you!
2024-02-24 15:15:12 +01:00
Daniel Gallego Vico
12f3a39e8a
Update x handle to zylon private gpt (#1644) 2024-02-23 15:51:35 +01:00
TQ
cd40e3982b
feat(Vector): support pgvector (#1624) 2024-02-20 15:29:26 +01:00
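A sketch of wiring a pgvector-backed store through LlamaIndex; connection values are placeholders, and `embed_dim` must match the embedding model in use:

```python
from llama_index.vector_stores.postgres import PGVectorStore

vector_store = PGVectorStore.from_params(
    host="localhost", port="5432", database="postgres",
    user="postgres", password="...",
    table_name="embeddings",  # hypothetical table name
    embed_dim=384,            # dimension of the chosen embedding model
)
```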
github-actions[bot]
066ea5bf28
chore(main): release 0.3.0 (#1413)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2024-02-16 17:42:39 +01:00
Iván Martínez
aa13afde07
feat(UI): Select file to Query or Delete + Delete ALL (#1612)
---------

Co-authored-by: Robin Boone <rboone@sofics.com>
2024-02-16 17:36:09 +01:00
icsy7867
24fb80ca38
fix(UI): Updated ui.py so the CPU is no longer bottlenecked.
Updated ui.py to include a small sleep timer while building the stream deltas. This recursive function fires so quickly that it eats up too much of the CPU; the small sleep frees the CPU from being bottlenecked. The value can go lower, but 0.02 or 0.025 seems to work well. (#1589)

Co-authored-by: root <root@wesgitlabdemo.icl.gtri.org>
2024-02-16 12:52:14 +01:00
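The fix in miniature: a tiny sleep inside the delta-building loop keeps it from pinning a CPU core. Illustrative, not the exact diff:

```python
import time

def yield_deltas(completion):
    full_response = ""
    for delta in completion:
        full_response += delta
        time.sleep(0.02)  # 0.02-0.025 s works well per the commit message
        yield full_response
```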
Ygal Blum
6bbec79583
feat(llm): Add support for Ollama LLM (#1526) 2024-02-09 15:50:50 +01:00
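The common wiring for an Ollama-backed LLM, sketched with the current llama-index namespace (model and URL are placeholders):

```python
from llama_index.llms.ollama import Ollama

llm = Ollama(model="mistral", base_url="http://localhost:11434")
print(llm.complete("Say hello in one word."))
```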
Nick Smirnov
b178b51451
feat(bulk-ingest): Add --ignored Flag to Exclude Specific Files and Directories During Ingestion (#1432) 2024-02-07 19:59:32 +01:00
Iván Martínez
24fae660e6
feat: Add stream information to generate SDKs (#1569) 2024-02-02 16:14:22 +01:00
Pablo Orgaz
3e67e21d38
Add embedding mode config (#1541) 2024-01-25 10:55:32 +01:00
Naveen Kannan
869233f0e4
fix: Adding an LLM param to fix broken generator from llamacpp (#1519) 2024-01-17 18:10:45 +01:00
CognitiveTech
e326126d0d
feat: add mistral + chatml prompts (#1426) 2024-01-16 22:51:14 +01:00
Robert Gay
6191bcdbd6
fix: minor bug in chat stream output - python error being serialized (#1449) 2024-01-16 16:41:20 +01:00
Iván Martínez
d3acd85fe3
fix(tests): load the test settings only when running tests
The previous implementation caused false positives with the latest version of LlamaIndex.
2024-01-09 12:03:16 +01:00
Guido Schulz
0a89d76cc5
fix(docs): Update quickstart doc and set version in pyproject.toml to 0.2.0 2023-12-26 13:09:31 +01:00
Matthew Hill
2d27a9f956
feat(llm): Add openailike llm mode (#1447)
This mode behaves the same as the openai mode, except that it allows setting custom models not
supported by OpenAI. It can be used with any tool that serves models from an OpenAI compatible API.

Implements #1424
2023-12-26 10:26:08 +01:00
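What "openailike" enables, in miniature: an OpenAI-compatible client pointed at any server that speaks the same API (URL and model name are placeholders):

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1",  # e.g. a local vLLM server
                api_key="not-needed")                 # many local servers ignore it
resp = client.chat.completions.create(
    model="my-custom-model",  # a model OpenAI itself does not serve
    messages=[{"role": "user", "content": "Hello"}],
)
print(resp.choices[0].message.content)
```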
imartinez
fee9f08ef3 Move back to 3900 for the context window to avoid melting local machines 2023-12-22 18:21:43 +01:00
Iván Martínez
fde2b942bc
fix(deploy): fix local and external dockerfiles 2023-12-22 14:16:46 +01:00
Iván Martínez
4c69c458ab
Improve ingest logs (#1438) 2023-12-21 17:13:46 +01:00
Iván Martínez
4780540870
feat(settings): Configurable context_window and tokenizer (#1437) 2023-12-21 14:49:35 +01:00
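A sketch of what the two settings imply: token counting against a configurable window, with a tokenizer loaded by name. The model name mirrors the Mistral default adopted in the commit below, the 3900 figure comes from the commit above, and the wiring itself is an assumption:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
context_window = 3900  # conservative default, per the commit above

tokens = tokenizer.encode("counting tokens against the context window")
assert len(tokens) <= context_window
```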
Iván Martínez
6eeb95ec7f
feat(API): Ingest plain text (#1417)
* Add ingest/text route to ingest plain text

* Add new ingest text test and adapt ingest/file ones

* Include new API in docs

* Remove duplicated logic
2023-12-18 21:47:05 +01:00
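Calling the new route, sketched with `requests`; the port and the exact body fields are assumptions based on the message, not checked against the API docs:

```python
import requests

resp = requests.post(
    "http://localhost:8001/v1/ingest/text",  # assumed default host/port and path
    json={"file_name": "notes.txt", "text": "Plain text to ingest."},
)
print(resp.status_code, resp.json())
```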
Pablo Orgaz
059f35840a
fix(docker): docker broken copy (#1419) 2023-12-18 16:55:18 +01:00
Iván Martínez
8ec7cf49f4
feat(settings): Update default model to TheBloke/Mistral-7B-Instruct-v0.2-GGUF (#1415)
* Update LlamaCPP dependency

* Default to TheBloke/Mistral-7B-Instruct-v0.2-GGUF

* Fix API docs
2023-12-17 16:11:08 +01:00
Rohit Das
c71ae7cee9
feat(ui): make chat area stretch to fill the screen (#1397) 2023-12-17 12:02:13 +01:00
cognitivetech
2564f8d2bb
fix(settings): correct yaml multiline string (#1403) 2023-12-16 19:02:46 +01:00
Eliott Bouhana
4e496e970a
docs: remove misleading comment about pgpt working with python 3.12 (#1394)
I was misled into believing I could install using python 3.12 whereas the pyproject.toml explicitly states otherwise. This PR only removes this comment to make sure other people are not also trapped 😄
2023-12-15 21:35:02 +01:00
Federico Grandi
3582764801
ci: fix preview docs checkout ref (#1393) 2023-12-12 20:33:34 +01:00
Federico Grandi
1d28ae2915
docs: fix minor capitalization typo (#1392) 2023-12-12 20:31:38 +01:00
github-actions[bot]
e8ac51bba4
chore(main): release 0.2.0 (#1387)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2023-12-10 20:08:12 +01:00
3ly-13
145f3ec9f4
feat(ui): Allows User to Set System Prompt via "Additional Options" in Chat Interface (#1353) 2023-12-10 19:45:14 +01:00
3ly-13
a072a40a7c
Allow setting OpenAI model in settings (#1386)
feat(settings): Allow setting openai model to be used. Default to GPT 3.5
2023-12-09 20:13:00 +01:00
Louis Melchior
a3ed14c58f
feat(llm): drop default_system_prompt (#1385)
As discussed on Discord, the decision has been made to remove the system prompts by default, to better segregate the API and the UI usages.

A concurrent PR (#1353) is enabling the dynamic setting of a system prompt in the UI.

Therefore, if UI users want to use a custom system prompt, they can specify one directly in the UI.
If API users want to use a custom prompt, they can pass it directly in the messages they send to the API.

In light of the two use cases above, it becomes clear that a default system_prompt does not need to exist.
2023-12-08 23:13:51 +01:00
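With no default system prompt, API callers supply their own per request, in the standard OpenAI-style message shape (a sketch):

```python
messages = [
    {"role": "system", "content": "Answer only from the provided context."},
    {"role": "user", "content": "Summarize the ingested document."},
]
```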
Iván Martínez
f235c50be9
Delete old docs (#1384) 2023-12-08 22:39:23 +01:00