Commit Graph

333 Commits

Author SHA1 Message Date
Nick Smirnov
b178b51451
feat(bulk-ingest): Add --ignored Flag to Exclude Specific Files and Directories During Ingestion (#1432) 2024-02-07 19:59:32 +01:00
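A minimal sketch of how the new flag might be used from a script; the bulk-ingest entry point path, folder argument, and ignore patterns shown here are assumptions, not taken from the commit.

```python
# Sketch: invoke the bulk-ingest script with the new --ignored flag.
# The script path, folder argument, and ignore patterns are assumptions.
import subprocess

subprocess.run(
    [
        "python", "scripts/ingest_folder.py", "path/to/documents",
        "--ignored", "*.tmp", "node_modules",  # exclude these files/dirs during ingestion (per #1432)
    ],
    check=True,
)
```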
Iván Martínez
24fae660e6
feat: Add stream information to generate SDKs (#1569) 2024-02-02 16:14:22 +01:00
Pablo Orgaz
3e67e21d38
Add embedding mode config (#1541) 2024-01-25 10:55:32 +01:00
Naveen Kannan
869233f0e4
fix: Adding an LLM param to fix broken generator from llamacpp (#1519) 2024-01-17 18:10:45 +01:00
CognitiveTech
e326126d0d
feat: add mistral + chatml prompts (#1426) 2024-01-16 22:51:14 +01:00
Robert Gay
6191bcdbd6
fix: minor bug in chat stream output - python error being serialized (#1449) 2024-01-16 16:41:20 +01:00
Iván Martínez
d3acd85fe3
fix(tests): load the test settings only when running tests
The previous implementation caused false positives with the latest version of LlamaIndex
2024-01-09 12:03:16 +01:00
Guido Schulz
0a89d76cc5
fix(docs): Update quickstart doc and set version in pyproject.toml to 0.2.0 2023-12-26 13:09:31 +01:00
Matthew Hill
2d27a9f956
feat(llm): Add openailike llm mode (#1447)
This mode behaves the same as the openai mode, except that it allows setting custom models not
supported by OpenAI. It can be used with any tool that serves models from an OpenAI-compatible API.

Implements #1424
2023-12-26 10:26:08 +01:00
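A sketch of a settings fragment pointing this mode at an OpenAI-compatible server; the key names (`llm.mode`, `openai.api_base`, `openai.model`), the URL, and the model name are assumptions based on the commit description.

```python
# Sketch of a settings override for the openailike mode; all key names are assumptions.
import yaml  # PyYAML

settings_fragment = yaml.safe_load("""
llm:
  mode: openailike                     # behaves like the openai mode
openai:
  api_base: http://localhost:8000/v1   # any OpenAI-compatible server (assumed key)
  model: my-custom-model               # a model not offered by OpenAI (assumed key)
""")
print(settings_fragment["llm"]["mode"])  # -> "openailike"
```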
imartinez
fee9f08ef3 Move back to 3900 for the context window to avoid melting local machines 2023-12-22 18:21:43 +01:00
Iván Martínez
fde2b942bc
fix(deploy): fix local and external dockerfiles 2023-12-22 14:16:46 +01:00
Iván Martínez
4c69c458ab
Improve ingest logs (#1438) 2023-12-21 17:13:46 +01:00
Iván Martínez
4780540870
feat(settings): Configurable context_window and tokenizer (#1437) 2023-12-21 14:49:35 +01:00
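A sketch of what the configurable values might look like; the key names under `llm` are assumptions inferred from the commit title, and the 3900 value echoes the "Move back to 3900 for the context window" commit above.

```python
# Sketch of configurable context window and tokenizer; key names are assumptions.
llm_settings = {
    "context_window": 3900,  # matches the "move back to 3900" commit above
    "tokenizer": "mistralai/Mistral-7B-Instruct-v0.2",  # assumed Hugging Face tokenizer id
}
print(llm_settings)
```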
Iván Martínez
6eeb95ec7f
feat(API): Ingest plain text (#1417)
* Add ingest/text route to ingest plain text

* Add new ingest text test and adapt ingest/file ones

* Include new API in docs

* Remove duplicated logic
2023-12-18 21:47:05 +01:00
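A hedged example of calling the new plain-text ingestion route; the endpoint path, port, and payload field names are assumptions based on the PR description.

```python
# Sketch: ingest plain text through the new route; endpoint path, port,
# and payload field names are assumptions based on the PR description.
import requests

resp = requests.post(
    "http://localhost:8001/v1/ingest/text",
    json={
        "file_name": "notes.txt",  # assumed field: name to index the text under
        "text": "Plain text to ingest, no file upload required.",
    },
)
resp.raise_for_status()
print(resp.json())
```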
Pablo Orgaz
059f35840a
fix(docker): broken docker copy (#1419) 2023-12-18 16:55:18 +01:00
Iván Martínez
8ec7cf49f4
feat(settings): Update default model to TheBloke/Mistral-7B-Instruct-v0.2-GGUF (#1415)
* Update LlamaCPP dependency

* Default to TheBloke/Mistral-7B-Instruct-v0.2-GGUF

* Fix API docs
2023-12-17 16:11:08 +01:00
Rohit Das
c71ae7cee9
feat(ui): make chat area stretch to fill the screen (#1397) 2023-12-17 12:02:13 +01:00
cognitivetech
2564f8d2bb
fix(settings): correct yaml multiline string (#1403) 2023-12-16 19:02:46 +01:00
Eliott Bouhana
4e496e970a
docs: remove misleading comment about pgpt working with python 3.12 (#1394)
I was misled into believing I could install using python 3.12 whereas the pyproject.toml explicitly states otherwise. This PR only removes this comment to make sure other people are not also trapped 😄
2023-12-15 21:35:02 +01:00
Federico Grandi
3582764801
ci: fix preview docs checkout ref (#1393) 2023-12-12 20:33:34 +01:00
Federico Grandi
1d28ae2915
docs: fix minor capitalization typo (#1392) 2023-12-12 20:31:38 +01:00
github-actions[bot]
e8ac51bba4
chore(main): release 0.2.0 (#1387)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2023-12-10 20:08:12 +01:00
3ly-13
145f3ec9f4
feat(ui): Allows User to Set System Prompt via "Additional Options" in Chat Interface (#1353) 2023-12-10 19:45:14 +01:00
3ly-13
a072a40a7c
Allow setting OpenAI model in settings (#1386)
feat(settings): Allow setting the openai model to be used. Defaults to GPT 3.5
2023-12-09 20:13:00 +01:00
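A sketch of the corresponding setting; the key name is an assumption, while the GPT 3.5 default comes from the commit message.

```python
# Sketch: choose the OpenAI model via settings; key name is an assumption,
# the GPT 3.5 default comes from the commit message.
openai_settings = {
    "model": "gpt-3.5-turbo",  # override with e.g. "gpt-4" if desired
}
```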
Louis Melchior
a3ed14c58f
feat(llm): drop default_system_prompt (#1385)
As discussed on Discord, the decision has been made to remove the default system prompt, to better separate API and UI usage.

A concurrent PR (#1353) is enabling the dynamic setting of a system prompt in the UI.

Therefore, if UI users want to use a custom system prompt, they can specify one directly in the UI.
If API users want to use a custom prompt, they can pass it directly in the messages they send to the API.

In light of the two use cases above, it becomes clear that a default system_prompt does not need to exist.
2023-12-08 23:13:51 +01:00
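A hedged illustration of the API-side alternative described above, passing the system prompt directly in the messages; the endpoint path, port, and payload shape are assumptions (OpenAI-style chat format).

```python
# Sketch: supply a custom system prompt per request instead of relying on a
# global default. Endpoint path, port, and payload shape are assumptions
# (OpenAI-style chat format).
import requests

resp = requests.post(
    "http://localhost:8001/v1/chat/completions",
    json={
        "messages": [
            {"role": "system", "content": "You are a terse assistant."},
            {"role": "user", "content": "Summarize the ingested documents."},
        ],
    },
)
print(resp.json())
```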
Iván Martínez
f235c50be9
Delete old docs (#1384) 2023-12-08 22:39:23 +01:00
EEmlan
9302620eac
Adding German-speaking model to documentation (#1374) 2023-12-08 11:26:25 +01:00
Max Zangs
9cf972563e
Add setup option to Makefile (#1368) 2023-12-08 10:34:12 +01:00
github-actions[bot]
3d301d0c6f
chore(main): release 0.1.0 (#1094)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2023-12-01 14:45:54 +01:00
lopagela
56af625d71
Fix the parallel ingestion mode, and make it available through conf (#1336)
* Fix the parallel ingestion mode, and make it available through conf

Also updated the documentation to show how to configure the ingest mode.

* PR feedback: redirect to documentation
2023-11-30 11:41:55 +01:00
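A sketch of how the ingest mode might be selected in configuration; the key names and values are assumptions based on the PR description.

```python
# Sketch of selecting the ingest mode via configuration; key names and values
# are assumptions based on the PR description.
embedding_settings = {
    "ingest_mode": "parallel",  # e.g. "simple" vs. "parallel" (values assumed)
    "count_workers": 4,         # assumed knob for the number of parallel workers
}
```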
Francisco García Sierra
b7ca7d35a0
Update ingest api docs with Windows support (#1289) 2023-11-29 20:56:37 +01:00
ishaandatta
28d03fdda8
Adding working combination of LLM and Embedding Model to recipes (#1315)
Co-authored-by: ishaandatta <ishaandatta50@gmail.com>
2023-11-29 20:54:22 +01:00
Phi Long
aabdb046ae
Add docker compose (#1277)
Co-authored-by: philongn <philongn@theugroup.co>
Co-authored-by: Pablo Orgaz <pabloogc@gmail.com>
2023-11-29 16:46:40 +01:00
Iván Martínez
64ed9cd872
Allow passing a system prompt (#1318) 2023-11-29 15:51:19 +01:00
Gianni Acquisto
9c192ddd73
Added max_new_tokens as a config option to llm yaml block (#1317)
* added max_new_tokens as a configuration option to the llm block in settings

* Update fern/docs/pages/manual/settings.mdx

Co-authored-by: lopagela <lpglm@orange.fr>

* Update private_gpt/settings/settings.py

Add default value for max_new_tokens = 256

Co-authored-by: lopagela <lpglm@orange.fr>

* Addressed location of docs comment

* reformatting from running 'make check'

* remove default config value from settings.yaml

---------

Co-authored-by: lopagela <lpglm@orange.fr>
2023-11-26 19:17:29 +01:00
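A sketch of the llm block with the new option; 256 is the default mentioned in the commit, while the surrounding key names are assumptions.

```python
# Sketch of the llm block with the new max_new_tokens option; 256 is the
# default mentioned in the commit, the surrounding key names are assumptions.
import yaml  # PyYAML, used only to show the YAML shape

llm_block = yaml.safe_load("""
llm:
  max_new_tokens: 256   # cap on the number of tokens generated per response
""")
print(llm_block["llm"]["max_new_tokens"])
```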
Gianni Acquisto
baf29f06fa
Adding docs about embeddings settings + adding the embedding.mode: local in mock profile (#1316) 2023-11-26 17:32:11 +01:00
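A sketch of the embedding setting touched here; only `embedding.mode: local` comes from the commit title, everything else is assumed.

```python
# Sketch of the embedding setting; only "mode: local" comes from the commit
# title, everything else is assumed.
mock_profile_embedding = {
    "mode": "local",  # run embeddings locally, as set in the mock profile
}
```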
lopagela
bafdd3baf1
Ingestion speedup: multiple strategies (#1309) 2023-11-25 20:12:09 +01:00
Iván Martínez
546ba33e6f
Update readme with supporters info (#1311) 2023-11-25 18:35:59 +01:00
Iván Martínez
944c43bfa8
Multi language support - fern debug (#1307)
---------

Co-authored-by: Louis <lpglm@orange.fr>
Co-authored-by: LeMoussel <cnhx27@gmail.com>
2023-11-25 14:34:23 +01:00
Iván Martínez
e8d88f8952
Update preview-docs.yml
Use pull_request_target to be able to access FERN publication secret
2023-11-25 10:14:04 +01:00
Iván Martínez
c6d6e0e71b
Update preview-docs.yml to enable debug 2023-11-24 17:51:33 +01:00
Iván Martínez
510caa576b
Make qdrant the default vector db (#1285)
* Make qdrant the default vector db

---------

Co-authored-by: Pablo Orgaz <pabloogc@gmail.com>
Co-authored-by: lopagela <lpglm@orange.fr>
2023-11-20 16:19:22 +01:00
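A sketch of what selecting the vector database might look like in settings; the key names and the on-disk path are assumptions.

```python
# Sketch: qdrant as the default vector database; key names and the on-disk
# path are assumptions.
vectorstore_settings = {"database": "qdrant"}
qdrant_settings = {"path": "local_data/private_gpt/qdrant"}  # assumed local storage path
```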
Francisco García Sierra
f1cbff0fb7
fix: Windows permission error on ingest service tmp files (#1280) 2023-11-20 10:08:03 +01:00
lopagela
a09cd7a892
Update llama_index to 0.9.3 (#1278)
* Update llama_index to 0.9.3

Had to change some imports because of a breaking change during the llama_index update to 0.9.0

* Update poetry.lock after update of llama_index
2023-11-19 18:49:36 +01:00
lopagela
36f69eed0f
Refactor documentation architecture (#1264)
* Refactor documentation architecture

Split into several tabs and sections

* Fix Fern's docs.yml after PR review

Thank you Danny!

Co-authored-by: dannysheridan <danny@buildwithfern.com>

* Re-add quickstart in the overview tab

It went missing after a refactoring of the doc architecture

* Documentation writing

* Adapt Makefile to fern documentation

* Do not create overlapping page names in fern documentation

This was causing 500 errors. Thank you to @dsinghvi for the troubleshooting and the help!

* Add a readme to help to understand how fern documentation work and how to add new pages

* Rework the welcome view

Redirects users directly to the installation guide, with links for people who are not familiar with documentation browsing.

* Simplify the quickstart guide

* PR feedback on installation guide

A ton of refactoring can still be done there

* PR feedback on ingestion

* PR feedback on ingestion splitting

* Rename section on LLM

* Fix missing word in list of LLMs

---------

Co-authored-by: dannysheridan <danny@buildwithfern.com>
2023-11-19 18:46:09 +01:00
Iván Martínez
57a829a8e8
Move fern workflows to root workflows folder (#1273)
* Move fern workflows to root workflows folder

* Only run fern actions when documentation has changed
2023-11-18 20:47:44 +01:00
Iván Martínez
8af5ed3347
Delete CNAME 2023-11-18 20:23:05 +01:00
lopagela
224812f7f6
Update to gradio 4 and allow upload multiple files at once in UI (#1271) 2023-11-18 20:19:43 +01:00
Iván Martínez
adaa00ccc8
Fix/readme UI image (#1272) 2023-11-18 20:19:03 +01:00
lopagela
99dc670df0
Add badges in the README.md (#1261)
Using https://shields.io/

For the complete list of available badges, cf. their documentation: https://shields.io/badges
2023-11-18 18:47:30 +01:00