Commit Graph

  • 774e256052 fix: Fixed docker-compose (#1758) Marco Repetto 2024-03-20 21:36:45 +01:00
  • 6f6c785dac feat(llm): Ollama timeout setting (#1773) Iván Martínez 2024-03-20 21:33:46 +01:00
  • c2d694852b feat: wipe per storage type (#1772) Brett England 2024-03-20 16:31:44 -04:00
  • 7d2de5c96f fix(ingest): update script label (#1770) Ikko Eltociear Ashimine 2024-03-21 04:23:08 +09:00
  • 348df781b5 feat(UI): Faster startup and document listing (#1763) Iván Martínez 2024-03-20 19:11:44 +01:00
  • 572518143a feat(docs): Feature/upgrade docs (#1741) Iván Martínez 2024-03-19 21:26:53 +01:00
  • 134fc54d7d feat(ingest): Created a faster ingestion mode - pipeline (#1750) Brett England 2024-03-19 16:24:46 -04:00
  • 1efac6a3fe feat(llm - embed): Add support for Azure OpenAI (#1698) Otto L 2024-03-15 16:49:50 +01:00
  • 258d02d87c fix(docs): Minor documentation amendment (#1739) Brett England 2024-03-15 11:36:32 -04:00
  • 63de7e4930 feat: unify settings for vector and nodestore connections to PostgreSQL (#1730) Brett England 2024-03-15 04:55:17 -04:00
  • 68b3a34b03 feat(nodestore): add Postgres for the doc and index store (#1706) Brett England 2024-03-14 12:12:33 -04:00
  • d17c34e81a fix(settings): set default tokenizer to avoid running make setup fail (#1709) Iván Martínez 2024-03-13 09:53:40 +01:00
  • 84ad16af80 feat(docs): upgrade fern (#1596) Andrew Jiang 2024-03-12 09:02:56 +11:00
  • 821bca32e9 feat(local): tiktoken cache within repo for offline (#1467) Arun Yadav 2024-03-12 03:25:13 +05:30
  • 02dc83e8e9 feat(llm): adds several settings for llamacpp and ollama (#1703) icsy7867 2024-03-11 17:51:05 -04:00
  • 410bf7a71f feat(ui): maintain score order when curating sources (#1643) Hoffelhas 2024-03-11 22:27:30 +01:00
  • 290b9fb084 feat(ui): add sources check to not repeat identical sources (#1705) icsy7867 2024-03-11 17:24:18 -04:00
  • 77d43ef31c Update installation doc feature/tensorrt-support imartinez 2024-03-08 00:55:51 +01:00
  • a93db2850c Stop using the fake tensorrt-llm package. Update documentation. imartinez 2024-03-08 00:52:01 +01:00
  • 937c52354b Lower min python version to 3.10 imartinez 2024-03-07 22:27:12 +01:00
  • 1b03b369c0 chore(main): release 0.4.0 (#1628) v0.4.0 github-actions[bot] 2024-03-06 17:53:35 +01:00
  • 45f05711eb feat: Upgrade to LlamaIndex to 0.10 (#1663) Iván Martínez 2024-03-06 17:51:30 +01:00
  • 3aaf4d682b Merge branch 'feature/upgrade-llamaindex' into feature/tensorrt-support imartinez 2024-03-01 10:01:00 +01:00
  • 274c386312 Fix error comment for openailike imartinez 2024-03-01 09:33:11 +01:00
  • e1456c13fe Windows note for setting env vars imartinez 2024-03-01 09:31:08 +01:00
  • 8fad1966f0 Update setup script to point to the new settings imartinez 2024-03-01 09:15:43 +01:00
  • 70ca241a8b Fix mypy imartinez 2024-02-29 19:44:32 +01:00
  • a7b18058b5 Support for Nvidia TensorRT imartinez 2024-02-29 19:41:58 +01:00
  • c3fe36e070 Separate local mode into llms-llama-cpp and embeddings-huggingface for clarity imartinez 2024-02-29 16:40:11 +01:00
  • 85276893a3 Fix actions and dockerfiles imartinez 2024-02-29 15:38:32 +01:00
  • 63d3b9f936 Fix mypy imartinez 2024-02-29 15:20:26 +01:00
  • 34d48d7b4d Format fixes imartinez 2024-02-29 14:50:47 +01:00
  • 8c390812ff Documentation updates and default settings reviewed imartinez 2024-02-29 14:48:55 +01:00
  • 3373e80850 Extract optional dependencies imartinez 2024-02-28 20:28:30 +01:00
  • d0a7d991a2 Working refactor. Dependency clean-up pending. imartinez 2024-02-28 18:45:54 +01:00
  • 12f3a39e8a Update x handle to zylon private gpt (#1644) Daniel Gallego Vico 2024-02-23 15:51:35 +01:00
  • cd40e3982b feat(Vector): support pgvector (#1624) TQ 2024-02-20 22:29:26 +08:00
  • 066ea5bf28 chore(main): release 0.3.0 (#1413) v0.3.0 github-actions[bot] 2024-02-16 17:42:39 +01:00
  • aa13afde07 feat(UI): Select file to Query or Delete + Delete ALL (#1612) Iván Martínez 2024-02-16 17:36:09 +01:00
  • 24fb80ca38 fix(UI): Updated ui.py. Frees up the CPU to not be bottlenecked. icsy7867 2024-02-16 06:52:14 -05:00
  • 6bbec79583 feat(llm): Add support for Ollama LLM (#1526) Ygal Blum 2024-02-09 16:50:50 +02:00
  • b178b51451 feat(bulk-ingest): Add --ignored Flag to Exclude Specific Files and Directories During Ingestion (#1432) Nick Smirnov 2024-02-07 21:59:32 +03:00
  • 24fae660e6 feat: Add stream information to generate SDKs (#1569) Iván Martínez 2024-02-02 16:14:22 +01:00
  • 3e67e21d38 Add embedding mode config (#1541) Pablo Orgaz 2024-01-25 10:55:32 +01:00
  • 869233f0e4 fix: Adding an LLM param to fix broken generator from llamacpp (#1519) Naveen Kannan 2024-01-17 12:10:45 -05:00
  • e326126d0d feat: add mistral + chatml prompts (#1426) CognitiveTech 2024-01-16 16:51:14 -05:00
  • 6191bcdbd6 fix: minor bug in chat stream output - python error being serialized (#1449) Robert Gay 2024-01-16 07:41:20 -08:00
  • d3acd85fe3 fix(tests): load the test settings only when running tests Iván Martínez 2024-01-09 12:03:16 +01:00
  • 0a89d76cc5 fix(docs): Update quickstart doc and set version in pyproject.toml to 0.2.0 Guido Schulz 2023-12-26 13:09:31 +01:00
  • 2d27a9f956 feat(llm): Add openailike llm mode (#1447) Matthew Hill 2023-12-26 04:26:08 -05:00
  • fee9f08ef3 Move back to 3900 for the context window to avoid melting local machines imartinez 2023-12-22 18:21:43 +01:00
  • fde2b942bc fix(deploy): fix local and external dockerfiles Iván Martínez 2023-12-22 14:16:46 +01:00
  • 4c69c458ab Improve ingest logs (#1438) Iván Martínez 2023-12-21 17:13:46 +01:00
  • 4780540870 feat(settings): Configurable context_window and tokenizer (#1437) Iván Martínez 2023-12-21 14:49:35 +01:00
  • 6eeb95ec7f feat(API): Ingest plain text (#1417) Iván Martínez 2023-12-18 21:47:05 +01:00
  • 059f35840a fix(docker): docker broken copy (#1419) Pablo Orgaz 2023-12-18 16:55:18 +01:00
  • 8ec7cf49f4 feat(settings): Update default model to TheBloke/Mistral-7B-Instruct-v0.2-GGUF (#1415) Iván Martínez 2023-12-17 16:11:08 +01:00
  • c71ae7cee9 feat(ui): make chat area stretch to fill the screen (#1397) Rohit Das 2023-12-17 16:32:13 +05:30
  • 2564f8d2bb fix(settings): correct yaml multiline string (#1403) cognitivetech 2023-12-16 13:02:46 -05:00
  • 4e496e970a docs: remove misleading comment about pgpt working with python 3.12 (#1394) Eliott Bouhana 2023-12-15 21:35:02 +01:00
  • 3582764801 ci: fix preview docs checkout ref (#1393) Federico Grandi 2023-12-12 20:33:34 +01:00
  • 1d28ae2915 docs: fix minor capitalization typo (#1392) Federico Grandi 2023-12-12 20:31:38 +01:00
  • e8ac51bba4 chore(main): release 0.2.0 (#1387) v0.2.0 github-actions[bot] 2023-12-10 20:08:12 +01:00
  • 145f3ec9f4 feat(ui): Allows User to Set System Prompt via "Additional Options" in Chat Interface (#1353) 3ly-13 2023-12-10 12:45:14 -06:00
  • a072a40a7c Allow setting OpenAI model in settings (#1386) 3ly-13 2023-12-09 13:13:00 -06:00
  • a3ed14c58f feat(llm): drop default_system_prompt (#1385) Louis Melchior 2023-12-08 23:13:51 +01:00
  • f235c50be9 Delete old docs (#1384) Iván Martínez 2023-12-08 22:39:23 +01:00
  • 9302620eac Adding german speaking model to documentation (#1374) EEmlan 2023-12-08 11:26:25 +01:00
  • 9cf972563e Add setup option to Makefile (#1368) Max Zangs 2023-12-08 10:34:12 +01:00
  • 99a870bcee Disable mypy on a file that is specific to local mode more-prompt-format Louis 2023-12-03 19:12:40 +01:00
  • 29816d8a3a Make test optional for prompt_helper if no llama_cpp is installed Louis 2023-12-03 19:03:34 +01:00
  • af1463637b Update poetry lock, and fix run for template prompt format Louis 2023-12-03 18:46:18 +01:00
  • 5bc5054000 Fix typing, linting and add tests Louis 2023-12-03 16:31:20 +01:00
  • 76faffb269 WIP more prompt format, and more maintainable Louis 2023-12-03 00:48:43 +01:00
  • 3d301d0c6f chore(main): release 0.1.0 (#1094) v0.1.0 github-actions[bot] 2023-12-01 14:45:54 +01:00
  • 56af625d71 Fix the parallel ingestion mode, and make it available through conf (#1336) lopagela 2023-11-30 11:41:55 +01:00
  • b7ca7d35a0 Update ingest api docs with Windows support (#1289) Francisco García Sierra 2023-11-29 20:56:37 +01:00
  • 28d03fdda8 Adding working combination of LLM and Embedding Model to recipes (#1315) ishaandatta 2023-11-30 01:24:22 +05:30
  • aabdb046ae Add docker compose (#1277) Phi Long 2023-11-29 23:46:40 +08:00
  • 64ed9cd872 Allow passing a system prompt (#1318) Iván Martínez 2023-11-29 15:51:19 +01:00
  • 9c192ddd73 Added max_new_tokens as a config option to llm yaml block (#1317) Gianni Acquisto 2023-11-26 19:17:29 +01:00
  • baf29f06fa Adding docs about embeddings settings + adding the embedding.mode: local in mock profile (#1316) Gianni Acquisto 2023-11-26 17:32:11 +01:00
  • bafdd3baf1 Ingestion Speedup Multiple strategy (#1309) lopagela 2023-11-25 20:12:09 +01:00
  • 546ba33e6f Update readme with supporters info (#1311) Iván Martínez 2023-11-25 18:35:59 +01:00
  • 944c43bfa8 Multi language support - fern debug (#1307) Iván Martínez 2023-11-25 14:34:23 +01:00
  • e8d88f8952 Update preview-docs.yml Iván Martínez 2023-11-25 10:14:04 +01:00
  • c6d6e0e71b Update preview-docs.yml to enable debug Iván Martínez 2023-11-24 17:51:33 +01:00
  • 510caa576b Make qdrant the default vector db (#1285) Iván Martínez 2023-11-20 16:19:22 +01:00
  • f1cbff0fb7 fix: Windows permission error on ingest service tmp files (#1280) Francisco García Sierra 2023-11-20 10:08:03 +01:00
  • a09cd7a892 Update llama_index to 0.9.3 (#1278) lopagela 2023-11-19 18:49:36 +01:00
  • 36f69eed0f Refactor documentation architecture (#1264) lopagela 2023-11-19 18:46:09 +01:00
  • 57a829a8e8 Move fern workflows to root workflows folder (#1273) Iván Martínez 2023-11-18 20:47:44 +01:00
  • 8af5ed3347 Delete CNAME Iván Martínez 2023-11-18 20:23:05 +01:00
  • 224812f7f6 Update to gradio 4 and allow upload multiple files at once in UI (#1271) lopagela 2023-11-18 20:19:43 +01:00
  • adaa00ccc8 Fix/readme UI image (#1272) Iván Martínez 2023-11-18 20:19:03 +01:00
  • 99dc670df0 Add badges in the README.md (#1261) lopagela 2023-11-18 18:47:30 +01:00
  • f7d7b6cd4b Fixed the avatar of the box by using a local file (#1266) lopagela 2023-11-18 12:29:27 +01:00
  • 0d520026a3 fix: Windows 11 failing to auto-delete tmp file (#1260) Pablo Orgaz 2023-11-17 18:23:57 +01:00
  • 4197ada626 feat: enable resume download for hf_hub_download (#1249) Lai Zn 2023-11-17 07:13:11 +08:00
  • 09d9a91946 Create CNAME Iván Martínez 2023-11-17 00:07:50 +01:00
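A listing in roughly this shape can be reproduced from any checkout of the repository with git's pretty-format placeholders. This is a sketch, not the exact command behind this page: `--abbrev=10` matches the hash width shown, `%d` prints ref decorations (the `v0.4.0` tags and branch names above), and `--date=iso` yields `2024-03-20 21:36:45 +0100` rather than the `+01:00` offset style this UI renders.

```shell
# Approximate reconstruction of the commit graph listing above.
# %h = abbreviated hash, %s = subject, %d = ref names (tags/branches),
# %an = author name, %ad = author date (formatted per --date).
git log --abbrev=10 --date=iso \
  --pretty=format:'%h %s%d %an %ad'
```

Swapping `--date=iso` for `--date=format:'%Y-%m-%d %H:%M:%S %z'` gives the same space-separated layout with explicit control over each field.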