* Extract optional dependencies
* Separate local mode into llms-llama-cpp and embeddings-huggingface for clarity
* Support Ollama embeddings
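  For illustration, Ollama embeddings in llama-index can be exercised roughly as below. The `OllamaEmbedding` component comes from the `llama-index-embeddings-ollama` package, and the model name is an example, not something this changelog pins down.

  ```python
  # Sketch (assumptions: an Ollama server running on its default port and the
  # llama-index-embeddings-ollama package installed; "nomic-embed-text" is an
  # example model, not mandated by this change).
  from llama_index.embeddings.ollama import OllamaEmbedding

  embed_model = OllamaEmbedding(
      model_name="nomic-embed-text",
      base_url="http://localhost:11434",  # default Ollama endpoint
  )

  vector = embed_model.get_text_embedding("PrivateGPT ingests documents locally.")
  print(len(vector))  # embedding dimensionality, model-dependent
  ```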
* Upgrade to llamaindex 0.10.14. Remove legacy use of ServiceContext in ContextChatEngine
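  In llama-index 0.10 the `ServiceContext` bundle is deprecated; components such as the LLM are passed to the chat engine directly. A minimal, self-contained sketch of that pattern follows (using mock components so it runs offline; this is not the exact PrivateGPT code):

  ```python
  # Sketch: building a ContextChatEngine without the legacy ServiceContext.
  from llama_index.core import Document, MockEmbedding, VectorStoreIndex
  from llama_index.core.chat_engine import ContextChatEngine
  from llama_index.core.llms import MockLLM
  from llama_index.core.memory import ChatMemoryBuffer

  # Build a tiny index with a mock embedding model so the example runs offline.
  index = VectorStoreIndex.from_documents(
      [Document(text="PrivateGPT ingests documents locally.")],
      embed_model=MockEmbedding(embed_dim=8),
  )

  # The LLM is now passed directly to the chat engine, no ServiceContext.
  chat_engine = ContextChatEngine.from_defaults(
      retriever=index.as_retriever(),
      llm=MockLLM(),
      memory=ChatMemoryBuffer.from_defaults(token_limit=1500),
  )
  print(chat_engine.chat("What does PrivateGPT do?"))
  ```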
* Fix vector retriever filters
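  For context, llama-index retriever filters look roughly like this; the `doc_id` key and values are illustrative, and the actual fix is not reproduced here. The snippet reuses the `index` built in the previous sketch.

  ```python
  # Sketch: restricting a vector retriever to specific documents via metadata
  # filters. "doc_id" is an example key; PrivateGPT's real filter keys may differ.
  from llama_index.core.vector_stores import (
      ExactMatchFilter,
      FilterCondition,
      MetadataFilters,
  )

  filters = MetadataFilters(
      filters=[
          ExactMatchFilter(key="doc_id", value="abc-123"),
          ExactMatchFilter(key="doc_id", value="def-456"),
      ],
      condition=FilterCondition.OR,  # match either document
  )

  retriever = index.as_retriever(filters=filters, similarity_top_k=4)
  ```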
I was misled into believing I could install using Python 3.12, whereas the pyproject.toml explicitly states otherwise. This PR only removes that comment, to make sure other people are not trapped as well 😄
* Added `max_new_tokens` as a configuration option to the `llm` block in settings
* Update fern/docs/pages/manual/settings.mdx
Co-authored-by: lopagela <lpglm@orange.fr>
* Update private_gpt/settings/settings.py
Add default value for max_new_tokens = 256
Co-authored-by: lopagela <lpglm@orange.fr>
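  A hedged sketch of what that settings field might look like; the class name and docstring here are assumptions, reduced from what private_gpt/settings/settings.py actually contains:

  ```python
  # Sketch: a pydantic settings field for max_new_tokens with the 256 default
  # mentioned above. The surrounding class is trimmed to the relevant field;
  # the real settings module defines many more options.
  from pydantic import BaseModel, Field

  class LLMSettings(BaseModel):
      max_new_tokens: int = Field(
          256,
          description="Maximum number of new tokens the LLM may generate per response.",
      )

  # The matching YAML in settings.yaml would then be, e.g.:
  #   llm:
  #     max_new_tokens: 256
  ```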
* Addressed location of docs comment
* Reformatting from running `make check`
* remove default config value from settings.yaml
---------
Co-authored-by: lopagela <lpglm@orange.fr>
* Refactor documentation architecture
Split into several `tabs` and sections
* Fix Fern's docs.yml after PR review
Thank you Danny!
Co-authored-by: dannysheridan <danny@buildwithfern.com>
* Re-add quickstart in the overview tab
It went missing after a refactoring of the doc architecture
* Documentation writing
* Adapt Makefile to fern documentation
* Do not create overlapping page names in fern documentation
This was causing 500 errors. Thank you to @dsinghvi for the troubleshooting and the help!
* Add a README to help people understand how the Fern documentation works and how to add new pages
* Rework the welcome view
Redirects users directly to the installation guide, with links for people who are not familiar with browsing documentation.
* Simplify the quickstart guide
* PR feedback on installation guide
A ton of refactoring can still be done there
* PR feedback on ingestion
* PR feedback on ingestion splitting
* Rename section on LLM
* Fix missing word in list of LLMs
---------
Co-authored-by: dannysheridan <danny@buildwithfern.com>