langchain/docs/extras
eryk-dsai 7f5713b80a
feat: grammar-based sampling in llama-cpp (#9712)
## Description 

This PR enables [grammar-based
sampling](https://github.com/ggerganov/llama.cpp/tree/master/grammars)
in the llama-cpp LLM.

In short, loading a file with a formal grammar definition constrains the
model's outputs. For instance, one can force the model to generate only
valid JSON or only Python lists; see the sketch below.
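
A minimal sketch of how this could be used, assuming the PR exposes a `grammar_path` parameter on `LlamaCpp` that points to a GBNF grammar file (such as the `json.gbnf` shipped in the llama.cpp repository); the model path and parameter defaults are placeholders, not the definitive API.

```python
# Sketch: constrain LlamaCpp output with a GBNF grammar (paths are placeholders).
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="/path/to/llama-2-7b.Q4_K_M.gguf",  # local model weights (placeholder)
    grammar_path="/path/to/json.gbnf",             # formal grammar, e.g. llama.cpp's json.gbnf
    n_ctx=2048,
    temperature=0.0,
)

# The grammar constrains sampling, so the completion should be valid JSON.
result = llm("Describe a person in JSON format:")
print(result)
```

Swapping in a grammar like `list.gbnf` would instead restrict the output to list-shaped completions.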

In a follow-up PR we will add:
* docs explaining why this is useful and how it works
* possibly a code sample for a task like those in the llama.cpp repo

---------

Co-authored-by: Lance Martin <lance@langchain.dev>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-28 09:52:55 -07:00
| Directory | Last commit | Date |
| --- | --- | --- |
| _templates | Update Integrations links (#8206) | 2023-07-24 21:20:32 -07:00 |
| additional_resources | Added In-Depth Langchain Agent Execution Guide (#9507) | 2023-08-20 15:59:01 -07:00 |
| ecosystem | Added a link to the dependencies document (#9703) | 2023-08-24 07:23:48 -07:00 |
| guides | fix broken wandb link in debugging page (#9771) | 2023-08-25 15:34:08 -07:00 |
| integrations | feat: grammar-based sampling in llama-cpp (#9712) | 2023-08-28 09:52:55 -07:00 |
| modules | docs: Fix a spelling mistake in adding_memory.ipynb (#9794) | 2023-08-26 12:04:43 -07:00 |
| use_cases | typo: funtions --> functions (#9784) | 2023-08-26 11:47:47 -07:00 |