# RWKV-4

This page covers how to use the `RWKV-4` wrapper within LangChain.
It is broken into two parts: installation and setup, and then usage with an example.

## Installation and Setup

- Install the Python package with `pip install rwkv`
- Install the tokenizer Python package with `pip install tokenizer`
- Download a [RWKV model](https://huggingface.co/BlinkDL/rwkv-4-raven/tree/main) and place it in your desired directory
- Download the [tokens file](https://raw.githubusercontent.com/BlinkDL/ChatRWKV/main/20B_tokenizer.json)

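Putting these steps together, a minimal bootstrap script might look like this (a sketch, not part of the wrapper; the `./models` and `./rwkv` directory layout is an assumption, and the model weights themselves must still be downloaded by hand):

```python
import os
import urllib.request

# Assumed layout -- any directories work, as long as the paths match what
# you later pass to the RWKV wrapper.
os.makedirs("./rwkv", exist_ok=True)
os.makedirs("./models", exist_ok=True)

# Fetch the tokenizer configuration for RWKV-4 models.
urllib.request.urlretrieve(
    "https://raw.githubusercontent.com/BlinkDL/ChatRWKV/main/20B_tokenizer.json",
    "./rwkv/20B_tokenizer.json",
)

# Model weights are several GB, so download them manually from the
# Hugging Face repository above, then confirm the file is in place.
model_path = "./models/RWKV-4-Raven-3B-v7-Eng-20230404-ctx4096.pth"
assert os.path.exists(model_path), f"download the model to {model_path} first"
```
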
## Usage
### RWKV
To use the RWKV wrapper, you need to provide the path to the pre-trained model file and the tokenizer's configuration.

```python
from langchain_community.llms import RWKV


def generate_prompt(instruction, input=None):
    if input:
        return f"""Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

# Instruction:
{instruction}

# Input:
{input}

# Response:
"""
    else:
        return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.

# Instruction:
{instruction}

# Response:
"""


model = RWKV(
    model="./models/RWKV-4-Raven-3B-v7-Eng-20230404-ctx4096.pth",
    strategy="cpu fp32",
    tokens_path="./rwkv/20B_tokenizer.json",
)

# Test the model
response = model.invoke(generate_prompt("Once upon a time, "))
```
## Model File
You can find links to model file downloads at the [RWKV-4-Raven](https://huggingface.co/BlinkDL/rwkv-4-raven/tree/main) repository.
### RWKV-4 models -> recommended VRAM
```
RWKV VRAM
Model | 8bit  | bf16/fp16 | fp32
14B   | 16GB  | 28GB      | >50GB
7B    | 8GB   | 14GB      | 28GB
3B    | 2.8GB | 6GB       | 12GB
1b5   | 1.3GB | 3GB       | 6GB
```
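
As a rough guide to reading this table, a helper along these lines can map a VRAM budget to a strategy string (a sketch: `pick_strategy` is hypothetical and not part of the `rwkv` package, though strategy strings such as `cuda fp16` and `cuda fp16i8` are documented by it):

```python
# Hypothetical helper: choose a strategy string from the VRAM table above.
VRAM_GB = {
    "14B": {"8bit": 16, "fp16": 28, "fp32": 50},
    "7B": {"8bit": 8, "fp16": 14, "fp32": 28},
    "3B": {"8bit": 2.8, "fp16": 6, "fp32": 12},
    "1b5": {"8bit": 1.3, "fp16": 3, "fp32": 6},
}


def pick_strategy(model_size: str, vram_gb: float) -> str:
    """Return the highest-precision GPU strategy that fits, else fall back to CPU."""
    table = VRAM_GB[model_size]
    if vram_gb >= table["fp16"]:
        return "cuda fp16"
    if vram_gb >= table["8bit"]:
        return "cuda fp16i8"  # weights quantized to 8 bits
    return "cpu fp32"


print(pick_strategy("3B", 8))  # -> cuda fp16
```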
See the [rwkv pip](https://pypi.org/project/rwkv/) page for more information about strategies, including streaming and CUDA support.
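
For example, moving the 3B model from the usage example above onto a GPU is just a change of `strategy` string (a sketch; per the table above, `cuda fp16` needs roughly 6GB of VRAM for the 3B model):

```python
from langchain_community.llms import RWKV

# Same model and tokenizer paths as the usage example; only the strategy changes.
model = RWKV(
    model="./models/RWKV-4-Raven-3B-v7-Eng-20230404-ctx4096.pth",
    strategy="cuda fp16",  # fp16 weights on the GPU instead of fp32 on the CPU
    tokens_path="./rwkv/20B_tokenizer.json",
)

response = model.invoke("Once upon a time, ")
```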