docs: update hf pipeline docs (#12908)
- **Description:** Noticed that the Hugging Face Pipeline documentation
was a bit out of date. Updated it with information about passing in an
existing pipeline directly (consistent with the docstring) and with a
recent contribution of mine that added support for multi-GPU
specification via Accelerate in
21eeba075c
parent 37da6e546b
commit 1eb7d3a862
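As a quick illustration of the multi-GPU path mentioned above: the updated GPU Inference cell (see the diff below) suggests replacing a fixed `device` argument with `device_map="auto"`. A minimal sketch, assuming `from_model_id` accepts `device_map` directly as that cell's comment implies; the model id and token budget are just the small values used elsewhere in the notebook:

from langchain.llms.huggingface_pipeline import HuggingFacePipeline

# device_map="auto" asks Accelerate to decide where to place the model weights
# (assumption: from_model_id forwards device_map, per the notebook's comment).
# Do not pass an explicit device= together with device_map.
multi_gpu_llm = HuggingFacePipeline.from_model_id(
    model_id="gpt2",
    task="text-generation",
    device_map="auto",
    pipeline_kwargs={"max_new_tokens": 10},
)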
@@ -41,7 +41,9 @@
    "id": "91ad075f-71d5-4bc8-ab91-cc0ad5ef16bb",
    "metadata": {},
    "source": [
-    "### Load the model"
+    "### Model Loading\n",
+    "\n",
+    "Models can be loaded by specifying the model parameters using the `from_model_id` method."
    ]
   },
   {
@@ -53,12 +55,12 @@
   },
    "outputs": [],
    "source": [
-    "from langchain.llms import HuggingFacePipeline\n",
+    "from langchain.llms.huggingface_pipeline import HuggingFacePipeline\n",
     "\n",
-    "llm = HuggingFacePipeline.from_model_id(\n",
-    "    model_id=\"bigscience/bloom-1b7\",\n",
+    "hf = HuggingFacePipeline.from_model_id(\n",
+    "    model_id=\"gpt2\",\n",
     "    task=\"text-generation\",\n",
-    "    model_kwargs={\"temperature\": 0, \"max_length\": 64},\n",
+    "    pipeline_kwargs={\"max_new_tokens\": 10},\n",
     ")"
    ]
   },
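For readability, the updated cell as plain Python (reconstructed from the escaped notebook JSON in the hunk above):

from langchain.llms.huggingface_pipeline import HuggingFacePipeline

# Download gpt2 from the Hugging Face Hub and wrap it in a text-generation pipeline.
hf = HuggingFacePipeline.from_model_id(
    model_id="gpt2",
    task="text-generation",
    pipeline_kwargs={"max_new_tokens": 10},
)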
@@ -66,6 +68,31 @@
    "cell_type": "markdown",
    "id": "00104b27-0c15-4a97-b198-4512337ee211",
    "metadata": {},
+   "source": [
+    "They can also be loaded by passing in an existing `transformers` pipeline directly"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from langchain.llms.huggingface_pipeline import HuggingFacePipeline\n",
+    "from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline\n",
+    "\n",
+    "model_id = \"gpt2\"\n",
+    "tokenizer = AutoTokenizer.from_pretrained(model_id)\n",
+    "model = AutoModelForCausalLM.from_pretrained(model_id)\n",
+    "pipe = pipeline(\n",
+    "    \"text-generation\", model=model, tokenizer=tokenizer, max_new_tokens=10\n",
+    ")\n",
+    "hf = HuggingFacePipeline(pipeline=pipe)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
    "source": [
     "### Create Chain\n",
     "\n",
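The new cell that passes an existing `transformers` pipeline, as plain Python (reconstructed from the added lines above):

from langchain.llms.huggingface_pipeline import HuggingFacePipeline
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
# Build the transformers pipeline yourself, then hand it to HuggingFacePipeline.
pipe = pipeline(
    "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=10
)
hf = HuggingFacePipeline(pipeline=pipe)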
@@ -87,7 +114,7 @@
     "Answer: Let's think step by step.\"\"\"\n",
     "prompt = PromptTemplate.from_template(template)\n",
     "\n",
-    "chain = prompt | llm\n",
+    "chain = prompt | hf\n",
     "\n",
     "question = \"What is electroencephalography?\"\n",
     "\n",
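Rendered as plain Python, the chain now pipes the prompt into `hf` (the HuggingFacePipeline built earlier). A sketch only: the full prompt template and the final invoke call sit outside this hunk's context, so the template body and the `chain.invoke(...)` line below are reconstructions of what the surrounding notebook most likely contains (the `{question}` variable is confirmed by the GPU cell further down):

from langchain.prompts import PromptTemplate

# Template body reconstructed from the visible last line of the hunk;
# the exact wording before "Answer:" is not shown in this diff.
template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate.from_template(template)

chain = prompt | hf

question = "What is electroencephalography?"

# Mirrors the invoke call shown in the GPU Inference cell below.
print(chain.invoke({"question": question}))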
@@ -98,6 +125,40 @@
    "cell_type": "markdown",
    "id": "dbbc3a37",
    "metadata": {},
+   "source": [
+    "### GPU Inference\n",
+    "\n",
+    "When running on a machine with GPU, you can specify the `device=n` parameter to put the model on the specified device.\n",
+    "Defaults to `-1` for CPU inference.\n",
+    "\n",
+    "If you have multiple GPUs and/or the model is too large for a single GPU, you can specify `device_map=\"auto\"`, which requires and uses the [Accelerate](https://huggingface.co/docs/accelerate/index) library to automatically determine how to load the model weights.\n",
+    "\n",
+    "*Note*: `device` and `device_map` should not be specified together; doing so can lead to unexpected behavior."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "gpu_llm = HuggingFacePipeline.from_model_id(\n",
+    "    model_id=\"gpt2\",\n",
+    "    task=\"text-generation\",\n",
+    "    device=0,  # replace with device_map=\"auto\" to use the accelerate library.\n",
+    "    pipeline_kwargs={\"max_new_tokens\": 10},\n",
+    ")\n",
+    "\n",
+    "gpu_chain = prompt | gpu_llm\n",
+    "\n",
+    "question = \"What is electroencephalography?\"\n",
+    "\n",
+    "print(gpu_chain.invoke({\"question\": question}))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
    "source": [
     "### Batch GPU Inference\n",
     "\n",
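The new GPU Inference cell as plain Python (reconstructed from the added lines above; `HuggingFacePipeline` is imported and `prompt` is defined in the earlier cells):

gpu_llm = HuggingFacePipeline.from_model_id(
    model_id="gpt2",
    task="text-generation",
    device=0,  # replace with device_map="auto" to use the accelerate library.
    pipeline_kwargs={"max_new_tokens": 10},
)

gpu_chain = prompt | gpu_llm

question = "What is electroencephalography?"

print(gpu_chain.invoke({"question": question}))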
@@ -147,7 +208,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-  "version": "3.8.10"
+  "version": "3.10.5"
  }
 },
 "nbformat": 4,