Mirror of https://github.com/hwchase17/langchain.git, synced 2025-06-25 08:03:39 +00:00
docs: Add goodfire notebook and add to packages.yml (#29512)
- **Description:** Add Goodfire ipynb notebook and add langchain-goodfire package to packages.yml
- **Issue:** n/a
- **Dependencies:** docs only
- **Twitter handle:** keenanpepper

Co-authored-by: Chester Curme <chester.curme@gmail.com>

parent a3c5e4d070
commit 2f97916dea

354 lines: docs/docs/integrations/chat/goodfire.ipynb (new file)
@@ -0,0 +1,354 @@
{
"cells": [
{
"cell_type": "raw",
"id": "afaf8039",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Goodfire\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# ChatGoodfire\n",
"\n",
"This will help you get started with Goodfire [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatGoodfire features and configurations, head to the [PyPI project page](https://pypi.org/project/langchain-goodfire/), or go directly to the [Goodfire SDK docs](https://docs.goodfire.ai/sdk-reference/example). All of the Goodfire-specific functionality (e.g. SAE features, variants, etc.) is available via the main `goodfire` package. This integration is a wrapper around the Goodfire SDK.\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatGoodfire](https://python.langchain.com/api_reference/goodfire/chat_models/langchain_goodfire.chat_models.ChatGoodfire.html) | [langchain-goodfire](https://python.langchain.com/api_reference/goodfire/) | ❌ | ❌ | ❌ |  |  |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |\n",
"\n",
"## Setup\n",
"\n",
"To access Goodfire models you'll need to create a Goodfire account, get an API key, and install the `langchain-goodfire` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"Head to [Goodfire Settings](https://platform.goodfire.ai/organization/settings/api-keys) to sign up for Goodfire and generate an API key. Once you've done this, set the `GOODFIRE_API_KEY` environment variable."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if not os.getenv(\"GOODFIRE_API_KEY\"):\n",
"    os.environ[\"GOODFIRE_API_KEY\"] = getpass.getpass(\"Enter your Goodfire API key: \")"
]
},
{
"cell_type": "markdown",
"id": "72ee0c4b-9764-423a-9dbf-95129e185210",
"metadata": {},
"source": [
"If you want to get automated tracing of your model calls, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
]
},
{
"cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain Goodfire integration lives in the `langchain-goodfire` package:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install -qU langchain-goodfire"
]
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.\n"
]
}
],
"source": [
"import goodfire\n",
"from langchain_goodfire import ChatGoodfire\n",
"\n",
"base_variant = goodfire.Variant(\"meta-llama/Llama-3.3-70B-Instruct\")\n",
"\n",
"llm = ChatGoodfire(\n",
"    model=base_variant,\n",
"    temperature=0,\n",
"    max_completion_tokens=1000,\n",
"    seed=42,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "2b4f3e15",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"J'adore la programmation.\", additional_kwargs={}, response_metadata={}, id='run-8d43cf35-bce8-4827-8935-c64f8fb78cd0-0', usage_metadata={'input_tokens': 51, 'output_tokens': 39, 'total_tokens': 90})"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
"    (\n",
"        \"system\",\n",
"        \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
"    ),\n",
"    (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = await llm.ainvoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"J'adore la programmation.\n"
]
}
],
"source": [
"print(ai_msg.content)"
]
},
{
"cell_type": "markdown",
"id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Ich liebe das Programmieren. How can I help you with programming today?', additional_kwargs={}, response_metadata={}, id='run-03d1a585-8234-46f1-a8df-bf9143fe3309-0', usage_metadata={'input_tokens': 46, 'output_tokens': 46, 'total_tokens': 92})"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate(\n",
"    [\n",
"        (\n",
"            \"system\",\n",
"            \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
"        ),\n",
"        (\"human\", \"{input}\"),\n",
"    ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"await chain.ainvoke(\n",
"    {\n",
"        \"input_language\": \"English\",\n",
"        \"output_language\": \"German\",\n",
"        \"input\": \"I love programming.\",\n",
"    }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "d1ee55bc-ffc8-4cfa-801c-993953a08cfd",
"metadata": {},
"source": [
"## Goodfire-specific functionality\n",
"\n",
"To use Goodfire-specific functionality such as SAE features and variants, you can use the `goodfire` package directly."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "3aef9e0a",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"FeatureGroup([\n",
" 0: \"The assistant should adopt the persona of a pirate\",\n",
" 1: \"The assistant should roleplay as a pirate\",\n",
" 2: \"The assistant should engage with pirate-themed content or roleplay as a pirate\",\n",
" 3: \"The assistant should roleplay as a character\",\n",
" 4: \"The assistant should roleplay as a specific character\",\n",
" 5: \"The assistant should roleplay as a game character or NPC\",\n",
" 6: \"The assistant should roleplay as a human character\",\n",
" 7: \"Requests for the assistant to roleplay or pretend to be something else\",\n",
" 8: \"Requests for the assistant to roleplay or pretend to be something\",\n",
" 9: \"The assistant is being assigned a role or persona to roleplay\"\n",
"])"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"client = goodfire.Client(api_key=os.environ[\"GOODFIRE_API_KEY\"])\n",
"\n",
"pirate_features = client.features.search(\n",
"    \"assistant should roleplay as a pirate\", base_variant\n",
")\n",
"pirate_features"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "52f03a00",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Why did the scarecrow win an award? Because he was outstanding in his field! Arrr! Hope that made ye laugh, matey!', additional_kwargs={}, response_metadata={}, id='run-7d8bd30f-7f80-41cb-bdb6-25c29c22a7ce-0', usage_metadata={'input_tokens': 35, 'output_tokens': 60, 'total_tokens': 95})"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"pirate_variant = goodfire.Variant(\"meta-llama/Llama-3.3-70B-Instruct\")\n",
"\n",
"pirate_variant.set(pirate_features[0], 0.4)\n",
"pirate_variant.set(pirate_features[1], 0.3)\n",
"\n",
"await llm.ainvoke(\"Tell me a joke\", model=pirate_variant)"
]
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatGoodfire features and configurations, head to the [API reference](https://python.langchain.com/api_reference/goodfire/chat_models/langchain_goodfire.chat_models.ChatGoodfire.html)."
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.8"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
14 lines: docs/docs/integrations/providers/goodfire.mdx (new file)

@@ -0,0 +1,14 @@
# Goodfire

[Goodfire](https://www.goodfire.ai/) is a research lab focused on AI safety and interpretability.

## Installation and Setup

```bash
pip install langchain-goodfire
```

## Chat models

See detail on available chat models [here](/docs/integrations/chat/goodfire).
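A minimal usage sketch, mirroring the notebook linked above and assuming the `GOODFIRE_API_KEY` environment variable is already set (the model name and the `main` wrapper here are illustrative, not additional API surface):

```python
import asyncio

import goodfire
from langchain_goodfire import ChatGoodfire


async def main() -> None:
    # Goodfire models are addressed through a goodfire.Variant of a base model.
    base_variant = goodfire.Variant("meta-llama/Llama-3.3-70B-Instruct")
    llm = ChatGoodfire(model=base_variant)

    # The integration is async-native, so ainvoke is used here.
    ai_msg = await llm.ainvoke("Translate 'I love programming.' into French.")
    print(ai_msg.content)


asyncio.run(main())
```

Goodfire-specific capabilities such as SAE feature search and variant steering are accessed through the `goodfire` SDK directly, as shown in the chat model page linked above.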
packages.yml

@@ -375,3 +375,8 @@ packages:
  path: .
  repo: Amitgb14/langchain_jenkins
  downloads: 0
- name: langchain-goodfire
  path: .
  repo: keenanpepper/langchain-goodfire
  downloads: 51
  downloads_updated_at: '2025-01-30T00:00:00+00:00'