{
"cells": [
{
"cell_type": "raw",
"id": "afaf8039",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Cohere\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "9a3d6f34",
"metadata": {},
"source": [
"# CohereEmbeddings\n",
"\n",
"This will help you get started with Cohere embedding models using LangChain. For detailed documentation on `CohereEmbeddings` features and configuration options, please refer to the [API reference](https://python.langchain.com/api_reference/cohere/embeddings/langchain_cohere.embeddings.CohereEmbeddings.html).\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"import { ItemTable } from \"@theme/FeatureTables\";\n",
"\n",
"<ItemTable category=\"text_embedding\" item=\"Cohere\" />\n",
"\n",
"## Setup\n",
"\n",
"To access Cohere embedding models you'll need to create a Cohere account, get an API key, and install the `langchain-cohere` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"\n",
"Head to [cohere.com](https://cohere.com) to sign up for Cohere and generate an API key. Once you’ve done this, set the `COHERE_API_KEY` environment variable:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "36521c2a",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if not os.getenv(\"COHERE_API_KEY\"):\n",
"    os.environ[\"COHERE_API_KEY\"] = getpass.getpass(\"Enter your Cohere API key: \")"
]
},
{
"cell_type": "markdown",
"id": "c84fb993",
"metadata": {},
"source": "To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
},
{
"cell_type": "code",
"execution_count": 9,
"id": "39a4953b",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
]
},
{
"cell_type": "markdown",
"id": "d9664366",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain Cohere integration lives in the `langchain-cohere` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "64853226",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-cohere"
]
},
{
"cell_type": "markdown",
"id": "45dd1724",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate embeddings:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "9ea7a09b",
"metadata": {},
"outputs": [],
"source": [
"from langchain_cohere import CohereEmbeddings\n",
"\n",
"embeddings = CohereEmbeddings(\n",
"    model=\"embed-english-v3.0\",\n",
")"
]
},
{
"cell_type": "markdown",
"id": "77d271b6",
"metadata": {},
"source": [
"## Indexing and Retrieval\n",
"\n",
"Embedding models are often used in retrieval-augmented generation (RAG) flows, both for indexing data and for retrieving it later. For more detailed instructions, please see our [RAG tutorials](/docs/tutorials/).\n",
"\n",
"Below, see how to index and retrieve data using the `embeddings` object we initialized above. In this example, we will index and retrieve a sample document in the `InMemoryVectorStore`."
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "d817716b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'LangChain is the framework for building context-aware reasoning applications'"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Create a vector store with a sample text\n",
"from langchain_core.vectorstores import InMemoryVectorStore\n",
"\n",
"text = \"LangChain is the framework for building context-aware reasoning applications\"\n",
"\n",
"vectorstore = InMemoryVectorStore.from_texts(\n",
"    [text],\n",
"    embedding=embeddings,\n",
")\n",
"\n",
"# Use the vectorstore as a retriever\n",
"retriever = vectorstore.as_retriever()\n",
"\n",
"# Retrieve the most similar text\n",
"retrieved_documents = retriever.invoke(\"What is LangChain?\")\n",
"\n",
"# Show the retrieved document's content\n",
"retrieved_documents[0].page_content"
]
},
{
"cell_type": "markdown",
"id": "e02b9855",
"metadata": {},
"source": [
"## Direct Usage\n",
"\n",
"Under the hood, the vectorstore and retriever implementations are calling `embeddings.embed_documents(...)` and `embeddings.embed_query(...)` to create embeddings for the text(s) used in `from_texts` and retrieval `invoke` operations, respectively.\n",
"\n",
"You can directly call these methods to get embeddings for your own use cases.\n",
"\n",
"### Embed single texts\n",
"\n",
"You can embed single texts or documents with `embed_query`:"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "0d2befcd",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[-0.022979736, -0.030212402, -0.08886719, -0.08569336, 0.007030487, -0.0010671616, -0.033813477, 0.0\n"
]
}
],
"source": [
"single_vector = embeddings.embed_query(text)\n",
"print(str(single_vector)[:100])  # Show the first 100 characters of the vector"
]
},
{
"cell_type": "markdown",
"id": "1b5a7d03",
"metadata": {},
"source": [
"### Embed multiple texts\n",
"\n",
"You can embed multiple texts with `embed_documents`:"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "2f4d6e97",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[-0.028869629, -0.030410767, -0.099121094, -0.07116699, -0.012748718, -0.0059432983, -0.04360962, 0.\n",
"[-0.047332764, -0.049957275, -0.07458496, -0.034332275, -0.057922363, -0.0112838745, -0.06994629, 0.\n"
]
}
],
"source": [
"text2 = (\n",
"    \"LangGraph is a library for building stateful, multi-actor applications with LLMs\"\n",
")\n",
"two_vectors = embeddings.embed_documents([text, text2])\n",
"for vector in two_vectors:\n",
"    print(str(vector)[:100])  # Show the first 100 characters of the vector"
]
},
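{
"cell_type": "markdown",
"id": "async-usage-md",
"metadata": {},
"source": [
"### Async usage\n",
"\n",
"As a minimal sketch (not part of the executed walkthrough above), the cell below assumes the standard async counterparts `aembed_query` and `aembed_documents` from LangChain's `Embeddings` interface and reuses the `text` and `text2` variables defined earlier:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "async-usage-code",
"metadata": {},
"outputs": [],
"source": [
"# Minimal async sketch: assumes the aembed_query/aembed_documents methods of the\n",
"# Embeddings interface; Jupyter notebooks support top-level await.\n",
"async_single = await embeddings.aembed_query(text)\n",
"async_many = await embeddings.aembed_documents([text, text2])\n",
"\n",
"print(str(async_single)[:100])  # Show the first 100 characters of the vector\n",
"for vector in async_many:\n",
"    print(str(vector)[:100])  # Show the first 100 characters of each vector"
]
},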
{
"cell_type": "markdown",
"id": "98785c12",
"metadata": {},
"source": [
"## API Reference\n",
"\n",
"For detailed documentation on `CohereEmbeddings` features and configuration options, please refer to the [API reference](https://python.langchain.com/api_reference/cohere/embeddings/langchain_cohere.embeddings.CohereEmbeddings.html).\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}