Compare commits


50 Commits

Author SHA1 Message Date
Eugene Yurtsev
9edffaaed2 x 2024-04-23 17:20:45 -04:00
Eugene Yurtsev
d09f8eebff x 2024-04-23 17:15:37 -04:00
Eugene Yurtsev
1b19f839f9 x 2024-04-23 17:12:44 -04:00
Eugene Yurtsev
e8d99c9620 x 2024-04-23 16:57:04 -04:00
Eugene Yurtsev
c52a84c5a3 x 2024-04-23 16:55:05 -04:00
Eugene Yurtsev
1ac61323d3 x 2024-04-23 16:41:30 -04:00
Eugene Yurtsev
f82b2f4a6f x 2024-04-23 16:41:06 -04:00
Eugene Yurtsev
3755822a2d x 2024-04-23 16:35:29 -04:00
Eugene Yurtsev
017ae731d4 x 2024-04-23 16:34:50 -04:00
Eugene Yurtsev
9ac0b0026b x 2024-04-23 16:34:13 -04:00
Eugene Yurtsev
59fbe77510 x 2024-04-23 16:27:53 -04:00
Eugene Yurtsev
8aea083bf3 Merge branch 'master' into eugene/move_memories_2 2024-04-23 16:11:25 -04:00
Eugene Yurtsev
a7c347ab35 langchain[patch]: Update evaluation logic that instantiates a default LLM (#20760)
Favor langchain_openai over langchain_community for evaluation logic.

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-04-23 16:09:32 -04:00
Eugene Yurtsev
72f720fa38 langchain[major]: Remove default instantiations of LLMs from VectorstoreToolkit (#20794)
Remove default instantiation from vectorstore toolkit.
2024-04-23 16:09:14 -04:00
ccurme
42de5168b1 langchain: deprecate LLMChain, RetrievalQA, and ConversationalRetrievalChain (#20751) 2024-04-23 15:55:34 -04:00
Eugene Yurtsev
aaf376a681 x 2024-04-23 15:54:48 -04:00
Erick Friis
30c7951505 core: use qualname in beta message (#20361) 2024-04-23 11:20:13 -07:00
Aliaksandr Kuzmik
5560cc448c community[patch]: fix CometTracer bug (#20796)
Hi! My name is Alex, I'm an SDK engineer from
[Comet](https://www.comet.com/site/)

This PR updates the `CometTracer` class.

Fixed an issue where `CometTracer` failed while logging data to Comet
because the data was not JSON-encodable.

The problem was in some of the `Run` attributes, which could contain
non-default types; these attributes are now taken not from the run
instance but from the `run.dict()` return value.
2024-04-23 13:24:41 -04:00
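The pattern behind this fix can be sketched generically (the `Run` class below is a hypothetical stand-in for illustration, not the real LangChain type): serialize through a `dict()` method that converts non-JSON types, rather than reading raw attributes.

```python
import json
from datetime import datetime, timezone


class Run:
    """Hypothetical run object whose attributes include non-JSON-encodable types."""

    def __init__(self) -> None:
        self.name = "my_chain"
        self.start_time = datetime(2024, 4, 23, tzinfo=timezone.utc)  # not JSON-encodable

    def dict(self) -> dict:
        # Serialize every attribute to a JSON-friendly representation.
        return {"name": self.name, "start_time": self.start_time.isoformat()}


run = Run()

# Reading the raw attribute fails: datetime cannot be JSON-encoded.
try:
    json.dumps({"start_time": run.start_time})
    raised = False
except TypeError:
    raised = True

# Reading from run.dict() succeeds, because values are already plain types.
payload = json.dumps(run.dict())
```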
Eugene Yurtsev
1c89e45c14 langchain[major]: breaks some chains to remove hidden defaults (#20759)
Breaks some chains in langchain to remove hidden chat model / llm instantiation.
2024-04-23 11:11:40 -04:00
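The general pattern this change enforces can be sketched with a toy example (names hypothetical, not the real chain API): require the caller to pass a model explicitly instead of silently instantiating a default behind their back.

```python
class FakeLLM:
    """Hypothetical stand-in for a chat model, for illustration only."""

    def invoke(self, prompt: str) -> str:
        return f"echo: {prompt}"


# Before: a chain silently created a default LLM when none was given,
# hiding the dependency (and its API-key / cost implications) from the caller.
def build_chain_with_hidden_default(llm=None):
    return llm or FakeLLM()  # hidden instantiation


# After: the dependency is explicit; there is no hidden default.
def build_chain(llm):
    if llm is None:
        raise ValueError("An LLM must be provided explicitly.")
    return llm


chain = build_chain(FakeLLM())
```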
Eugene Yurtsev
ad6b5f84e5 community[patch],core[minor]: Move in memory cache implementation to core (#20753)
This PR moves the InMemoryCache implementation from community to core.
2024-04-23 11:10:11 -04:00
Stefano Ottolenghi
4f67ce485a docs: Fix typo to render list (#20774)
This _should_ fix the currently broken list in the [Neo4jVector
page](https://python.langchain.com/docs/integrations/vectorstores/neo4jvector/).

![Screenshot from 2024-04-23
08-40-37](https://github.com/langchain-ai/langchain/assets/114478074/ab5ad622-879e-4764-93db-5f502eae479b)
2024-04-23 14:46:58 +00:00
Eugene Yurtsev
a2cc9b55ba core[patch]: Remove autoupgrade to addable dict in Runnable/RunnableLambda/RunnablePassthrough transform (#20677)
Causes an issue for this code

```python
from langchain.chat_models.openai import ChatOpenAI
from langchain.output_parsers.openai_tools import JsonOutputToolsParser
from langchain.schema import SystemMessage

prompt = SystemMessage(content="You are a nice assistant.") + "{question}"

llm = ChatOpenAI(
    model_kwargs={
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "web_search",
                    "description": "Searches the web for the answer to the question.",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "query": {
                                "type": "string",
                                "description": "The question to search for.",
                            },
                        },
                    },
                },
            }
        ],
    },
    streaming=True,
)

parser = JsonOutputToolsParser(first_tool_only=True)

llm_chain = prompt | llm | parser | (lambda x: x)


for chunk in llm_chain.stream({"question": "tell me more about turtles"}):
    print(chunk)

# message = llm_chain.invoke({"question": "tell me more about turtles"})

# print(message)
```

Instead, by definition, we'll assume that RunnableLambdas consume the
entire stream, and that if the stream isn't addable then the last
message of the stream is the one in the usable format.

---

If users want to use addable dicts, they can wrap the dict in an
AddableDict class.

---

We likely need to follow up with the same change in other places in the
code that do the upgrade.
2024-04-23 10:35:06 -04:00
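The addable-dict behavior referred to above can be illustrated with a minimal stand-in class (the real class lives in `langchain_core`; this sketch only shows the merging semantics):

```python
class AddableDict(dict):
    """Minimal sketch of an addable dict: `+` merges keys and combines
    values that themselves support `+`. Illustrative only."""

    def __add__(self, other: "AddableDict") -> "AddableDict":
        merged = AddableDict(self)
        for key, value in other.items():
            if key in merged:
                merged[key] = merged[key] + value  # e.g. string concatenation
            else:
                merged[key] = value
        return merged


# Streaming chunks wrapped this way accumulate instead of replacing each other.
chunk1 = AddableDict({"answer": "tur"})
chunk2 = AddableDict({"answer": "tles"})
total = chunk1 + chunk2
```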
Oleksandr Yaremchuk
9428923bab experimental[minor]: upgrade the prompt injection model (#20783)
- **Description:** In January, Laiyer.ai became part of ProtectAI, which
means the model became owned by ProtectAI. In addition to that,
yesterday, we released a new version of the model addressing
false-positive issues that the LangChain community and others reported
to us. The new model has better accuracy than the previous version, and
we thought the LangChain community would benefit from using the [latest
version of the
model](https://huggingface.co/protectai/deberta-v3-base-prompt-injection-v2).
- **Issue:** N/A
- **Dependencies:** N/A
- **Twitter handle:** @alex_yaremchuk
2024-04-23 10:23:39 -04:00
Eugene Yurtsev
645b1e142e core[minor],langchain[patch],community[patch]: Move InMemory and File implementations of Chat History to core (#20752)
This PR moves the implementations for chat history to core, so it's
easier to determine which dependencies need to be broken and where to
add deprecation warnings.
2024-04-23 10:22:11 -04:00
ccurme
7a922f3e48 core, openai: support custom token encoders (#20762) 2024-04-23 13:57:05 +00:00
Chen94yue
b481b73805 Update custom_retriever.ipynb (#20776)
Fixed an error in the sample code to ensure that the code can run
directly.

2024-04-23 13:47:08 +00:00
Bagatur
ed980601e1 docs: update examples in api ref (#20768) 2024-04-23 00:47:52 +00:00
Bagatur
be51cd3bc9 docs: fix api ref link autogeneration (#20766) 2024-04-22 17:36:41 -07:00
monke111
c807f0a6dd Update google_drive.ipynb (#20731)
langchain_community.document_loaders is deprecated; use the new
langchain_google_community package instead.

2024-04-22 23:30:46 +00:00
Katarina Supe
dc61e23886 docs: update Memgraph docs (#20736)
- **Description:** Memgraph Platform is run differently now, so I
updated this (I am a DX engineer at Memgraph).
2024-04-22 19:27:12 -04:00
Tabish Mir
6a0d44d632 docs: Fix link for partition_pdf in Semi_Structured_RAG.ipynb cookbook (#20763)
docs: Fix link for `partition_pdf` in Semi_Structured_RAG.ipynb cookbook

- **Description:** Fix incorrect link to unstructured-io `partition_pdf`
section
2024-04-22 23:22:55 +00:00
Bagatur
fa4d6f9f8b docs: install partner pkgs vercel (#20761) 2024-04-22 23:08:02 +00:00
Christophe Bornet
0ae5027d98 community[patch]: Remove usage of deprecated StoredBlobHistory in CassandraChatMessageHistory (#20666) 2024-04-22 17:11:05 -04:00
Bagatur
eb18f4e155 infra: rm sep repo partner dirs (#20756)
so you can `poetry run pip install -e libs/partners/*/` to your heart's
content
2024-04-22 14:05:39 -07:00
Bagatur
2a11a30572 docs: automatically add api ref links (#20755)
![Screenshot 2024-04-22 at 1 51 13
PM](https://github.com/langchain-ai/langchain/assets/22008038/b8b09fec-3800-4b97-bd26-5571b8308f4a)
2024-04-22 14:05:29 -07:00
Eugene Yurtsev
936c6cc74a langchain[patch]: Add missing deprecation for openai adapters (#20668)
Add missing deprecation for openai adapters
2024-04-22 14:05:55 -04:00
Eugene Yurtsev
38adbfdf34 community[patch],core[minor]: Move BaseToolKit to core.tools (#20669) 2024-04-22 14:04:30 -04:00
Mark Needham
ce23f8293a Community patch clickhouse make it possible to not specify index (#20460)
Vector indexes in ClickHouse are experimental at the moment and can
sometimes break/change behaviour. So this PR makes it possible to say
that you don't want to specify an index type.

Any queries against the embedding column will be brute force/linear
scan, but that gives reasonable performance for small-medium dataset
sizes.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-22 10:46:37 -07:00
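Skipping the index means every similarity query becomes a linear scan over the embedding column. Conceptually it works like the NumPy sketch below (an illustration of brute-force search, not the ClickHouse implementation):

```python
import numpy as np


def brute_force_top_k(query: np.ndarray, embeddings: np.ndarray, k: int) -> np.ndarray:
    """Return indices of the k most cosine-similar rows via a linear scan: O(n * d)."""
    q = query / np.linalg.norm(query)
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = e @ q  # one dot product per stored vector
    return np.argsort(-sims)[:k]


rng = np.random.default_rng(0)
vectors = rng.normal(size=(500, 64))              # small-medium dataset
query = vectors[42] + 0.01 * rng.normal(size=64)  # a vector very close to row 42
top = brute_force_top_k(query, vectors, k=3)
```

For datasets of this size, the full scan is fast enough that the experimental index is not worth its instability.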
ccurme
c010ec8b71 patch: deprecate (a)get_relevant_documents (#20477)
- `.get_relevant_documents(query)` -> `.invoke(query)`
- `.get_relevant_documents(query=query)` -> `.invoke(query)`
- `.get_relevant_documents(query, callbacks=callbacks)` ->
`.invoke(query, config={"callbacks": callbacks})`
- `.get_relevant_documents(query, **kwargs)` -> `.invoke(query,
**kwargs)`

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-04-22 11:14:53 -04:00
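The migration above is mechanical. With a stub retriever (a hypothetical stand-in, not the real base class) the old and new call sites look like this:

```python
from typing import Any, Optional


class StubRetriever:
    """Hypothetical stand-in illustrating the deprecation, not the real API."""

    def invoke(self, query: str, config: Optional[dict] = None, **kwargs: Any) -> list:
        # Callbacks now travel inside the `config` dict.
        for cb in (config or {}).get("callbacks", []):
            cb(query)
        return [f"doc for: {query}"]

    def get_relevant_documents(self, query: str, callbacks: list = (), **kwargs: Any) -> list:
        # Deprecated entry point: simply forwards to invoke().
        return self.invoke(query, config={"callbacks": list(callbacks)}, **kwargs)


retriever = StubRetriever()
seen = []
old_style = retriever.get_relevant_documents("turtles", callbacks=[seen.append])
new_style = retriever.invoke("turtles", config={"callbacks": [seen.append]})
```

Both calls produce the same result; only the way callbacks are threaded through changes.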
A Noor
939d113d10 docs: Fixed grammar mistake (#20697)
Description: Changed "You are" to "You are a". Grammar issue.
Dependencies: None

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-22 02:55:05 +00:00
Matheus Henrique Raymundo
bb69819267 community: Fix the stop sequence key name for Mistral in Bedrock (#20709)
Fixing the wrong stop sequence key name that causes an error on AWS
Bedrock.
You can check the MistralAI bedrock parameters
[here](https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-mistral.html)
This change fixes this
[issue](https://github.com/langchain-ai/langchain/issues/20095)
2024-04-21 20:06:06 -04:00
Bagatur
1c7b3c75a7 community[patch], experimental[patch]: support tool-calling sql and p… (#20639)
d agents
2024-04-21 15:43:09 -07:00
Bagatur
d0cee65cdc langchain[patch]: langchain-pinecone self query support (#20702) 2024-04-21 15:42:39 -07:00
Leonid Kuligin
5ae738c4fe docs: on google-genai vs google-vertexai (#20713)
- **Description:** added a description of the differences between
langchain_google_genai and langchain_google_vertexai
2024-04-21 12:53:19 -07:00
shumway743
cb6e5e56c2 community[minor]: add graph store implementation for apache age (#20582)
**Description:** implemented GraphStore class for Apache Age graph db

**Dependencies:** depends on psycopg2

Unit and integration tests included. Formatting and linting have been
run.

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-20 14:31:04 -07:00
Christophe Bornet
c909ae0152 community[minor]: Add async methods to CassandraVectorStore (#20602)
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-04-20 02:09:58 +00:00
Leonid Ganeline
06d18c106d langchain[patch]: example_selector import fix (#20676)
Cleaned up updated imports
2024-04-19 21:42:18 -04:00
Leonid Ganeline
d6470aab60 langchain: docstore import fix (#20678)
Cleaned up imports
2024-04-19 21:41:36 -04:00
Leonid Ganeline
3a750e130c templates: utilities import fix (#20679)
Updated imports from `from langchain.utilities` to `from
langchain_community.utilities`
2024-04-19 21:41:15 -04:00
Dmitry Tyumentsev
f111efeb6e community[patch]: YandexGPT API add ability to disable request logging (#20670)
Closes (#20622)

Added the ability to [disable logging of requests to
YandexGPT](https://yandex.cloud/en/docs/foundation-models/operations/yandexgpt/disable-logging).
2024-04-19 21:40:37 -04:00
298 changed files with 6937 additions and 3971 deletions

View File

@@ -1,59 +0,0 @@
name: release note experiments
run-name: Release note for ${{ inputs.working-directory }} by @${{ github.actor }}
on:
  workflow_dispatch:
    inputs:
      working-directory:
        required: true
        type: string
        default: 'libs/langchain'
env:
  PYTHON_VERSION: "3.11"
  POETRY_VERSION: "1.7.1"
jobs:
  build:
    runs-on: ubuntu-latest
    outputs:
      pkg-name: ${{ steps.check-version.outputs.pkg-name }}
      version: ${{ steps.check-version.outputs.version }}
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python + Poetry ${{ env.POETRY_VERSION }}
        uses: "./.github/actions/poetry_setup"
        with:
          python-version: ${{ env.PYTHON_VERSION }}
          poetry-version: ${{ env.POETRY_VERSION }}
          working-directory: ${{ inputs.working-directory }}
          cache-key: release
      - name: Check Version
        id: check-version
        shell: bash
        working-directory: ${{ inputs.working-directory }}
        run: |
          echo pkg-name="$(poetry version | cut -d ' ' -f 1)" >> $GITHUB_OUTPUT
          echo version="$(poetry version --short)" >> $GITHUB_OUTPUT
  release-notes:
    needs:
      - build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python + Poetry ${{ env.POETRY_VERSION }}
        uses: "./.github/actions/poetry_setup"
        with:
          python-version: ${{ env.PYTHON_VERSION }}
          poetry-version: ${{ env.POETRY_VERSION }}
          working-directory: ${{ inputs.working-directory }}
          cache-key: release
      - name: Generate Release Notes
        env:
          TAG_NAME: ${{ needs.build.outputs.pkg-name }}-v${{ needs.build.outputs.version }}
          RELEASE_NAME: ${{ needs.build.outputs.pkg-name }}==${{ needs.build.outputs.version }}
        run: |
          echo "TAG_NAME=${TAG_NAME}"
          echo "RELEASE_NAME=${RELEASE_NAME}"

View File

@@ -604,7 +604,7 @@
"source": [
"# Check retrieval\n",
"query = \"Give me company names that are interesting investments based on EV / NTM and NTM rev growth. Consider EV / NTM multiples vs historical?\"\n",
"docs = retriever_multi_vector_img.get_relevant_documents(query, limit=6)\n",
"docs = retriever_multi_vector_img.invoke(query, limit=6)\n",
"\n",
"# We get 4 docs\n",
"len(docs)"
@@ -630,7 +630,7 @@
"source": [
"# Check retrieval\n",
"query = \"What are the EV / NTM and NTM rev growth for MongoDB, Cloudflare, and Datadog?\"\n",
"docs = retriever_multi_vector_img.get_relevant_documents(query, limit=6)\n",
"docs = retriever_multi_vector_img.invoke(query, limit=6)\n",
"\n",
"# We get 4 docs\n",
"len(docs)"

View File

@@ -604,7 +604,7 @@
],
"source": [
"query = \"What are the EV / NTM and NTM rev growth for MongoDB, Cloudflare, and Datadog?\"\n",
"docs = retriever_multi_vector_img.get_relevant_documents(query, limit=1)\n",
"docs = retriever_multi_vector_img.invoke(query, limit=1)\n",
"\n",
"# We get 2 docs\n",
"len(docs)"

View File

@@ -75,7 +75,7 @@
"\n",
"Apply to the [`LLaMA2`](https://arxiv.org/pdf/2307.09288.pdf) paper. \n",
"\n",
"We use the Unstructured [`partition_pdf`](https://unstructured-io.github.io/unstructured/bricks/partition.html#partition-pdf), which segments a PDF document by using a layout model. \n",
"We use the Unstructured [`partition_pdf`](https://unstructured-io.github.io/unstructured/core/partition.html#partition-pdf), which segments a PDF document by using a layout model. \n",
"\n",
"This layout model makes it possible to extract elements, such as tables, from pdfs. \n",
"\n",

View File

@@ -562,9 +562,7 @@
],
"source": [
"# We can retrieve this table\n",
"retriever.get_relevant_documents(\n",
" \"What are results for LLaMA across across domains / subjects?\"\n",
")[1]"
"retriever.invoke(\"What are results for LLaMA across across domains / subjects?\")[1]"
]
},
{
@@ -614,9 +612,7 @@
}
],
"source": [
"retriever.get_relevant_documents(\"Images / figures with playful and creative examples\")[\n",
" 1\n",
"]"
"retriever.invoke(\"Images / figures with playful and creative examples\")[1]"
]
},
{

View File

@@ -501,9 +501,7 @@
}
],
"source": [
"retriever.get_relevant_documents(\"Images / figures with playful and creative examples\")[\n",
" 0\n",
"]"
"retriever.invoke(\"Images / figures with playful and creative examples\")[0]"
]
},
{

View File

@@ -342,7 +342,7 @@
"# Testing on retrieval\n",
"query = \"What percentage of CPI is dedicated to Housing, and how does it compare to the combined percentage of Medical Care, Apparel, and Other Goods and Services?\"\n",
"suffix_for_images = \" Include any pie charts, graphs, or tables.\"\n",
"docs = retriever_multi_vector_img.get_relevant_documents(query + suffix_for_images)"
"docs = retriever_multi_vector_img.invoke(query + suffix_for_images)"
]
},
{

View File

@@ -169,7 +169,7 @@
"\n",
"def get_tools(query):\n",
" # Get documents, which contain the Plugins to use\n",
" docs = retriever.get_relevant_documents(query)\n",
" docs = retriever.invoke(query)\n",
" # Get the toolkits, one for each plugin\n",
" tool_kits = [toolkits_dict[d.metadata[\"plugin_name\"]] for d in docs]\n",
" # Get the tools: a separate NLAChain for each endpoint\n",

View File

@@ -193,7 +193,7 @@
"\n",
"def get_tools(query):\n",
" # Get documents, which contain the Plugins to use\n",
" docs = retriever.get_relevant_documents(query)\n",
" docs = retriever.invoke(query)\n",
" # Get the toolkits, one for each plugin\n",
" tool_kits = [toolkits_dict[d.metadata[\"plugin_name\"]] for d in docs]\n",
" # Get the tools: a separate NLAChain for each endpoint\n",

View File

@@ -142,7 +142,7 @@
"\n",
"\n",
"def get_tools(query):\n",
" docs = retriever.get_relevant_documents(query)\n",
" docs = retriever.invoke(query)\n",
" return [ALL_TOOLS[d.metadata[\"index\"]] for d in docs]"
]
},

View File

@@ -206,7 +206,7 @@
" print(\"---RETRIEVE---\")\n",
" state_dict = state[\"keys\"]\n",
" question = state_dict[\"question\"]\n",
" documents = retriever.get_relevant_documents(question)\n",
" documents = retriever.invoke(question)\n",
" return {\"keys\": {\"documents\": documents, \"question\": question}}\n",
"\n",
"\n",

View File

@@ -213,7 +213,7 @@
" print(\"---RETRIEVE---\")\n",
" state_dict = state[\"keys\"]\n",
" question = state_dict[\"question\"]\n",
" documents = retriever.get_relevant_documents(question)\n",
" documents = retriever.invoke(question)\n",
" return {\"keys\": {\"documents\": documents, \"question\": question}}\n",
"\n",
"\n",

View File

@@ -435,7 +435,7 @@
" display(HTML(image_html))\n",
"\n",
"\n",
"docs = retriever.get_relevant_documents(\"Woman with children\", k=10)\n",
"docs = retriever.invoke(\"Woman with children\", k=10)\n",
"for doc in docs:\n",
" if is_base64(doc.page_content):\n",
" plt_img_base64(doc.page_content)\n",

View File

@@ -443,7 +443,7 @@
"\n",
"\n",
"query = \"Woman with children\"\n",
"docs = retriever.get_relevant_documents(query, k=10)\n",
"docs = retriever.invoke(query, k=10)\n",
"\n",
"for doc in docs:\n",
" if is_base64(doc.page_content):\n",

View File

@@ -168,7 +168,7 @@
"\n",
"retriever = vector_store.as_retriever(search_type=\"similarity\", search_kwargs={\"k\": 3})\n",
"\n",
"retrieved_docs = retriever.get_relevant_documents(\"<your question>\")\n",
"retrieved_docs = retriever.invoke(\"<your question>\")\n",
"\n",
"print(retrieved_docs[0].page_content)\n",
"\n",

View File

@@ -1227,7 +1227,7 @@
}
],
"source": [
"results = retriever.get_relevant_documents(\n",
"results = retriever.invoke(\n",
" \"I want to stay somewhere highly rated along the coast. I want a room with a patio and a fireplace.\"\n",
")\n",
"for res in results:\n",

View File

@@ -19,6 +19,9 @@ poetry run python scripts/copy_templates.py
wget -q https://raw.githubusercontent.com/langchain-ai/langserve/main/README.md -O docs/langserve.md
wget -q https://raw.githubusercontent.com/langchain-ai/langgraph/main/README.md -O docs/langgraph.md
yarn
poetry run quarto preview docs
poetry run quarto render docs
poetry run python scripts/generate_api_reference_links.py --docs_dir docs
yarn
yarn start

File diff suppressed because one or more lines are too long

View File

@@ -194,7 +194,7 @@ Prompt templates convert raw user input to better input to the LLM.
```python
from langchain_core.prompts import ChatPromptTemplate
prompt = ChatPromptTemplate.from_messages([
("system", "You are world class technical documentation writer."),
("system", "You are a world class technical documentation writer."),
("user", "{input}")
])
```

View File

@@ -9,7 +9,7 @@
"\n",
"This notebook shows how to prevent prompt injection attacks using the text classification model from `HuggingFace`.\n",
"\n",
"By default, it uses a *[laiyer/deberta-v3-base-prompt-injection](https://huggingface.co/laiyer/deberta-v3-base-prompt-injection)* model trained to identify prompt injections. \n",
"By default, it uses a *[protectai/deberta-v3-base-prompt-injection-v2](https://huggingface.co/protectai/deberta-v3-base-prompt-injection-v2)* model trained to identify prompt injections. \n",
"\n",
"In this notebook, we will use the ONNX version of the model to speed up the inference. "
]
@@ -49,11 +49,15 @@
"from optimum.onnxruntime import ORTModelForSequenceClassification\n",
"from transformers import AutoTokenizer, pipeline\n",
"\n",
"# Using https://huggingface.co/laiyer/deberta-v3-base-prompt-injection\n",
"model_path = \"laiyer/deberta-v3-base-prompt-injection\"\n",
"tokenizer = AutoTokenizer.from_pretrained(model_path)\n",
"tokenizer.model_input_names = [\"input_ids\", \"attention_mask\"] # Hack to run the model\n",
"model = ORTModelForSequenceClassification.from_pretrained(model_path, subfolder=\"onnx\")\n",
"# Using https://huggingface.co/protectai/deberta-v3-base-prompt-injection-v2\n",
"model_path = \"protectai/deberta-v3-base-prompt-injection-v2\"\n",
"revision = None # We recommend specifying the revision to avoid breaking changes or supply chain attacks\n",
"tokenizer = AutoTokenizer.from_pretrained(\n",
" model_path, revision=revision, model_input_names=[\"input_ids\", \"attention_mask\"]\n",
")\n",
"model = ORTModelForSequenceClassification.from_pretrained(\n",
" model_path, revision=revision, subfolder=\"onnx\"\n",
")\n",
"\n",
"classifier = pipeline(\n",
" \"text-classification\",\n",

View File

@@ -184,7 +184,7 @@
"\n",
"query = \"Qual o tempo máximo para realização da prova?\"\n",
"\n",
"docs = retriever.get_relevant_documents(query)\n",
"docs = retriever.invoke(query)\n",
"\n",
"chain.invoke(\n",
" {\"input_documents\": docs, \"query\": query}\n",

View File

@@ -630,7 +630,7 @@
],
"source": [
"# Query retriever, should return parents (using MMR since that was set as search_type above)\n",
"retrieved_parent_docs = retriever.get_relevant_documents(\n",
"retrieved_parent_docs = retriever.invoke(\n",
" \"what signs does Birch Street allow on their property?\"\n",
")\n",
"for chunk in retrieved_parent_docs:\n",

View File

@@ -97,7 +97,7 @@
" # delete the gpt-4 model_name to use the default gpt-3.5 turbo for faster results\n",
" gpt_4 = ChatOpenAI(temperature=0.02, model_name=\"gpt-4\")\n",
" # Use the retriever's 'get_relevant_documents' method if needed to filter down longer docs\n",
" relevant_nodes = figma_doc_retriever.get_relevant_documents(human_input)\n",
" relevant_nodes = figma_doc_retriever.invoke(human_input)\n",
" conversation = [system_message_prompt, human_message_prompt]\n",
" chat_prompt = ChatPromptTemplate.from_messages(conversation)\n",
" response = gpt_4(\n",

View File

@@ -50,7 +50,7 @@
},
"outputs": [],
"source": [
"from langchain_community.document_loaders import GoogleDriveLoader"
"from langchain_google_community import GoogleDriveLoader"
]
},
{
@@ -339,7 +339,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import GoogleDriveLoader\n",
"from langchain_google_community import GoogleDriveLoader\n",
"\n",
"loader = GoogleDriveLoader(\n",
" folder_id=folder_id,\n",

View File

@@ -99,7 +99,7 @@
],
"source": [
"# Test the retriever\n",
"spreedly_doc_retriever.get_relevant_documents(\"CRC\")"
"spreedly_doc_retriever.invoke(\"CRC\")"
]
},
{

View File

@@ -82,7 +82,7 @@
")\n",
"\n",
"query = \"What is the plan for the economy?\"\n",
"docs = retriever.get_relevant_documents(query)\n",
"docs = retriever.invoke(query)\n",
"pretty_print_docs(docs)"
]
},
@@ -162,9 +162,7 @@
" base_compressor=compressor, base_retriever=retriever\n",
")\n",
"\n",
"compressed_docs = compression_retriever.get_relevant_documents(\n",
" \"What is the plan for the economy?\"\n",
")\n",
"compressed_docs = compression_retriever.invoke(\"What is the plan for the economy?\")\n",
"pretty_print_docs(compressed_docs)"
]
},

View File

@@ -350,7 +350,7 @@
"retriever = FAISS.from_documents(texts, embedding).as_retriever(search_kwargs={\"k\": 20})\n",
"\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"docs = retriever.get_relevant_documents(query)\n",
"docs = retriever.invoke(query)\n",
"pretty_print_docs(docs)"
]
},
@@ -388,7 +388,7 @@
" base_compressor=ov_compressor, base_retriever=retriever\n",
")\n",
"\n",
"compressed_docs = compression_retriever.get_relevant_documents(\n",
"compressed_docs = compression_retriever.invoke(\n",
" \"What did the president say about Ketanji Jackson Brown\"\n",
")\n",
"print([doc.metadata[\"id\"] for doc in compressed_docs])"

View File

@@ -320,7 +320,7 @@
").as_retriever(search_kwargs={\"k\": 20})\n",
"\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"docs = retriever.get_relevant_documents(query)\n",
"docs = retriever.invoke(query)\n",
"pretty_print_docs(docs)"
]
},
@@ -382,7 +382,7 @@
" base_compressor=compressor, base_retriever=retriever\n",
")\n",
"\n",
"compressed_docs = compression_retriever.get_relevant_documents(\n",
"compressed_docs = compression_retriever.invoke(\n",
" \"What did the president say about Ketanji Jackson Brown\"\n",
")\n",
"pretty_print_docs(compressed_docs)"

View File

@@ -0,0 +1,689 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "c94240f5",
"metadata": {},
"source": [
"# Apache AGE\n",
"\n",
">[Apache AGE](https://age.apache.org/) is a PostgreSQL extension that provides graph database functionality. AGE is an acronym for A Graph Extension, and is inspired by Bitnine's fork of PostgreSQL 10, AgensGraph, which is a multi-model database. The goal of the project is to create a single storage that can handle both relational and graph model data so that users can use standard ANSI SQL along with openCypher, the graph query language. The data elements `Apache AGE` stores are nodes, edges connecting them, and attributes of nodes and edges.\n",
"\n",
">This notebook shows how to use LLMs to provide a natural language interface to a graph database you can query with the `Cypher` query language.\n",
"\n",
">[Cypher](https://en.wikipedia.org/wiki/Cypher_(query_language)) is a declarative graph query language that allows for expressive and efficient data querying in a property graph.\n"
]
},
{
"cell_type": "markdown",
"id": "dbc0ee68",
"metadata": {},
"source": [
"## Setting up\n",
"\n",
"You will need to have a running `PostgreSQL` instance with the AGE extension installed. One option for testing is to run a docker container using the official AGE docker image.\n",
"You can run a local docker container by executing the following script:\n",
"\n",
"```\n",
"docker run \\\n",
" --name age \\\n",
" -p 5432:5432 \\\n",
" -e POSTGRES_USER=postgresUser \\\n",
" -e POSTGRES_PASSWORD=postgresPW \\\n",
" -e POSTGRES_DB=postgresDB \\\n",
" -d \\\n",
" apache/age\n",
"```\n",
"\n",
"Additional instructions on running in docker can be found [here](https://hub.docker.com/r/apache/age)."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "62812aad",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import GraphCypherQAChain\n",
"from langchain_community.graphs.age_graph import AGEGraph\n",
"from langchain_openai import ChatOpenAI"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "0928915d",
"metadata": {},
"outputs": [],
"source": [
"conf = {\n",
" \"database\": \"postgresDB\",\n",
" \"user\": \"postgresUser\",\n",
" \"password\": \"postgresPW\",\n",
" \"host\": \"localhost\",\n",
" \"port\": 5432,\n",
"}\n",
"\n",
"graph = AGEGraph(graph_name=\"age_test\", conf=conf)"
]
},
{
"cell_type": "markdown",
"id": "995ea9b9",
"metadata": {},
"source": [
"## Seeding the database\n",
"\n",
"Assuming your database is empty, you can populate it using the Cypher query language. The following Cypher statement is idempotent, which means the database information will be the same whether you run it once or multiple times."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "fedd26b9",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[]"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"graph.query(\n",
" \"\"\"\n",
"MERGE (m:Movie {name:\"Top Gun\"})\n",
"WITH m\n",
"UNWIND [\"Tom Cruise\", \"Val Kilmer\", \"Anthony Edwards\", \"Meg Ryan\"] AS actor\n",
"MERGE (a:Actor {name:actor})\n",
"MERGE (a)-[:ACTED_IN]->(m)\n",
"\"\"\"\n",
")"
]
},
{
"cell_type": "markdown",
"id": "58c1a8ea",
"metadata": {},
"source": [
"## Refresh graph schema information\n",
"If the schema of the database changes, you can refresh the schema information needed to generate Cypher statements."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "4e3de44f",
"metadata": {},
"outputs": [],
"source": [
"graph.refresh_schema()"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "1fe76ccd",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
" Node properties are the following:\n",
" [{'properties': [{'property': 'name', 'type': 'STRING'}], 'labels': 'Actor'}, {'properties': [{'property': 'property_a', 'type': 'STRING'}], 'labels': 'LabelA'}, {'properties': [], 'labels': 'LabelB'}, {'properties': [], 'labels': 'LabelC'}, {'properties': [{'property': 'name', 'type': 'STRING'}], 'labels': 'Movie'}]\n",
" Relationship properties are the following:\n",
" [{'properties': [], 'type': 'ACTED_IN'}, {'properties': [{'property': 'rel_prop', 'type': 'STRING'}], 'type': 'REL_TYPE'}]\n",
" The relationships are the following:\n",
" ['(:`Actor`)-[:`ACTED_IN`]->(:`Movie`)', '(:`LabelA`)-[:`REL_TYPE`]->(:`LabelB`)', '(:`LabelA`)-[:`REL_TYPE`]->(:`LabelC`)']\n",
" \n"
]
}
],
"source": [
"print(graph.schema)"
]
},
{
"cell_type": "markdown",
"id": "68a3c677",
"metadata": {},
"source": [
"## Querying the graph\n",
"\n",
    "We can now use the graph Cypher QA chain to ask questions of the graph."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "7476ce98",
"metadata": {},
"outputs": [],
"source": [
"chain = GraphCypherQAChain.from_llm(\n",
" ChatOpenAI(temperature=0), graph=graph, verbose=True\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "ef8ee27b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new GraphCypherQAChain chain...\u001b[0m\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Generated Cypher:\n",
"\u001b[32;1m\u001b[1;3mMATCH (a:Actor)-[:ACTED_IN]->(m:Movie)\n",
"WHERE m.name = 'Top Gun'\n",
"RETURN a.name\u001b[0m\n",
"Full Context:\n",
"\u001b[32;1m\u001b[1;3m[{'name': 'Tom Cruise'}, {'name': 'Val Kilmer'}, {'name': 'Anthony Edwards'}, {'name': 'Meg Ryan'}]\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"{'query': 'Who played in Top Gun?',\n",
" 'result': 'Tom Cruise, Val Kilmer, Anthony Edwards, Meg Ryan played in Top Gun.'}"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke(\"Who played in Top Gun?\")"
]
},
{
"cell_type": "markdown",
"id": "2d28c4df",
"metadata": {},
"source": [
"## Limit the number of results\n",
"You can limit the number of results from the Cypher QA Chain using the `top_k` parameter.\n",
"The default is 10."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "df230946",
"metadata": {},
"outputs": [],
"source": [
"chain = GraphCypherQAChain.from_llm(\n",
" ChatOpenAI(temperature=0), graph=graph, verbose=True, top_k=2\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "3f1600ee",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new GraphCypherQAChain chain...\u001b[0m\n",
"Generated Cypher:\n",
"\u001b[32;1m\u001b[1;3mMATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'})\n",
"RETURN a.name\u001b[0m\n",
"Full Context:\n",
"\u001b[32;1m\u001b[1;3m[{'name': 'Tom Cruise'}, {'name': 'Val Kilmer'}]\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"{'query': 'Who played in Top Gun?',\n",
" 'result': 'Tom Cruise, Val Kilmer played in Top Gun.'}"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke(\"Who played in Top Gun?\")"
]
},
{
"cell_type": "markdown",
"id": "88c16206",
"metadata": {},
"source": [
"## Return intermediate results\n",
    "You can return intermediate steps from the Cypher QA Chain using the `return_intermediate_steps` parameter."
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "e412f36b",
"metadata": {},
"outputs": [],
"source": [
"chain = GraphCypherQAChain.from_llm(\n",
" ChatOpenAI(temperature=0), graph=graph, verbose=True, return_intermediate_steps=True\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 23,
"id": "4f4699dc",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new GraphCypherQAChain chain...\u001b[0m\n",
"Generated Cypher:\n",
"\u001b[32;1m\u001b[1;3mMATCH (a:Actor)-[:ACTED_IN]->(m:Movie)\n",
"WHERE m.name = 'Top Gun'\n",
"RETURN a.name\u001b[0m\n",
"Full Context:\n",
"\u001b[32;1m\u001b[1;3m[{'name': 'Tom Cruise'}, {'name': 'Val Kilmer'}, {'name': 'Anthony Edwards'}, {'name': 'Meg Ryan'}]\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"Intermediate steps: [{'query': \"MATCH (a:Actor)-[:ACTED_IN]->(m:Movie)\\nWHERE m.name = 'Top Gun'\\nRETURN a.name\"}, {'context': [{'name': 'Tom Cruise'}, {'name': 'Val Kilmer'}, {'name': 'Anthony Edwards'}, {'name': 'Meg Ryan'}]}]\n",
"Final answer: Tom Cruise, Val Kilmer, Anthony Edwards, Meg Ryan played in Top Gun.\n"
]
}
],
"source": [
    "result = chain.invoke(\"Who played in Top Gun?\")\n",
"print(f\"Intermediate steps: {result['intermediate_steps']}\")\n",
"print(f\"Final answer: {result['result']}\")"
]
},
{
"cell_type": "markdown",
"id": "d6e1b054",
"metadata": {},
"source": [
"## Return direct results\n",
    "You can return direct results from the Cypher QA Chain using the `return_direct` parameter."
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "2d3acf10",
"metadata": {},
"outputs": [],
"source": [
"chain = GraphCypherQAChain.from_llm(\n",
" ChatOpenAI(temperature=0), graph=graph, verbose=True, return_direct=True\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "b0a9d143",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new GraphCypherQAChain chain...\u001b[0m\n",
"Generated Cypher:\n",
"\u001b[32;1m\u001b[1;3mMATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'})\n",
"RETURN a.name\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"{'query': 'Who played in Top Gun?',\n",
" 'result': [{'name': 'Tom Cruise'},\n",
" {'name': 'Val Kilmer'},\n",
" {'name': 'Anthony Edwards'},\n",
" {'name': 'Meg Ryan'}]}"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke(\"Who played in Top Gun?\")"
]
},
{
"cell_type": "markdown",
"id": "f01dfb72-24ec-4ae7-883a-ee6646889b59",
"metadata": {},
"source": [
"## Add examples in the Cypher generation prompt\n",
    "You can define the Cypher statement you want the LLM to generate for particular questions."
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "59baeb88-adfa-4c26-8334-fcbff3a98efb",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts.prompt import PromptTemplate\n",
"\n",
"CYPHER_GENERATION_TEMPLATE = \"\"\"Task:Generate Cypher statement to query a graph database.\n",
"Instructions:\n",
"Use only the provided relationship types and properties in the schema.\n",
"Do not use any other relationship types or properties that are not provided.\n",
"Schema:\n",
"{schema}\n",
"Note: Do not include any explanations or apologies in your responses.\n",
"Do not respond to any questions that might ask anything else than for you to construct a Cypher statement.\n",
"Do not include any text except the generated Cypher statement.\n",
"Examples: Here are a few examples of generated Cypher statements for particular questions:\n",
"# How many people played in Top Gun?\n",
"MATCH (m:Movie {{title:\"Top Gun\"}})<-[:ACTED_IN]-()\n",
"RETURN count(*) AS numberOfActors\n",
"\n",
"The question is:\n",
"{question}\"\"\"\n",
"\n",
"CYPHER_GENERATION_PROMPT = PromptTemplate(\n",
" input_variables=[\"schema\", \"question\"], template=CYPHER_GENERATION_TEMPLATE\n",
")\n",
"\n",
"chain = GraphCypherQAChain.from_llm(\n",
" ChatOpenAI(temperature=0),\n",
" graph=graph,\n",
" verbose=True,\n",
" cypher_prompt=CYPHER_GENERATION_PROMPT,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "47c64027-cf42-493a-9c76-2d10ba753728",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new GraphCypherQAChain chain...\u001b[0m\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Generated Cypher:\n",
"\u001b[32;1m\u001b[1;3mMATCH (:Movie {name:\"Top Gun\"})<-[:ACTED_IN]-(:Actor)\n",
"RETURN count(*) AS numberOfActors\u001b[0m\n",
"Full Context:\n",
"\u001b[32;1m\u001b[1;3m[{'numberofactors': 4}]\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"{'query': 'How many people played in Top Gun?',\n",
" 'result': \"I don't know the answer.\"}"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke(\"How many people played in Top Gun?\")"
]
},
{
"cell_type": "markdown",
"id": "3e721cad-aa87-4526-9231-2dfc0e365939",
"metadata": {},
"source": [
"## Use separate LLMs for Cypher and answer generation\n",
    "You can use the `cypher_llm` and `qa_llm` parameters to define different LLMs."
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "6f9becc2-f579-45bf-9b50-2ce02bde92da",
"metadata": {},
"outputs": [],
"source": [
"chain = GraphCypherQAChain.from_llm(\n",
" graph=graph,\n",
" cypher_llm=ChatOpenAI(temperature=0, model=\"gpt-3.5-turbo\"),\n",
" qa_llm=ChatOpenAI(temperature=0, model=\"gpt-3.5-turbo-16k\"),\n",
" verbose=True,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "ff18e3e3-3402-4683-aec4-a19898f23ca1",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new GraphCypherQAChain chain...\u001b[0m\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Generated Cypher:\n",
"\u001b[32;1m\u001b[1;3mMATCH (a:Actor)-[:ACTED_IN]->(m:Movie)\n",
"WHERE m.name = 'Top Gun'\n",
"RETURN a.name\u001b[0m\n",
"Full Context:\n",
"\u001b[32;1m\u001b[1;3m[{'name': 'Tom Cruise'}, {'name': 'Val Kilmer'}, {'name': 'Anthony Edwards'}, {'name': 'Meg Ryan'}]\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"{'query': 'Who played in Top Gun?',\n",
" 'result': 'Tom Cruise, Val Kilmer, Anthony Edwards, and Meg Ryan played in Top Gun.'}"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke(\"Who played in Top Gun?\")"
]
},
{
"cell_type": "markdown",
"id": "eefea16b-508f-4552-8942-9d5063ed7d37",
"metadata": {},
"source": [
"## Ignore specified node and relationship types\n",
"\n",
"You can use `include_types` or `exclude_types` to ignore parts of the graph schema when generating Cypher statements."
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "a20fa21e-fb85-41c4-aac0-53fb25e34604",
"metadata": {},
"outputs": [],
"source": [
"chain = GraphCypherQAChain.from_llm(\n",
" graph=graph,\n",
" cypher_llm=ChatOpenAI(temperature=0, model=\"gpt-3.5-turbo\"),\n",
" qa_llm=ChatOpenAI(temperature=0, model=\"gpt-3.5-turbo-16k\"),\n",
" verbose=True,\n",
" exclude_types=[\"Movie\"],\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "3ad7f6b8-543e-46e4-a3b2-40fa3e66e895",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Node properties are the following:\n",
"Actor {name: STRING},LabelA {property_a: STRING},LabelB {},LabelC {}\n",
"Relationship properties are the following:\n",
"ACTED_IN {},REL_TYPE {rel_prop: STRING}\n",
"The relationships are the following:\n",
"(:LabelA)-[:REL_TYPE]->(:LabelB),(:LabelA)-[:REL_TYPE]->(:LabelC)\n"
]
}
],
"source": [
"# Inspect graph schema\n",
"print(chain.graph_schema)"
]
},
{
"cell_type": "markdown",
"id": "f0202e88-d700-40ed-aef9-0c969c7bf951",
"metadata": {},
"source": [
"## Validate generated Cypher statements\n",
    "You can use the `validate_cypher` parameter to validate and correct relationship directions in generated Cypher statements."
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "53665d03-7afd-433c-bdd5-750127bfb152",
"metadata": {},
"outputs": [],
"source": [
"chain = GraphCypherQAChain.from_llm(\n",
" llm=ChatOpenAI(temperature=0, model=\"gpt-3.5-turbo\"),\n",
" graph=graph,\n",
" verbose=True,\n",
" validate_cypher=True,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "19e1a591-9c10-4d7b-aa36-a5e1b778a97b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new GraphCypherQAChain chain...\u001b[0m\n",
"Generated Cypher:\n",
"\u001b[32;1m\u001b[1;3mMATCH (a:Actor)-[:ACTED_IN]->(m:Movie)\n",
"WHERE m.name = 'Top Gun'\n",
"RETURN a.name\u001b[0m\n",
"Full Context:\n",
"\u001b[32;1m\u001b[1;3m[{'name': 'Tom Cruise'}, {'name': 'Val Kilmer'}, {'name': 'Anthony Edwards'}, {'name': 'Meg Ryan'}]\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"{'query': 'Who played in Top Gun?',\n",
" 'result': 'Tom Cruise, Val Kilmer, Anthony Edwards, Meg Ryan played in Top Gun.'}"
]
},
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke(\"Who played in Top Gun?\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.19"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -19,26 +19,22 @@
"\n",
"To complete this tutorial, you will need [Docker](https://www.docker.com/get-started/) and [Python 3.x](https://www.python.org/) installed.\n",
"\n",
"Ensure you have a running `Memgraph` instance. You can download and run it in a local Docker container by executing the following script:\n",
"Ensure you have a running Memgraph instance. To quickly run Memgraph Platform (Memgraph database + MAGE library + Memgraph Lab) for the first time, do the following:\n",
"\n",
"On Linux/MacOS:\n",
"```\n",
"docker run \\\n",
" -it \\\n",
" -p 7687:7687 \\\n",
" -p 7444:7444 \\\n",
" -p 3000:3000 \\\n",
" -e MEMGRAPH=\"--bolt-server-name-for-init=Neo4j/\" \\\n",
" -v mg_lib:/var/lib/memgraph memgraph/memgraph-platform\n",
"curl https://install.memgraph.com | sh\n",
"```\n",
"\n",
"You will need to wait a few seconds for the database to start. If the process is completed successfully, you should see something like this:\n",
"On Windows:\n",
"```\n",
"mgconsole X.X\n",
"Connected to 'memgraph://127.0.0.1:7687'\n",
"Type :help for shell usage\n",
"Quit the shell by typing Ctrl-D(eof) or :quit\n",
"memgraph>\n",
"iwr https://windows.memgraph.com | iex\n",
"```\n",
"\n",
"Both commands run a script that downloads a Docker Compose file to your system, builds and starts `memgraph-mage` and `memgraph-lab` Docker services in two separate containers. \n",
"\n",
"Read more about the installation process on [Memgraph documentation](https://memgraph.com/docs/getting-started/install-memgraph).\n",
"\n",
"Now you can start playing with `Memgraph`!"
]
},
@@ -89,7 +85,7 @@
"id": "95ba37a4",
"metadata": {},
"source": [
"We're utilizing the Python library [GQLAlchemy](https://github.com/memgraph/gqlalchemy) to establish a connection between our Memgraph database and Python script. To execute queries, we can set up a Memgraph instance as follows:"
"We're utilizing the Python library [GQLAlchemy](https://github.com/memgraph/gqlalchemy) to establish a connection between our Memgraph database and Python script. You can establish the connection to a running Memgraph instance with the Neo4j driver as well, since it's compatible with Memgraph. To execute queries with GQLAlchemy, we can set up a Memgraph instance as follows:"
]
},
{

View File

@@ -4,6 +4,9 @@ All functionality related to [Google Cloud Platform](https://cloud.google.com/)
## LLMs
We recommend that individual developers start with the Gemini API (`langchain-google-genai`) and move to Vertex AI (`langchain-google-vertexai`) when they need access to commercial support and higher rate limits. If you're already Cloud-friendly or Cloud-native, then you can get started in Vertex AI straight away.
Please find more information [here](https://ai.google.dev/gemini-api/docs/migrate-to-cloud).
### Google Generative AI
Access GoogleAI `Gemini` models such as `gemini-pro` and `gemini-pro-vision` through the `GoogleGenerativeAI` class.
@@ -620,7 +623,7 @@ docai_wh_retriever = GoogleDocumentAIWarehouseRetriever(
project_number=...
)
query = ...
documents = docai_wh_retriever.get_relevant_documents(
documents = docai_wh_retriever.invoke(
query, user_ldap=...
)
```

View File

@@ -83,7 +83,7 @@ from langchain.retrievers import CohereRagRetriever
from langchain_core.documents import Document
rag = CohereRagRetriever(llm=ChatCohere())
print(rag.get_relevant_documents("What is cohere ai?"))
print(rag.invoke("What is cohere ai?"))
```
Usage of the Cohere [RAG Retriever](/docs/integrations/retrievers/cohere)

View File

@@ -22,5 +22,5 @@ from metal_sdk.metal import Metal
metal = Metal("API_KEY", "CLIENT_ID", "INDEX_ID");
retriever = MetalRetriever(metal, params={"limit": 2})
docs = retriever.get_relevant_documents("search term")
docs = retriever.invoke("search term")
```

View File

@@ -199,7 +199,7 @@
" base_compressor=RAG.as_langchain_document_compressor(), base_retriever=retriever\n",
")\n",
"\n",
"compressed_docs = compression_retriever.get_relevant_documents(\n",
"compressed_docs = compression_retriever.invoke(\n",
" \"What animation studio did Miyazaki found\"\n",
")"
]

View File

@@ -154,9 +154,7 @@
"openai_api_key = os.environ[\"OPENAI_API_KEY\"]\n",
"llm = OpenAI(openai_api_key=openai_api_key, temperature=0)\n",
"retriever = vectara.as_retriever()\n",
"d = retriever.get_relevant_documents(\n",
" \"What did the president say about Ketanji Brown Jackson\", k=2\n",
")\n",
"d = retriever.invoke(\"What did the president say about Ketanji Brown Jackson\", k=2)\n",
"print(d)"
]
},

View File

@@ -69,7 +69,7 @@
"metadata": {},
"outputs": [],
"source": [
"retriever.get_relevant_documents(\"what is langchain\")"
"retriever.invoke(\"what is langchain\")"
]
}
],

View File

@@ -83,7 +83,7 @@
"outputs": [],
"source": [
"query = \"Can AI-driven music therapy contribute to the rehabilitation of patients with disorders of consciousness?\"\n",
"documents = retriever.get_relevant_documents(query=query)"
"documents = retriever.invoke(query)"
]
},
{
@@ -108,7 +108,7 @@
"]\n",
"\n",
"# Retrieve documents with filters and size params\n",
"documents = retriever.get_relevant_documents(query=query, size=5, filters=filters)"
"documents = retriever.invoke(query, size=5, filters=filters)"
]
}
],

View File

@@ -97,7 +97,7 @@
"metadata": {},
"outputs": [],
"source": [
"docs = retriever.get_relevant_documents(query=\"1605.08386\")"
"docs = retriever.invoke(\"1605.08386\")"
]
},
{
@@ -162,7 +162,7 @@
},
"outputs": [
{
"name": "stdin",
"name": "stdout",
"output_type": "stream",
"text": [
" ········\n"

View File

@@ -117,7 +117,7 @@
"metadata": {},
"outputs": [],
"source": [
"retriever.get_relevant_documents(\"what is langchain?\")"
"retriever.invoke(\"what is langchain?\")"
]
},
{
@@ -263,7 +263,7 @@
}
],
"source": [
"retriever.get_relevant_documents(\"What is Azure OpenAI?\")"
"retriever.invoke(\"What is Azure OpenAI?\")"
]
}
],

View File

@@ -58,7 +58,7 @@
"source": [
"query = \"What did the president say about Ketanji Brown?\"\n",
"\n",
"retriever.get_relevant_documents(query=query)"
"retriever.invoke(query)"
]
},
{

View File

@@ -103,7 +103,7 @@
},
"outputs": [],
"source": [
"result = retriever.get_relevant_documents(\"foo\")"
"result = retriever.invoke(\"foo\")"
]
},
{

View File

@@ -64,7 +64,7 @@
"source": [
"breeb_key = \"Parivoyage\"\n",
"retriever = BreebsRetriever(breeb_key)\n",
"documents = retriever.get_relevant_documents(\n",
"documents = retriever.invoke(\n",
" \"What are some unique, lesser-known spots to explore in Paris?\"\n",
")\n",
"print(documents)"

View File

@@ -83,7 +83,7 @@
}
],
"source": [
"retriever.get_relevant_documents(\"What is Daftpage?\")"
"retriever.invoke(\"What is Daftpage?\")"
]
}
],

View File

@@ -150,7 +150,7 @@
}
],
"source": [
"retriever.get_relevant_documents(\"alice's phone number\")"
"retriever.invoke(\"alice's phone number\")"
]
},
{

View File

@@ -314,7 +314,7 @@
")\n",
"\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"docs = retriever.get_relevant_documents(query)\n",
"docs = retriever.invoke(query)\n",
"pretty_print_docs(docs)"
]
},
@@ -344,7 +344,7 @@
" base_compressor=compressor, base_retriever=retriever\n",
")\n",
"\n",
"compressed_docs = compression_retriever.get_relevant_documents(\n",
"compressed_docs = compression_retriever.invoke(\n",
" \"What did the president say about Ketanji Jackson Brown\"\n",
")\n",
"pretty_print_docs(compressed_docs)"

View File

@@ -118,7 +118,7 @@
}
],
"source": [
"_pretty_print(rag.get_relevant_documents(\"What is cohere ai?\"))"
"_pretty_print(rag.invoke(\"What is cohere ai?\"))"
]
},
{
@@ -172,7 +172,7 @@
}
],
"source": [
"_pretty_print(await rag.aget_relevant_documents(\"What is cohere ai?\")) # async version"
"_pretty_print(await rag.ainvoke(\"What is cohere ai?\")) # async version"
]
},
{
@@ -198,7 +198,7 @@
}
],
"source": [
"docs = rag.get_relevant_documents(\n",
"docs = rag.invoke(\n",
" \"Does langchain support cohere RAG?\",\n",
" source_documents=[\n",
" Document(page_content=\"Langchain supports cohere RAG!\"),\n",

View File

@@ -143,7 +143,7 @@
")\n",
"\n",
"# find the relevant document\n",
"doc = retriever.get_relevant_documents(\"some query\")\n",
"doc = retriever.invoke(\"some query\")\n",
"print(doc)"
]
},
@@ -216,7 +216,7 @@
")\n",
"\n",
"# find the relevant document\n",
"doc = retriever.get_relevant_documents(\"some query\")\n",
"doc = retriever.invoke(\"some query\")\n",
"print(doc)"
]
},
@@ -313,7 +313,7 @@
")\n",
"\n",
"# find the relevant document\n",
"doc = retriever.get_relevant_documents(\"some query\")\n",
"doc = retriever.invoke(\"some query\")\n",
"print(doc)"
]
},
@@ -388,7 +388,7 @@
")\n",
"\n",
"# find the relevant document\n",
"doc = retriever.get_relevant_documents(\"some query\")\n",
"doc = retriever.invoke(\"some query\")\n",
"print(doc)"
]
},
@@ -481,7 +481,7 @@
")\n",
"\n",
"# find the relevant document\n",
"doc = retriever.get_relevant_documents(\"some query\")\n",
"doc = retriever.invoke(\"some query\")\n",
"print(doc)"
]
},
@@ -658,7 +658,7 @@
")\n",
"\n",
"# find the relevant document\n",
"doc = retriever.get_relevant_documents(\"movie about dreams\")\n",
"doc = retriever.invoke(\"movie about dreams\")\n",
"print(doc)"
]
},
@@ -700,7 +700,7 @@
")\n",
"\n",
"# find relevant documents\n",
"docs = retriever.get_relevant_documents(\"space travel\")\n",
"docs = retriever.invoke(\"space travel\")\n",
"print(docs)"
]
},
@@ -743,7 +743,7 @@
")\n",
"\n",
"# find relevant documents\n",
"docs = retriever.get_relevant_documents(\"action movies\")\n",
"docs = retriever.invoke(\"action movies\")\n",
"print(docs)"
]
},

View File

@@ -158,7 +158,7 @@
"outputs": [],
"source": [
"query = \"Find information about Dria.\"\n",
"result = retriever.get_relevant_documents(query)\n",
"result = retriever.invoke(query)\n",
"for doc in result:\n",
" print(doc)"
]

View File

@@ -130,7 +130,7 @@
"metadata": {},
"outputs": [],
"source": [
"result = retriever.get_relevant_documents(\"foo\")"
"result = retriever.invoke(\"foo\")"
]
},
{

View File

@@ -263,7 +263,7 @@
" url=es_url,\n",
")\n",
"\n",
"vector_retriever.get_relevant_documents(\"foo\")"
"vector_retriever.invoke(\"foo\")"
]
},
{
@@ -313,7 +313,7 @@
" url=es_url,\n",
")\n",
"\n",
"bm25_retriever.get_relevant_documents(\"foo\")"
"bm25_retriever.invoke(\"foo\")"
]
},
{
@@ -371,7 +371,7 @@
" url=es_url,\n",
")\n",
"\n",
"hybrid_retriever.get_relevant_documents(\"foo\")"
"hybrid_retriever.invoke(\"foo\")"
]
},
{
@@ -424,7 +424,7 @@
" url=es_url,\n",
")\n",
"\n",
"fuzzy_retriever.get_relevant_documents(\"fox\") # note the character tolernace"
"fuzzy_retriever.invoke(\"fox\") # note the character tolerance"
]
},
{
@@ -483,7 +483,7 @@
" url=es_url,\n",
")\n",
"\n",
"filtering_retriever.get_relevant_documents(\"foo\")"
"filtering_retriever.invoke(\"foo\")"
]
},
{
@@ -541,7 +541,7 @@
" url=es_url,\n",
")\n",
"\n",
"custom_mapped_retriever.get_relevant_documents(\"foo\")"
"custom_mapped_retriever.invoke(\"foo\")"
]
}
],

View File

@@ -194,9 +194,7 @@
"metadata": {},
"outputs": [],
"source": [
"result = retriever.get_relevant_documents(\n",
" \"How many companies does Elon Musk run and name those?\"\n",
")"
"result = retriever.invoke(\"How many companies does Elon Musk run and name those?\")"
]
},
{

View File

@@ -328,7 +328,7 @@
"retriever = FAISS.from_documents(texts, embedding).as_retriever(search_kwargs={\"k\": 20})\n",
"\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"docs = retriever.get_relevant_documents(query)\n",
"docs = retriever.invoke(query)\n",
"pretty_print_docs(docs)"
]
},
@@ -375,7 +375,7 @@
" base_compressor=compressor, base_retriever=retriever\n",
")\n",
"\n",
"compressed_docs = compression_retriever.get_relevant_documents(\n",
"compressed_docs = compression_retriever.invoke(\n",
" \"What did the president say about Ketanji Jackson Brown\"\n",
")\n",
"print([doc.metadata[\"id\"] for doc in compressed_docs])"

View File

@@ -131,7 +131,7 @@
"metadata": {},
"outputs": [],
"source": [
"vecstore_retriever.get_relevant_documents(\"How does the multi vector retriever work\")"
"vecstore_retriever.invoke(\"How does the multi vector retriever work\")"
]
},
{
@@ -176,7 +176,7 @@
"metadata": {},
"outputs": [],
"source": [
"parent_retriever.get_relevant_documents(\"How does the multi vector retriever work\")"
"parent_retriever.invoke(\"How does the multi vector retriever work\")"
]
},
{

View File

@@ -114,7 +114,7 @@
},
"outputs": [],
"source": [
"retriever.get_relevant_documents(\"machine learning\")"
"retriever.invoke(\"machine learning\")"
]
},
{
@@ -149,7 +149,7 @@
" template=\"gdrive-query\", # Search everywhere\n",
" num_results=2, # But take only 2 documents\n",
")\n",
"for doc in retriever.get_relevant_documents(\"machine learning\"):\n",
"for doc in retriever.invoke(\"machine learning\"):\n",
" print(\"---\")\n",
" print(doc.page_content.strip()[:60] + \"...\")"
]
@@ -187,7 +187,7 @@
" includeItemsFromAllDrives=False,\n",
" supportsAllDrives=False,\n",
")\n",
"for doc in retriever.get_relevant_documents(\"machine learning\"):\n",
"for doc in retriever.invoke(\"machine learning\"):\n",
" print(f\"{doc.metadata['name']}:\")\n",
" print(\"---\")\n",
" print(doc.page_content.strip()[:60] + \"...\")"
@@ -222,7 +222,7 @@
" includeItemsFromAllDrives=False,\n",
" supportsAllDrives=False,\n",
")\n",
"retriever.get_relevant_documents(\"machine learning\")"
"retriever.invoke(\"machine learning\")"
]
}
],

View File

@@ -198,7 +198,7 @@
"source": [
"query = \"What are Alphabet's Other Bets?\"\n",
"\n",
"result = retriever.get_relevant_documents(query)\n",
"result = retriever.invoke(query)\n",
"for doc in result:\n",
" print(doc)"
]
@@ -225,7 +225,7 @@
" get_extractive_answers=True,\n",
")\n",
"\n",
"result = retriever.get_relevant_documents(query)\n",
"result = retriever.invoke(query)\n",
"for doc in result:\n",
" print(doc)"
]
@@ -251,7 +251,7 @@
" engine_data_type=1,\n",
")\n",
"\n",
"result = retriever.get_relevant_documents(query)\n",
"result = retriever.invoke(query)\n",
"for doc in result:\n",
" print(doc)"
]
@@ -279,7 +279,7 @@
" engine_data_type=2,\n",
")\n",
"\n",
"result = retriever.get_relevant_documents(query)\n",
"result = retriever.invoke(query)\n",
"for doc in result:\n",
" print(doc)"
]
@@ -305,7 +305,7 @@
" engine_data_type=3,\n",
")\n",
"\n",
"result = retriever.get_relevant_documents(query)\n",
"result = retriever.invoke(query)\n",
"for doc in result:\n",
" print(doc)"
]
@@ -329,7 +329,7 @@
" project_id=PROJECT_ID, location_id=LOCATION_ID, data_store_id=DATA_STORE_ID\n",
")\n",
"\n",
"result = retriever.get_relevant_documents(query)\n",
"result = retriever.invoke(query)\n",
"for doc in result:\n",
" print(doc)"
]

View File

@@ -92,7 +92,7 @@
"retriever = KayAiRetriever.create(\n",
" dataset_id=\"company\", data_types=[\"10-K\", \"10-Q\", \"PressRelease\"], num_contexts=3\n",
")\n",
"docs = retriever.get_relevant_documents(\n",
"docs = retriever.invoke(\n",
" \"What were the biggest strategy changes and partnerships made by Roku in 2023??\"\n",
")"
]

View File

@@ -62,7 +62,7 @@
"metadata": {},
"outputs": [],
"source": [
"result = retriever.get_relevant_documents(\"foo\")"
"result = retriever.invoke(\"foo\")"
]
},
{

View File

@@ -296,7 +296,7 @@
"retriever = FAISS.from_documents(texts, embedding).as_retriever(search_kwargs={\"k\": 20})\n",
"\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"docs = retriever.get_relevant_documents(query)\n",
"docs = retriever.invoke(query)\n",
"pretty_print_docs(docs)"
]
},
@@ -350,7 +350,7 @@
" base_compressor=compressor, base_retriever=retriever\n",
")\n",
"\n",
"compressed_docs = compression_retriever.get_relevant_documents(\n",
"compressed_docs = compression_retriever.invoke(\n",
" \"What did the president say about Ketanji Jackson Brown\"\n",
")\n",
"pretty_print_docs(compressed_docs)"

View File

@@ -123,7 +123,7 @@
}
],
"source": [
"retriever.get_relevant_documents(\"foo1\")"
"retriever.invoke(\"foo1\")"
]
},
{

View File

@@ -109,7 +109,7 @@
}
],
"source": [
"retriever.get_relevant_documents(query=\"LangChain\", doc_content_chars_max=100)"
"retriever.invoke(\"LangChain\", doc_content_chars_max=100)"
]
},
{

View File

@@ -295,7 +295,7 @@
"metadata": {},
"outputs": [],
"source": [
"result = retriever.get_relevant_documents(\"foo\")"
"result = retriever.invoke(\"foo\")"
]
},
{

View File

@@ -53,7 +53,7 @@
}
],
"source": [
"retriever.get_relevant_documents(\"chatgpt\")"
"retriever.invoke(\"chatgpt\")"
]
},
{

View File

@@ -227,7 +227,7 @@
}
],
"source": [
"retriever.get_relevant_documents(\n",
"retriever.invoke(\n",
" \"Life and ethical dilemmas of AI\",\n",
")"
]

View File

@@ -100,7 +100,7 @@
}
],
"source": [
"docs = retriever_from_llm.get_relevant_documents(\n",
"docs = retriever_from_llm.invoke(\n",
" \"Hi I'm Lance. What are the approaches to Task Decomposition?\"\n",
")"
]
@@ -120,7 +120,7 @@
}
],
"source": [
"docs = retriever_from_llm.get_relevant_documents(\n",
"docs = retriever_from_llm.invoke(\n",
" \"I live in San Francisco. What are the Types of Memory?\"\n",
")"
]
@@ -182,7 +182,7 @@
}
],
"source": [
"docs = retriever_from_llm_chain.get_relevant_documents(\n",
"docs = retriever_from_llm_chain.invoke(\n",
" \"Hi I'm Lance. What is Maximum Inner Product Search?\"\n",
")"
]

View File

@@ -270,7 +270,7 @@
],
"source": [
"# This example only specifies a relevant query\n",
"retriever.get_relevant_documents(\"What are some movies about dinosaurs\")"
"retriever.invoke(\"What are some movies about dinosaurs\")"
]
},
{
@@ -300,7 +300,7 @@
],
"source": [
"# This example only specifies a filter\n",
"retriever.get_relevant_documents(\"I want to watch a movie rated higher than 8.5\")\n",
"retriever.invoke(\"I want to watch a movie rated higher than 8.5\")\n",
"\n",
"# in case if this example errored out, consider installing libdeeplake manually: `pip install libdeeplake`, and then restart notebook."
]
@@ -331,7 +331,7 @@
],
"source": [
"# This example specifies a query and a filter\n",
"retriever.get_relevant_documents(\"Has Greta Gerwig directed any movies about women\")"
"retriever.invoke(\"Has Greta Gerwig directed any movies about women\")"
]
},
{
@@ -360,9 +360,7 @@
],
"source": [
"# This example specifies a composite filter\n",
"retriever.get_relevant_documents(\n",
" \"What's a highly rated (above 8.5) science fiction film?\"\n",
")"
"retriever.invoke(\"What's a highly rated (above 8.5) science fiction film?\")"
]
},
{
@@ -391,7 +389,7 @@
],
"source": [
"# This example specifies a query and composite filter\n",
"retriever.get_relevant_documents(\n",
"retriever.invoke(\n",
" \"What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated\"\n",
")"
]
@@ -457,7 +455,7 @@
],
"source": [
"# This example only specifies a relevant query\n",
"retriever.get_relevant_documents(\"what are two movies about dinosaurs\")"
"retriever.invoke(\"what are two movies about dinosaurs\")"
]
}
],

View File

@@ -192,7 +192,7 @@
"outputs": [],
"source": [
"# This example only specifies a relevant query\n",
"retriever.get_relevant_documents(\"What are some movies about dinosaurs?\")"
"retriever.invoke(\"What are some movies about dinosaurs?\")"
]
},
{
@@ -202,7 +202,7 @@
"outputs": [],
"source": [
"# This example specifies a filter\n",
"retriever.get_relevant_documents(\"I want to watch a movie rated higher than 8.5\")"
"retriever.invoke(\"I want to watch a movie rated higher than 8.5\")"
]
},
{
@@ -212,7 +212,7 @@
"outputs": [],
"source": [
"# This example only specifies a query and a filter\n",
"retriever.get_relevant_documents(\"Has Greta Gerwig directed any movies about women\")"
"retriever.invoke(\"Has Greta Gerwig directed any movies about women\")"
]
},
{
@@ -222,9 +222,7 @@
"outputs": [],
"source": [
"# This example specifies a composite filter\n",
"retriever.get_relevant_documents(\n",
" \"What's a highly rated (above 8.5), science fiction movie ?\"\n",
")"
"retriever.invoke(\"What's a highly rated (above 8.5), science fiction movie ?\")"
]
},
{
@@ -234,7 +232,7 @@
"outputs": [],
"source": [
"# This example specifies a query and composite filter\n",
"retriever.get_relevant_documents(\n",
"retriever.invoke(\n",
" \"What's a movie about toys after 1990 but before 2005, and is animated\"\n",
")"
]
@@ -273,7 +271,7 @@
"outputs": [],
"source": [
"# This example only specifies a relevant query\n",
"retriever.get_relevant_documents(\"What are two movies about dinosaurs?\")"
"retriever.invoke(\"What are two movies about dinosaurs?\")"
]
},
{

View File

@@ -232,7 +232,7 @@
],
"source": [
"# This example only specifies a relevant query\n",
"retriever.get_relevant_documents(\"What are some movies about dinosaurs\")"
"retriever.invoke(\"What are some movies about dinosaurs\")"
]
},
{
@@ -262,7 +262,7 @@
],
"source": [
"# This example only specifies a filter\n",
"retriever.get_relevant_documents(\"I want to watch a movie rated higher than 8.5\")"
"retriever.invoke(\"I want to watch a movie rated higher than 8.5\")"
]
},
{
@@ -291,7 +291,7 @@
],
"source": [
"# This example specifies a query and a filter\n",
"retriever.get_relevant_documents(\"Has Greta Gerwig directed any movies about women\")"
"retriever.invoke(\"Has Greta Gerwig directed any movies about women\")"
]
},
{
@@ -320,9 +320,7 @@
],
"source": [
"# This example specifies a composite filter\n",
"retriever.get_relevant_documents(\n",
" \"What's a highly rated (above 8.5) science fiction film?\"\n",
")"
"retriever.invoke(\"What's a highly rated (above 8.5) science fiction film?\")"
]
},
{
@@ -351,7 +349,7 @@
],
"source": [
"# This example specifies a query and composite filter\n",
"retriever.get_relevant_documents(\n",
"retriever.invoke(\n",
" \"What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated\"\n",
")"
]
@@ -418,7 +416,7 @@
],
"source": [
"# This example only specifies a relevant query\n",
"retriever.get_relevant_documents(\"what are two movies about dinosaurs\")"
"retriever.invoke(\"what are two movies about dinosaurs\")"
]
},
{

View File

@@ -269,7 +269,7 @@
],
"source": [
"# This example only specifies a relevant query\n",
"retriever.get_relevant_documents(\"What are some movies about dinosaurs\")"
"retriever.invoke(\"What are some movies about dinosaurs\")"
]
},
{
@@ -308,7 +308,7 @@
],
"source": [
"# This example only specifies a filter\n",
"retriever.get_relevant_documents(\"I want to watch a movie rated higher than 8.5\")"
"retriever.invoke(\"I want to watch a movie rated higher than 8.5\")"
]
},
{
@@ -346,7 +346,7 @@
],
"source": [
"# This example specifies a query and a filter\n",
"retriever.get_relevant_documents(\"Has Greta Gerwig directed any movies about women\")"
"retriever.invoke(\"Has Greta Gerwig directed any movies about women\")"
]
},
{
@@ -384,9 +384,7 @@
],
"source": [
"# This example specifies a composite filter\n",
"retriever.get_relevant_documents(\n",
" \"What's a highly rated (above 8.5) science fiction film?\"\n",
")"
"retriever.invoke(\"What's a highly rated (above 8.5) science fiction film?\")"
]
},
{
@@ -468,7 +466,7 @@
],
"source": [
"# This example only specifies a relevant query\n",
"retriever.get_relevant_documents(\"what are two movies about dinosaurs\")"
"retriever.invoke(\"what are two movies about dinosaurs\")"
]
},
{

View File

@@ -352,7 +352,7 @@
],
"source": [
"# This example only specifies a relevant query\n",
"retriever.get_relevant_documents(\"What are some movies about dinosaurs\")"
"retriever.invoke(\"What are some movies about dinosaurs\")"
]
},
{
@@ -384,7 +384,7 @@
],
"source": [
"# This example specifies a filter\n",
"retriever.get_relevant_documents(\"What are some highly rated movies (above 9)?\")"
"retriever.invoke(\"What are some highly rated movies (above 9)?\")"
]
},
{
@@ -416,7 +416,7 @@
],
"source": [
"# This example specifies both a relevant query and a filter\n",
"retriever.get_relevant_documents(\"What are the thriller movies that are highly rated?\")"
"retriever.invoke(\"What are the thriller movies that are highly rated?\")"
]
},
{
@@ -438,7 +438,7 @@
],
"source": [
"# This example specifies a query and composite filter\n",
"retriever.get_relevant_documents(\n",
"retriever.invoke(\n",
" \"What's a movie after 1990 but before 2005 that's all about dinosaurs, \\\n",
" and preferably has a lot of action\"\n",
")"
@@ -520,7 +520,7 @@
},
"outputs": [],
"source": [
"retriever.get_relevant_documents(\"What are two movies about dinosaurs?\")"
"retriever.invoke(\"What are two movies about dinosaurs?\")"
]
}
],

View File

@@ -265,7 +265,7 @@
],
"source": [
"# This example only specifies a relevant query\n",
"retriever.get_relevant_documents(\"What are some movies about dinosaurs\")"
"retriever.invoke(\"What are some movies about dinosaurs\")"
]
},
{
@@ -296,7 +296,7 @@
],
"source": [
"# This example only specifies a filter\n",
"retriever.get_relevant_documents(\"I want to watch a movie rated higher than 8.5\")"
"retriever.invoke(\"I want to watch a movie rated higher than 8.5\")"
]
},
{
@@ -326,7 +326,7 @@
],
"source": [
"# This example specifies a query and a filter\n",
"retriever.get_relevant_documents(\"Has Greta Gerwig directed any movies about women\")"
"retriever.invoke(\"Has Greta Gerwig directed any movies about women\")"
]
},
{
@@ -357,9 +357,7 @@
],
"source": [
"# This example specifies a composite filter\n",
"retriever.get_relevant_documents(\n",
" \"What's a highly rated (above 8.5) science fiction film?\"\n",
")"
"retriever.invoke(\"What's a highly rated (above 8.5) science fiction film?\")"
]
},
{
@@ -389,7 +387,7 @@
],
"source": [
"# This example specifies a query and composite filter\n",
"retriever.get_relevant_documents(\n",
"retriever.invoke(\n",
" \"What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated\"\n",
")"
]
@@ -450,7 +448,7 @@
],
"source": [
"# This example only specifies a relevant query\n",
"retriever.get_relevant_documents(\"What are two movies about dinosaurs\")"
"retriever.invoke(\"What are two movies about dinosaurs\")"
]
}
],

View File

@@ -197,7 +197,7 @@
],
"source": [
"# This example only specifies a relevant query\n",
"retriever.get_relevant_documents(\"What are some movies about dinosaurs\")"
"retriever.invoke(\"What are some movies about dinosaurs\")"
]
},
{
@@ -219,7 +219,7 @@
],
"source": [
"# This example specifies a query and a filter\n",
"retriever.get_relevant_documents(\"Has Greta Gerwig directed any movies about women\")"
"retriever.invoke(\"Has Greta Gerwig directed any movies about women\")"
]
},
{
@@ -275,7 +275,7 @@
],
"source": [
"# This example only specifies a relevant query\n",
"retriever.get_relevant_documents(\"what are two movies about dinosaurs\")"
"retriever.invoke(\"what are two movies about dinosaurs\")"
]
},
{
@@ -305,7 +305,7 @@
}
],
"source": [
"retriever.get_relevant_documents(\n",
"retriever.invoke(\n",
" \"what animated or comedy movies have been released in the last 30 years about animated toys?\"\n",
")"
]

View File

@@ -190,7 +190,7 @@
],
"source": [
"# This example only specifies a relevant query\n",
"retriever.get_relevant_documents(\"What are some movies about dinosaurs\")"
"retriever.invoke(\"What are some movies about dinosaurs\")"
]
},
{
@@ -219,7 +219,7 @@
],
"source": [
"# This example specifies a filter\n",
"retriever.get_relevant_documents(\"What are some highly rated movies (above 9)?\")"
"retriever.invoke(\"What are some highly rated movies (above 9)?\")"
]
},
{
@@ -248,9 +248,7 @@
],
"source": [
"# This example only specifies a query and a filter\n",
"retriever.get_relevant_documents(\n",
" \"I want to watch a movie about toys rated higher than 9\"\n",
")"
"retriever.invoke(\"I want to watch a movie about toys rated higher than 9\")"
]
},
{
@@ -278,9 +276,7 @@
],
"source": [
"# This example specifies a composite filter\n",
"retriever.get_relevant_documents(\n",
" \"What's a highly rated (above or equal 9) thriller film?\"\n",
")"
"retriever.invoke(\"What's a highly rated (above or equal 9) thriller film?\")"
]
},
{
@@ -308,7 +304,7 @@
],
"source": [
"# This example specifies a query and composite filter\n",
"retriever.get_relevant_documents(\n",
"retriever.invoke(\n",
" \"What's a movie after 1990 but before 2005 that's all about dinosaurs, \\\n",
" and preferably has a lot of action\"\n",
")"
@@ -367,7 +363,7 @@
],
"source": [
"# This example only specifies a relevant query\n",
"retriever.get_relevant_documents(\"What are two movies about dinosaurs?\")"
"retriever.invoke(\"What are two movies about dinosaurs?\")"
]
}
],

View File

@@ -209,7 +209,7 @@
"outputs": [],
"source": [
"# This example only specifies a relevant query\n",
"retriever.get_relevant_documents(\"What are some movies about dinosaurs\")"
"retriever.invoke(\"What are some movies about dinosaurs\")"
]
},
{
@@ -219,7 +219,7 @@
"outputs": [],
"source": [
"# This example specifies a filter\n",
"retriever.get_relevant_documents(\"What are some highly rated movies (above 9)?\")"
"retriever.invoke(\"What are some highly rated movies (above 9)?\")"
]
},
{
@@ -229,9 +229,7 @@
"outputs": [],
"source": [
"# This example only specifies a query and a filter\n",
"retriever.get_relevant_documents(\n",
" \"I want to watch a movie about toys rated higher than 9\"\n",
")"
"retriever.invoke(\"I want to watch a movie about toys rated higher than 9\")"
]
},
{
@@ -241,9 +239,7 @@
"outputs": [],
"source": [
"# This example specifies a composite filter\n",
"retriever.get_relevant_documents(\n",
" \"What's a highly rated (above or equal 9) thriller film?\"\n",
")"
"retriever.invoke(\"What's a highly rated (above or equal 9) thriller film?\")"
]
},
{
@@ -253,7 +249,7 @@
"outputs": [],
"source": [
"# This example specifies a query and composite filter\n",
"retriever.get_relevant_documents(\n",
"retriever.invoke(\n",
" \"What's a movie after 1990 but before 2005 that's all about dinosaurs, \\\n",
" and preferably has a lot of action\"\n",
")"
@@ -293,7 +289,7 @@
"outputs": [],
"source": [
"# This example only specifies a relevant query\n",
"retriever.get_relevant_documents(\"What are two movies about dinosaurs?\")"
"retriever.invoke(\"What are two movies about dinosaurs?\")"
]
}
],

View File

@@ -216,7 +216,7 @@
"outputs": [],
"source": [
"# This example only specifies a relevant query\n",
"retriever.get_relevant_documents(\"What are some movies about dinosaurs\")"
"retriever.invoke(\"What are some movies about dinosaurs\")"
]
},
{
@@ -227,7 +227,7 @@
"outputs": [],
"source": [
"# This example only specifies a filter\n",
"retriever.get_relevant_documents(\"I want to watch a movie rated higher than 8.5\")"
"retriever.invoke(\"I want to watch a movie rated higher than 8.5\")"
]
},
{
@@ -238,7 +238,7 @@
"outputs": [],
"source": [
"# This example specifies a query and a filter\n",
"retriever.get_relevant_documents(\"Has Greta Gerwig directed any movies about women\")"
"retriever.invoke(\"Has Greta Gerwig directed any movies about women\")"
]
},
{
@@ -249,9 +249,7 @@
"outputs": [],
"source": [
"# This example specifies a composite filter\n",
"retriever.get_relevant_documents(\n",
" \"What's a highly rated (above 8.5) science fiction film?\"\n",
")"
"retriever.invoke(\"What's a highly rated (above 8.5) science fiction film?\")"
]
},
{
@@ -262,7 +260,7 @@
"outputs": [],
"source": [
"# This example specifies a query and composite filter\n",
"retriever.get_relevant_documents(\n",
"retriever.invoke(\n",
" \"What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated\"\n",
")"
]
@@ -285,7 +283,7 @@
"outputs": [],
"source": [
"# You can use length(genres) to do anything you want\n",
"retriever.get_relevant_documents(\"What's a movie that have more than 1 genres?\")"
"retriever.invoke(\"What's a movie that have more than 1 genres?\")"
]
},
{
@@ -296,7 +294,7 @@
"outputs": [],
"source": [
"# Fine-grained datetime? You got it already.\n",
"retriever.get_relevant_documents(\"What's a movie that release after feb 1995?\")"
"retriever.invoke(\"What's a movie that release after feb 1995?\")"
]
},
{
@@ -307,7 +305,7 @@
"outputs": [],
"source": [
"# Don't know what your exact filter should be? Use string pattern match!\n",
"retriever.get_relevant_documents(\"What's a movie whose name is like Andrei?\")"
"retriever.invoke(\"What's a movie whose name is like Andrei?\")"
]
},
{
@@ -318,9 +316,7 @@
"outputs": [],
"source": [
"# Contain works for lists: so you can match a list with contain comparator!\n",
"retriever.get_relevant_documents(\n",
" \"What's a movie who has genres science fiction and adventure?\"\n",
")"
"retriever.invoke(\"What's a movie who has genres science fiction and adventure?\")"
]
},
{
@@ -364,7 +360,7 @@
"outputs": [],
"source": [
"# This example only specifies a relevant query\n",
"retriever.get_relevant_documents(\"what are two movies about dinosaurs\")"
"retriever.invoke(\"what are two movies about dinosaurs\")"
]
}
],

View File

@@ -203,7 +203,7 @@
],
"source": [
"# This example only specifies a relevant query\n",
"retriever.get_relevant_documents(\"What are some movies about dinosaurs\")"
"retriever.invoke(\"What are some movies about dinosaurs\")"
]
},
{
@@ -233,7 +233,7 @@
],
"source": [
"# This example only specifies a filter\n",
"retriever.get_relevant_documents(\"I want to watch a movie rated higher than 8.5\")"
"retriever.invoke(\"I want to watch a movie rated higher than 8.5\")"
]
},
{
@@ -262,7 +262,7 @@
],
"source": [
"# This example specifies a query and a filter\n",
"retriever.get_relevant_documents(\"Has Greta Gerwig directed any movies about women\")"
"retriever.invoke(\"Has Greta Gerwig directed any movies about women\")"
]
},
{
@@ -291,9 +291,7 @@
],
"source": [
"# This example specifies a composite filter\n",
"retriever.get_relevant_documents(\n",
" \"What's a highly rated (above 8.5) science fiction film?\"\n",
")"
"retriever.invoke(\"What's a highly rated (above 8.5) science fiction film?\")"
]
},
{
@@ -356,7 +354,7 @@
],
"source": [
"# This example only specifies a relevant query\n",
"retriever.get_relevant_documents(\"what are two movies about dinosaurs\")"
"retriever.invoke(\"what are two movies about dinosaurs\")"
]
},
{
@@ -393,7 +391,7 @@
}
],
"source": [
"retriever.get_relevant_documents(\n",
"retriever.invoke(\n",
" \"what animated or comedy movies have been released in the last 30 years about animated toys?\"\n",
")"
]

View File

@@ -188,7 +188,7 @@
"outputs": [],
"source": [
"# This example only specifies a relevant query\n",
"retriever.get_relevant_documents(\"What are some movies about dinosaurs\")"
"retriever.invoke(\"What are some movies about dinosaurs\")"
]
},
{
@@ -199,7 +199,7 @@
"outputs": [],
"source": [
"# This example only specifies a filter\n",
"retriever.get_relevant_documents(\"I want to watch a movie rated higher than 8.5\")"
"retriever.invoke(\"I want to watch a movie rated higher than 8.5\")"
]
},
{
@@ -210,7 +210,7 @@
"outputs": [],
"source": [
"# This example specifies a query and a filter\n",
"retriever.get_relevant_documents(\"Has Greta Gerwig directed any movies about women\")"
"retriever.invoke(\"Has Greta Gerwig directed any movies about women\")"
]
},
{
@@ -221,9 +221,7 @@
"outputs": [],
"source": [
"# This example specifies a composite filter\n",
"retriever.get_relevant_documents(\n",
" \"What's a highly rated (above 8.5) science fiction film?\"\n",
")"
"retriever.invoke(\"What's a highly rated (above 8.5) science fiction film?\")"
]
},
{
@@ -234,7 +232,7 @@
"outputs": [],
"source": [
"# This example specifies a query and composite filter\n",
"retriever.get_relevant_documents(\n",
"retriever.invoke(\n",
" \"What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated\"\n",
")"
]
@@ -280,7 +278,7 @@
"outputs": [],
"source": [
"# This example only specifies a relevant query\n",
"retriever.get_relevant_documents(\"what are two movies about dinosaurs\")"
"retriever.invoke(\"what are two movies about dinosaurs\")"
]
}
],

View File

@@ -214,7 +214,7 @@
],
"source": [
"# This example only specifies a relevant query\n",
"retriever.get_relevant_documents(\"What are some movies about dinosaurs\")"
"retriever.invoke(\"What are some movies about dinosaurs\")"
]
},
{
@@ -244,7 +244,7 @@
],
"source": [
"# This example only specifies a filter\n",
"retriever.get_relevant_documents(\"I want to watch a movie rated higher than 8.5\")"
"retriever.invoke(\"I want to watch a movie rated higher than 8.5\")"
]
},
{
@@ -273,7 +273,7 @@
],
"source": [
"# This example specifies a query and a filter\n",
"retriever.get_relevant_documents(\"Has Greta Gerwig directed any movies about women\")"
"retriever.invoke(\"Has Greta Gerwig directed any movies about women\")"
]
},
{
@@ -302,9 +302,7 @@
],
"source": [
"# This example specifies a composite filter\n",
"retriever.get_relevant_documents(\n",
" \"What's a highly rated (above 8.5) science fiction film?\"\n",
")"
"retriever.invoke(\"What's a highly rated (above 8.5) science fiction film?\")"
]
},
{
@@ -333,7 +331,7 @@
],
"source": [
"# This example specifies a query and composite filter\n",
"retriever.get_relevant_documents(\n",
"retriever.invoke(\n",
" \"What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated\"\n",
")"
]
@@ -375,7 +373,7 @@
"outputs": [],
"source": [
"# This example only specifies a relevant query\n",
"retriever.get_relevant_documents(\"What are two movies about dinosaurs\")"
"retriever.invoke(\"What are two movies about dinosaurs\")"
]
}
],

View File

@@ -214,7 +214,7 @@
],
"source": [
"# This example only specifies a relevant query\n",
"retriever.get_relevant_documents(\"What are some movies about dinosaurs\")"
"retriever.invoke(\"What are some movies about dinosaurs\")"
]
},
{
@@ -244,7 +244,7 @@
],
"source": [
"# This example only specifies a filter\n",
"retriever.get_relevant_documents(\"I want to watch a movie rated higher than 8.5\")"
"retriever.invoke(\"I want to watch a movie rated higher than 8.5\")"
]
},
{
@@ -273,7 +273,7 @@
],
"source": [
"# This example specifies a query and a filter\n",
"retriever.get_relevant_documents(\"Has Greta Gerwig directed any movies about women\")"
"retriever.invoke(\"Has Greta Gerwig directed any movies about women\")"
]
},
{
@@ -302,9 +302,7 @@
],
"source": [
"# This example specifies a composite filter\n",
"retriever.get_relevant_documents(\n",
" \"What's a highly rated (above 8.5) science fiction film?\"\n",
")"
"retriever.invoke(\"What's a highly rated (above 8.5) science fiction film?\")"
]
},
{
@@ -333,7 +331,7 @@
],
"source": [
"# This example specifies a query and composite filter\n",
"retriever.get_relevant_documents(\n",
"retriever.invoke(\n",
" \"What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated\"\n",
")"
]
@@ -399,7 +397,7 @@
],
"source": [
"# This example only specifies a relevant query\n",
"retriever.get_relevant_documents(\"what are two movies about dinosaurs\")"
"retriever.invoke(\"what are two movies about dinosaurs\")"
]
}
],

View File

@@ -279,7 +279,7 @@
],
"source": [
"# This example only specifies a relevant query\n",
"retriever.get_relevant_documents(\"What are some movies about dinosaurs\")"
"retriever.invoke(\"What are some movies about dinosaurs\")"
]
},
{
@@ -310,7 +310,7 @@
],
"source": [
"# This example only specifies a filter\n",
"retriever.get_relevant_documents(\"I want to watch a movie rated higher than 8.4\")"
"retriever.invoke(\"I want to watch a movie rated higher than 8.4\")"
]
},
{
@@ -339,7 +339,7 @@
],
"source": [
"# This example specifies a query and a filter\n",
"retriever.get_relevant_documents(\"Has Greta Gerwig directed any movies about women\")"
"retriever.invoke(\"Has Greta Gerwig directed any movies about women\")"
]
},
{
@@ -369,9 +369,7 @@
],
"source": [
"# This example specifies a composite filter\n",
"retriever.get_relevant_documents(\n",
" \"What's a highly rated (above 8.5) science fiction film?\"\n",
")"
"retriever.invoke(\"What's a highly rated (above 8.5) science fiction film?\")"
]
},
{
@@ -400,7 +398,7 @@
],
"source": [
"# This example specifies a query and composite filter\n",
"retriever.get_relevant_documents(\n",
"retriever.invoke(\n",
" \"What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated\"\n",
")"
]
@@ -465,7 +463,7 @@
],
"source": [
"# This example only specifies a relevant query\n",
"retriever.get_relevant_documents(\"what are two movies about dinosaurs\")"
"retriever.invoke(\"what are two movies about dinosaurs\")"
]
}
],

View File

@@ -374,7 +374,7 @@
],
"source": [
"# This example only specifies a relevant query\n",
"retriever.get_relevant_documents(\"What are some movies about dinosaurs\")"
"retriever.invoke(\"What are some movies about dinosaurs\")"
]
},
{
@@ -404,7 +404,7 @@
],
"source": [
"# This example only specifies a filter\n",
"retriever.get_relevant_documents(\"I want to watch a movie rated higher than 8.5\")"
"retriever.invoke(\"I want to watch a movie rated higher than 8.5\")"
]
},
{
@@ -433,7 +433,7 @@
],
"source": [
"# This example specifies a query and a filter\n",
"retriever.get_relevant_documents(\"Has Greta Gerwig directed any movies about women?\")"
"retriever.invoke(\"Has Greta Gerwig directed any movies about women?\")"
]
},
{
@@ -462,9 +462,7 @@
],
"source": [
"# This example specifies a composite filter\n",
"retriever.get_relevant_documents(\n",
" \"What's a highly rated (above 8.5) science fiction film?\"\n",
")"
"retriever.invoke(\"What's a highly rated (above 8.5) science fiction film?\")"
]
},
{
@@ -493,7 +491,7 @@
],
"source": [
"# This example specifies a query and composite filter\n",
"retriever.get_relevant_documents(\n",
"retriever.invoke(\n",
" \"What's a movie after 1990 but before (or on) 2005 that's all about toys, and preferably is animated\"\n",
")"
]
@@ -558,7 +556,7 @@
],
"source": [
"# This example only specifies a relevant query\n",
"retriever.get_relevant_documents(\"what are two movies about dinosaurs\")"
"retriever.invoke(\"what are two movies about dinosaurs\")"
]
}
],

View File

@@ -297,7 +297,7 @@
],
"source": [
"# This example only specifies a relevant query\n",
"retriever.get_relevant_documents(\"movies about a superhero\")"
"retriever.invoke(\"movies about a superhero\")"
]
},
{
@@ -323,7 +323,7 @@
],
"source": [
"# This example only specifies a filter\n",
"retriever.get_relevant_documents(\"movies that were released after 2010\")"
"retriever.invoke(\"movies that were released after 2010\")"
]
},
{
@@ -349,9 +349,7 @@
],
"source": [
"# This example specifies both a relevant query and a filter\n",
"retriever.get_relevant_documents(\n",
" \"movies about a superhero which were released after 2010\"\n",
")"
"retriever.invoke(\"movies about a superhero which were released after 2010\")"
]
},
{
@@ -413,7 +411,7 @@
}
],
"source": [
"retriever.get_relevant_documents(\"what are two movies about a superhero\")"
"retriever.invoke(\"what are two movies about a superhero\")"
]
}
],

View File

@@ -334,7 +334,7 @@
],
"source": [
"# This example only specifies a relevant query\n",
"retriever.get_relevant_documents(\"What are some movies about dinosaurs\")"
"retriever.invoke(\"What are some movies about dinosaurs\")"
]
},
{
@@ -366,7 +366,7 @@
],
"source": [
"# This example only specifies a filter\n",
"retriever.get_relevant_documents(\"I want to watch a movie rated higher than 8.5\")"
"retriever.invoke(\"I want to watch a movie rated higher than 8.5\")"
]
},
{
@@ -396,7 +396,7 @@
],
"source": [
"# This example specifies a query and a filter\n",
"retriever.get_relevant_documents(\"Has Greta Gerwig directed any movies about women\")"
"retriever.invoke(\"Has Greta Gerwig directed any movies about women\")"
]
},
{
@@ -426,9 +426,7 @@
],
"source": [
"# This example specifies a composite filter\n",
"retriever.get_relevant_documents(\n",
" \"What's a highly rated (above 8.5) science fiction film?\"\n",
")"
"retriever.invoke(\"What's a highly rated (above 8.5) science fiction film?\")"
]
},
{
@@ -457,7 +455,7 @@
],
"source": [
"# This example specifies a query and composite filter\n",
"retriever.get_relevant_documents(\n",
"retriever.invoke(\n",
" \"What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated\"\n",
")"
]
@@ -523,7 +521,7 @@
],
"source": [
"# This example specifies a query with a LIMIT value\n",
"retriever.get_relevant_documents(\"what are two movies about dinosaurs\")"
"retriever.invoke(\"what are two movies about dinosaurs\")"
]
}
],

View File

@@ -225,7 +225,7 @@
],
"source": [
"# This example only specifies a relevant query\n",
"retriever.get_relevant_documents(\"What are some movies about dinosaurs\")"
"retriever.invoke(\"What are some movies about dinosaurs\")"
]
},
{
@@ -248,7 +248,7 @@
],
"source": [
"# This example only specifies a filter\n",
"retriever.get_relevant_documents(\"I want to watch a movie rated higher than 8.5\")"
"retriever.invoke(\"I want to watch a movie rated higher than 8.5\")"
]
},
{
@@ -270,7 +270,7 @@
],
"source": [
"# This example specifies a query and a filter\n",
"retriever.get_relevant_documents(\"Has Greta Gerwig directed any movies about women\")"
"retriever.invoke(\"Has Greta Gerwig directed any movies about women\")"
]
},
{
@@ -292,9 +292,7 @@
],
"source": [
"# This example specifies a composite filter\n",
"retriever.get_relevant_documents(\n",
" \"What's a highly rated (above 8.5) science fiction film?\"\n",
")"
"retriever.invoke(\"What's a highly rated (above 8.5) science fiction film?\")"
]
},
{
@@ -316,7 +314,7 @@
],
"source": [
"# This example specifies a query and composite filter\n",
"retriever.get_relevant_documents(\n",
"retriever.invoke(\n",
" \"What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated\"\n",
")"
]
@@ -374,7 +372,7 @@
],
"source": [
"# This example only specifies a relevant query\n",
"retriever.get_relevant_documents(\"what are two movies about dinosaurs\")"
"retriever.invoke(\"what are two movies about dinosaurs\")"
]
}
],

View File

@@ -184,7 +184,7 @@
],
"source": [
"# This example only specifies a relevant query\n",
"retriever.get_relevant_documents(\"What are some movies about dinosaurs\")"
"retriever.invoke(\"What are some movies about dinosaurs\")"
]
},
{
@@ -213,7 +213,7 @@
],
"source": [
"# This example specifies a query and a filter\n",
"retriever.get_relevant_documents(\"Has Greta Gerwig directed any movies about women\")"
"retriever.invoke(\"Has Greta Gerwig directed any movies about women\")"
]
},
{
@@ -276,7 +276,7 @@
],
"source": [
"# This example only specifies a relevant query\n",
"retriever.get_relevant_documents(\"what are two movies about dinosaurs\")"
"retriever.invoke(\"what are two movies about dinosaurs\")"
]
}
],

View File

@@ -91,9 +91,7 @@
"metadata": {},
"outputs": [],
"source": [
"result = retriever.get_relevant_documents(\n",
" \"What did the president say about Ketanji Brown Jackson\"\n",
")\n",
"result = retriever.invoke(\"What did the president say about Ketanji Brown Jackson\")\n",
"print(docs[0].page_content)"
]
}

View File

@@ -125,7 +125,7 @@
},
"outputs": [],
"source": [
"result = retriever.get_relevant_documents(\"foo\")"
"result = retriever.invoke(\"foo\")"
]
},
{

View File

@@ -105,7 +105,7 @@
},
"outputs": [],
"source": [
"result = retriever.get_relevant_documents(\"foo\")"
"result = retriever.invoke(\"foo\")"
]
},
{
@@ -185,7 +185,7 @@
}
],
"source": [
"retriever_copy.get_relevant_documents(\"foo\")"
"retriever_copy.invoke(\"foo\")"
]
},
{

View File

@@ -95,7 +95,7 @@
"outputs": [],
"source": [
"# This returns a list of LangChain Document objects\n",
"documents = retriever.get_relevant_documents(\"query\", top_k=10)"
"documents = retriever.invoke(\"query\", top_k=10)"
]
},
{

View File

@@ -110,7 +110,7 @@
},
"outputs": [],
"source": [
"retriever.get_relevant_documents(\"what is vespa?\")"
"retriever.invoke(\"what is vespa?\")"
]
}
],

View File

@@ -202,7 +202,7 @@
}
],
"source": [
"retriever.get_relevant_documents(\"the ethical implications of AI\")"
"retriever.invoke(\"the ethical implications of AI\")"
]
},
{
@@ -233,7 +233,7 @@
}
],
"source": [
"retriever.get_relevant_documents(\n",
"retriever.invoke(\n",
" \"AI integration in society\",\n",
" where_filter={\n",
" \"path\": [\"author\"],\n",
@@ -272,7 +272,7 @@
}
],
"source": [
"retriever.get_relevant_documents(\n",
"retriever.invoke(\n",
" \"AI integration in society\",\n",
" score=True,\n",
")"

View File

@@ -98,7 +98,7 @@
"metadata": {},
"outputs": [],
"source": [
"docs = retriever.get_relevant_documents(query=\"HUNTER X HUNTER\")"
"docs = retriever.invoke(\"HUNTER X HUNTER\")"
]
},
{

View File

@@ -314,7 +314,7 @@
" api_key=zep_api_key,\n",
")\n",
"\n",
"await zep_retriever.aget_relevant_documents(\"Who wrote Parable of the Sower?\")"
"await zep_retriever.ainvoke(\"Who wrote Parable of the Sower?\")"
]
},
{
@@ -355,7 +355,7 @@
}
],
"source": [
"zep_retriever.get_relevant_documents(\"Who wrote Parable of the Sower?\")"
"zep_retriever.invoke(\"Who wrote Parable of the Sower?\")"
]
},
{
@@ -407,7 +407,7 @@
" mmr_lambda=0.5,\n",
")\n",
"\n",
"await zep_retriever.aget_relevant_documents(\"Who wrote Parable of the Sower?\")"
"await zep_retriever.ainvoke(\"Who wrote Parable of the Sower?\")"
]
},
{
@@ -445,9 +445,7 @@
"source": [
"filter = {\"where\": {\"jsonpath\": '$[*] ? (@.Label == \"WORK_OF_ART\")'}}\n",
"\n",
"await zep_retriever.aget_relevant_documents(\n",
" \"Who wrote Parable of the Sower?\", metadata=filter\n",
")"
"await zep_retriever.ainvoke(\"Who wrote Parable of the Sower?\", metadata=filter)"
]
},
{
@@ -491,7 +489,7 @@
" mmr_lambda=0.5,\n",
")\n",
"\n",
"await zep_retriever.aget_relevant_documents(\"Who wrote Parable of the Sower?\")"
"await zep_retriever.ainvoke(\"Who wrote Parable of the Sower?\")"
]
},
{

View File
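The Zep hunks above apply the same migration to the asynchronous API: `aget_relevant_documents(query)` becomes `ainvoke(query)`. A minimal sketch of that pattern, using a hypothetical `AsyncRetriever` stand-in rather than the real Zep retriever so it runs standalone:

```python
import asyncio


class Document:
    """Minimal document holding page text, mirroring the notebooks' usage."""

    def __init__(self, page_content: str):
        self.page_content = page_content


class AsyncRetriever:
    # Hypothetical stand-in for an async LangChain retriever such as
    # ZepRetriever; only the calling convention is the point here.

    def __init__(self, docs):
        self.docs = docs

    async def ainvoke(self, query: str):
        # Async Runnable-style entry point; the deprecated
        # aget_relevant_documents() maps onto this one-for-one.
        return [d for d in self.docs if query.lower() in d.page_content.lower()]


retriever = AsyncRetriever([Document("Parable of the Sower by Octavia Butler")])
docs = asyncio.run(retriever.ainvoke("parable"))
print(docs[0].page_content)
```

As with the sync variant, extra keyword arguments (such as the `metadata=filter` call in the Zep diff) pass through `ainvoke` the same way they did through the old method.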

@@ -187,7 +187,7 @@
" embedding=UpstageEmbeddings(),\n",
")\n",
"retriever = vectorstore.as_retriever()\n",
"docs = retriever.get_relevant_documents(\"Where did Harrison work?\")\n",
"docs = retriever.invoke(\"Where did Harrison work?\")\n",
"print(docs)"
]
}

View File

@@ -196,7 +196,7 @@
"retriever = KNNRetriever.from_texts(documents, embeddings)\n",
"\n",
"# retrieve the most relevant documents\n",
"result = retriever.get_relevant_documents(query)\n",
"result = retriever.invoke(query)\n",
"top1_retrieved_doc = result[0].page_content # return the top1 retrieved result\n",
"\n",
"print(top1_retrieved_doc)"

View File

@@ -510,7 +510,7 @@
}
],
"source": [
"retriever.get_relevant_documents(query)[0]"
"retriever.invoke(query)[0]"
]
},
{

View File

@@ -197,7 +197,7 @@
"outputs": [],
"source": [
"retriever = docsearch.as_retriever(search_type=\"mmr\")\n",
"matched_docs = retriever.get_relevant_documents(query)\n",
"matched_docs = retriever.invoke(query)\n",
"for i, d in enumerate(matched_docs):\n",
" print(f\"\\n## Document {i}\\n\")\n",
" print(d.page_content)"


@@ -487,7 +487,7 @@
"outputs": [],
"source": [
"# perform simple similarity search on retriever\n",
"retriever.get_relevant_documents(\"What are my options in breathable fabric?\")"
"retriever.invoke(\"What are my options in breathable fabric?\")"
]
},
{
@@ -503,7 +503,7 @@
"retriever.search_kwargs = {\"filter\": filters}\n",
"\n",
"# perform similarity search with filters on retriever\n",
"retriever.get_relevant_documents(\"What are my options in breathable fabric?\")"
"retriever.invoke(\"What are my options in breathable fabric?\")"
]
},
{
@@ -520,7 +520,7 @@
"\n",
"retriever.search_kwargs = {\"filter\": filters, \"numeric_filter\": numeric_filters}\n",
"\n",
"retriever.get_relevant_documents(\"What are my options in breathable fabric?\")"
"retriever.invoke(\"What are my options in breathable fabric?\")"
]
},
{
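The hunks above set `retriever.search_kwargs` and then call `invoke`. A hedged sketch of how a retriever might thread those kwargs into its search; the attribute name matches the diff, but the exact-match filtering logic here is invented for illustration:

```python
from dataclasses import dataclass, field


@dataclass
class Doc:
    page_content: str
    metadata: dict


@dataclass
class FilteringRetriever:
    # Toy retriever: invoke() applies self.search_kwargs["filter"] as
    # exact-match metadata constraints (illustrative, not the real API).
    docs: list
    search_kwargs: dict = field(default_factory=dict)

    def invoke(self, query):
        filters = self.search_kwargs.get("filter", {})
        hits = [d for d in self.docs if query.lower() in d.page_content.lower()]
        return [
            d for d in hits
            if all(d.metadata.get(k) == v for k, v in filters.items())
        ]


docs = [
    Doc("Breathable cotton shirt", {"fabric": "cotton"}),
    Doc("Breathable mesh jacket", {"fabric": "mesh"}),
]
retriever = FilteringRetriever(docs)
retriever.search_kwargs = {"filter": {"fabric": "mesh"}}
print([d.page_content for d in retriever.invoke("breathable")])
# -> ['Breathable mesh jacket']
```

Mutating `search_kwargs` between calls, as the notebook does, narrows subsequent `invoke` results without rebuilding the retriever.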


@@ -291,9 +291,9 @@
],
"source": [
"# This will only get documents for Ankush\n",
"vectorstore.as_retriever(\n",
" search_kwargs={\"expr\": 'namespace == \"ankush\"'}\n",
").get_relevant_documents(\"where did i work?\")"
"vectorstore.as_retriever(search_kwargs={\"expr\": 'namespace == \"ankush\"'}).invoke(\n",
" \"where did i work?\"\n",
")"
]
},
{
@@ -320,9 +320,9 @@
],
"source": [
"# This will only get documents for Harrison\n",
"vectorstore.as_retriever(\n",
" search_kwargs={\"expr\": 'namespace == \"harrison\"'}\n",
").get_relevant_documents(\"where did i work?\")"
"vectorstore.as_retriever(search_kwargs={\"expr\": 'namespace == \"harrison\"'}).invoke(\n",
" \"where did i work?\"\n",
")"
]
},
{
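In the hunks above, the filter expression is baked in at `as_retriever(search_kwargs={"expr": ...})` time and the result is invoked directly. A toy sketch of that chaining; the `expr` parsing handles only the `namespace == "..."` form used in the notebook and is not the store's real query language:

```python
import re
from dataclasses import dataclass


@dataclass
class Doc:
    page_content: str
    metadata: dict


class TinyStore:
    def __init__(self, docs):
        self.docs = docs

    def as_retriever(self, search_kwargs=None):
        expr = (search_kwargs or {}).get("expr", "")
        # Only supports the  namespace == "value"  form from the example.
        m = re.fullmatch(r'namespace == "(\w+)"', expr)
        namespace = m.group(1) if m else None
        store = self

        class _Retriever:
            def invoke(self, query):
                return [
                    d for d in store.docs
                    if namespace is None or d.metadata.get("namespace") == namespace
                ]

        return _Retriever()


store = TinyStore([
    Doc("i worked at kensho", {"namespace": "harrison"}),
    Doc("i worked at facebook", {"namespace": "ankush"}),
])
hits = store.as_retriever(search_kwargs={"expr": 'namespace == "ankush"'}).invoke(
    "where did i work?"
)
print([d.page_content for d in hits])  # -> ['i worked at facebook']
```

The same query against the `"harrison"`-scoped retriever would only ever see Harrison's documents, which is the per-user isolation the notebook demonstrates.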


@@ -8,7 +8,7 @@
"\n",
">[Neo4j](https://neo4j.com/) is an open-source graph database with integrated support for vector similarity search\n",
"\n",
"It supports:\n",
"It supports:\n\n",
"- approximate nearest neighbor search\n",
"- Euclidean similarity and cosine similarity\n",
"- Hybrid search combining vector and keyword searches\n",
@@ -694,7 +694,7 @@
],
"source": [
"retriever = store.as_retriever()\n",
"retriever.get_relevant_documents(query)[0]"
"retriever.invoke(query)[0]"
]
},
{
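The Neo4j blurb above lists hybrid search combining vector and keyword scores. One common way to combine them (an assumed illustration, not Neo4j's actual scoring) is a convex mix of the two normalized scores:

```python
def hybrid_score(vector_score, keyword_score, alpha=0.5):
    # Convex combination of two similarity scores in [0, 1]:
    # alpha=1.0 -> pure vector search, alpha=0.0 -> pure keyword search.
    return alpha * vector_score + (1 - alpha) * keyword_score


ranked = sorted(
    [("doc_a", hybrid_score(0.9, 0.4)), ("doc_b", hybrid_score(0.4, 0.8))],
    key=lambda pair: pair[1],
    reverse=True,
)
print(ranked[0][0])  # -> doc_a
```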


@@ -256,7 +256,7 @@
],
"source": [
"retriever = docsearch.as_retriever(search_type=\"mmr\")\n",
"matched_docs = retriever.get_relevant_documents(query)\n",
"matched_docs = retriever.invoke(query)\n",
"for i, d in enumerate(matched_docs):\n",
" print(f\"\\n## Document {i}\\n\")\n",
" print(d.page_content)"

Some files were not shown because too many files have changed in this diff.