From 19a1c9183d9050388515b6bc7e79937f965eea69 Mon Sep 17 00:00:00 2001
From: Tomaz Bratanic
Date: Mon, 12 Feb 2024 06:15:46 +0100
Subject: [PATCH] Improve graph cypher qa prompt (#17380)

Unlike vector results, the LLM has to completely trust the context of a
graph database result, even if it doesn't provide the whole context. We
tried with instructions, but it seems that adding a single example is the
way to go to solve this issue.
---
 libs/langchain/langchain/chains/graph_qa/prompts.py | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/libs/langchain/langchain/chains/graph_qa/prompts.py b/libs/langchain/langchain/chains/graph_qa/prompts.py
index 5d4652453b6..d83aef9a622 100644
--- a/libs/langchain/langchain/chains/graph_qa/prompts.py
+++ b/libs/langchain/langchain/chains/graph_qa/prompts.py
@@ -100,6 +100,13 @@ CYPHER_QA_TEMPLATE = """You are an assistant that helps to form nice and human u
 The information part contains the provided information that you must use to construct an answer.
 The provided information is authoritative, you must never doubt it or try to use your internal knowledge to correct it.
 Make the answer sound as a response to the question. Do not mention that you based the result on the given information.
+Here is an example:
+
+Question: Which managers own Neo4j stocks?
+Context:[manager:CTL LLC, manager:JANE STREET GROUP LLC]
+Helpful Answer: CTL LLC, JANE STREET GROUP LLC owns Neo4j stocks.
+
+Follow this example when generating answers.
 If the provided information is empty, say that you don't know the answer.
 Information:
 {context}