Mirror of https://github.com/hwchase17/langchain.git (synced 2025-05-29 02:58:06 +00:00)
LLMs struggle with Graph RAG because it differs from vector RAG: instead of providing the whole context, you provide only the answer, and the LLM has to take it on faith. In practice that often doesn't work well. However, if you wrap the context as a function response, the accuracy is much better. By the way, `Union[LLMChain, Runnable]` is linting fun, which is why there are so many ignores.
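For illustration, a minimal sketch of the idea (not the actual code in this change): the same graph query results are passed once as plain prompt text and once as a tool/function response using `langchain_core` messages. The tool name `graph_query`, the sample results, and the model choice are assumptions.

```python
# Sketch: wrapping graph query results as a tool/function response instead of
# pasting them into the prompt as plain text. The sample data, tool name, and
# model are illustrative placeholders, not part of this change.
from langchain_core.messages import AIMessage, HumanMessage, SystemMessage, ToolMessage
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

question = "Which movies did Tom Hanks act in?"
graph_results = "[{'movie': 'Forrest Gump'}, {'movie': 'Cast Away'}]"  # e.g. from a Cypher query

# Variant 1: plain-text context -- the model often distrusts or ignores it.
plain = llm.invoke([
    SystemMessage(content="Answer using the provided graph query results."),
    HumanMessage(content=f"Results: {graph_results}\n\nQuestion: {question}"),
])

# Variant 2: the same results presented as a tool/function response, which the
# model treats as data it requested itself and answers from more reliably.
wrapped = llm.invoke([
    HumanMessage(content=question),
    AIMessage(content="", tool_calls=[
        {"name": "graph_query", "args": {"query": question}, "id": "call_1"},
    ]),
    ToolMessage(content=graph_results, tool_call_id="call_1"),
])
```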
Files:

- __init__.py
- test_api.py
- test_graph_qa.py
- test_llm.py
- test_pebblo_retrieval.py