Compare commits


94 Commits

Author SHA1 Message Date
Bagatur
9ebd7ebed8 core[patch]: Release 0.3.16 (#28045) 2024-11-12 14:57:15 +00:00
Changyong Um
9484cc0962 community[docs]: modify parameter for the LoRA adapter on the vllm page (#27930)
**Description:** 
This PR modifies the documentation for configuring vLLM with the LoRA
adapter. The updates provide clear instructions on how to set up the
LoRA adapter when using vLLM.

- before
```python
VLLM(..., enable_lora=True)
```
- after
```python
VLLM(..., 
    vllm_kwargs={
        "enable_lora": True
    }
)
```
This change clarifies that users should use `vllm_kwargs` to enable
the LoRA adapter.
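
For reference, a minimal end-to-end sketch of the updated pattern (the model name is illustrative; `VLLM` and `vllm_kwargs` are from `langchain_community`):

```python
from langchain_community.llms import VLLM

# LoRA support is enabled through vllm_kwargs, which are forwarded to the vLLM engine
llm = VLLM(
    model="meta-llama/Llama-2-7b-hf",  # illustrative model name
    vllm_kwargs={"enable_lora": True},
)
```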

Co-authored-by: Um Changyong <changyong.um@sfa.co.kr>
2024-11-11 15:41:56 -05:00
Zapiron
0b85f9035b docs: Makes the phrasing more smooth and reasoning more clear (#28020)
Updated the phrasing and reasoning on the "abstraction not receiving
much development" part of the documentation

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-11-11 17:17:29 +00:00
Zapiron
f695b96484 docs:Fixed missing hyperlink and changed AI to LLMs for clarity (#28006)
Changed "AI" to "LLM" in a paragraph
Fixed missing hyperlink for the structured output point
2024-11-11 12:14:29 -05:00
Choy Fuguan
c0f3777657 docs: removed bolding from header (#28001)
removed extra ** after heading two
2024-11-11 12:13:02 -05:00
Salman Faroz
44df79cf52 Correcting AzureOpenAI initialization (#28014) 2024-11-11 12:10:59 -05:00
Hammad Randhawa
57fc62323a docs : Update sql_qa.ipynb (#28026)
Text Documentation Bug:

Changed DSL query to SQL query.
2024-11-11 12:04:09 -05:00
ccurme
922b6b0e46 docs: update some cassettes (#28010) 2024-11-09 21:04:18 +00:00
ccurme
8e91c7ceec docs: add cross-links (#28000)
Mainly to improve visibility of integration pages.
2024-11-09 08:57:58 -05:00
Bagatur
33dbfba08b openai[patch]: default to invoke on o1 stream() (#27983) 2024-11-08 19:12:59 -08:00
Bagatur
503f2487a5 docs: intro nit (#27998) 2024-11-08 11:51:17 -08:00
ccurme
ff2152b115 docs: update tutorials index and add get started guides (#27996) 2024-11-08 14:47:32 -05:00
Eric Pinzur
c421997caa community[patch]: Added type hinting to OpenSearch clients (#27946)
Description:
* When working with OpenSearchVectorSearch to make
OpenSearchGraphVectorStore (coming soon), I noticed that there wasn't
type hinting for the underlying OpenSearch clients. This fixes that
issue.
* Confirmed tests are still passing with code changes.

Note that there is some additional code duplication now, but I think
this approach is cleaner overall.
2024-11-08 11:04:57 -08:00
Zapiron
4c2392e55c docs: fix link in custom tools guide (#27975)
Fixed broken link in tools documentation for `BaseTool`
2024-11-08 09:40:15 -05:00
Zapiron
85925e3164 docs: fix link in tool-calling guide (#27976)
Fix broken BaseTool link in documentation
2024-11-08 09:39:27 -05:00
Zapiron
138f360b25 docs: fix typo in PDF loader guide (#27977)
Fixed duplicate "py" in hyperlink to `pypdf` docs
2024-11-08 09:38:32 -05:00
Saad Makrod
b509747c7f Community: Google Books API Tool (#27307)
## Description

As proposed in our earlier discussion #26977, we have introduced a Google
Books API Tool that leverages the Google Books API found at
[https://developers.google.com/books/docs/v1/using](https://developers.google.com/books/docs/v1/using)
to generate book recommendations.

### Sample Usage

```python
from langchain_community.tools import GoogleBooksQueryRun
from langchain_community.utilities import GoogleBooksAPIWrapper

api_wrapper = GoogleBooksAPIWrapper()
tool = GoogleBooksQueryRun(api_wrapper=api_wrapper)

tool.run('ai')
```

### Sample Output

```txt
Here are 5 suggestions based off your search for books related to ai:

1. "AI's Take on the Stigma Against AI-Generated Content" by Sandy Y. Greenleaf: In a world where artificial intelligence (AI) is rapidly advancing and transforming various industries, a new form of content creation has emerged: AI-generated content. However, despite its potential to revolutionize the way we produce and consume information, AI-generated content often faces a significant stigma. "AI's Take on the Stigma Against AI-Generated Content" is a groundbreaking book that delves into the heart of this issue, exploring the reasons behind the stigma and offering a fresh, unbiased perspective on the topic. Written from the unique viewpoint of an AI, this book provides readers with a comprehensive understanding of the challenges and opportunities surrounding AI-generated content. Through engaging narratives, thought-provoking insights, and real-world examples, this book challenges readers to reconsider their preconceptions about AI-generated content. It explores the potential benefits of embracing this technology, such as increased efficiency, creativity, and accessibility, while also addressing the concerns and drawbacks that contribute to the stigma. As you journey through the pages of this book, you'll gain a deeper understanding of the complex relationship between humans and AI in the realm of content creation. You'll discover how AI can be used as a tool to enhance human creativity, rather than replace it, and how collaboration between humans and machines can lead to unprecedented levels of innovation. Whether you're a content creator, marketer, business owner, or simply someone curious about the future of AI and its impact on our society, "AI's Take on the Stigma Against AI-Generated Content" is an essential read. With its engaging writing style, well-researched insights, and practical strategies for navigating this new landscape, this book will leave you equipped with the knowledge and tools needed to embrace the AI revolution and harness its potential for success. Prepare to have your assumptions challenged, your mind expanded, and your perspective on AI-generated content forever changed. Get ready to embark on a captivating journey that will redefine the way you think about the future of content creation.
Read more at https://play.google.com/store/books/details?id=4iH-EAAAQBAJ&source=gbs_api

2. "AI Strategies For Web Development" by Anderson Soares Furtado Oliveira: From fundamental to advanced strategies, unlock useful insights for creating innovative, user-centric websites while navigating the evolving landscape of AI ethics and security Key Features Explore AI's role in web development, from shaping projects to architecting solutions Master advanced AI strategies to build cutting-edge applications Anticipate future trends by exploring next-gen development environments, emerging interfaces, and security considerations in AI web development Purchase of the print or Kindle book includes a free PDF eBook Book Description If you're a web developer looking to leverage the power of AI in your projects, then this book is for you. Written by an AI and ML expert with more than 15 years of experience, AI Strategies for Web Development takes you on a transformative journey through the dynamic intersection of AI and web development, offering a hands-on learning experience.The first part of the book focuses on uncovering the profound impact of AI on web projects, exploring fundamental concepts, and navigating popular frameworks and tools. As you progress, you'll learn how to build smart AI applications with design intelligence, personalized user journeys, and coding assistants. Later, you'll explore how to future-proof your web development projects using advanced AI strategies and understand AI's impact on jobs. Toward the end, you'll immerse yourself in AI-augmented development, crafting intelligent web applications and navigating the ethical landscape.Packed with insights into next-gen development environments, AI-augmented practices, emerging realities, interfaces, and security governance, this web development book acts as your roadmap to staying ahead in the AI and web development domain. What you will learn Build AI-powered web projects with optimized models Personalize UX dynamically with AI, NLP, chatbots, and recommendations Explore AI coding assistants and other tools for advanced web development Craft data-driven, personalized experiences using pattern recognition Architect effective AI solutions while exploring the future of web development Build secure and ethical AI applications following TRiSM best practices Explore cutting-edge AI and web development trends Who this book is for This book is for web developers with experience in programming languages and an interest in keeping up with the latest trends in AI-powered web development. Full-stack, front-end, and back-end developers, UI/UX designers, software engineers, and web development enthusiasts will also find valuable information and practical guidelines for developing smarter websites with AI. To get the most out of this book, it is recommended that you have basic knowledge of programming languages such as HTML, CSS, and JavaScript, as well as a familiarity with machine learning concepts.
Read more at https://play.google.com/store/books/details?id=FzYZEQAAQBAJ&source=gbs_api

3. "Artificial Intelligence for Students" by Vibha Pandey: A multifaceted approach to develop an understanding of AI and its potential applications KEY FEATURES ● AI-informed focuses on AI foundation, applications, and methodologies. ● AI-inquired focuses on computational thinking and bias awareness. ● AI-innovate focuses on creative and critical thinking and the Capstone project. DESCRIPTION AI is a discipline in Computer Science that focuses on developing intelligent machines, machines that can learn and then teach themselves. If you are interested in AI, this book can definitely help you prepare for future careers in AI and related fields. The book is aligned with the CBSE course, which focuses on developing employability and vocational competencies of students in skill subjects. The book is an introduction to the basics of AI. It is divided into three parts – AI-informed, AI-inquired and AI-innovate. It will help you understand AI's implications on society and the world. You will also develop a deeper understanding of how it works and how it can be used to solve complex real-world problems. Additionally, the book will also focus on important skills such as problem scoping, goal setting, data analysis, and visualization, which are essential for success in AI projects. Lastly, you will learn how decision trees, neural networks, and other AI concepts are commonly used in real-world applications. By the end of the book, you will develop the skills and competencies required to pursue a career in AI. WHAT YOU WILL LEARN ● Get familiar with the basics of AI and Machine Learning. ● Understand how and where AI can be applied. ● Explore different applications of mathematical methods in AI. ● Get tips for improving your skills in Data Storytelling. ● Understand what is AI bias and how it can affect human rights. WHO THIS BOOK IS FOR This book is for CBSE class XI and XII students who want to learn and explore more about AI. Basic knowledge of Statistical concepts, Algebra, and Plotting of equations is a must. TABLE OF CONTENTS 1. Introduction: AI for Everyone 2. AI Applications and Methodologies 3. Mathematics in Artificial Intelligence 4. AI Values (Ethical Decision-Making) 5. Introduction to Storytelling 6. Critical and Creative Thinking 7. Data Analysis 8. Regression 9. Classification and Clustering 10. AI Values (Bias Awareness) 11. Capstone Project 12. Model Lifecycle (Knowledge) 13. Storytelling Through Data 14. AI Applications in Use in Real-World
Read more at https://play.google.com/store/books/details?id=ptq1EAAAQBAJ&source=gbs_api

4. "The AI Book" by Ivana Bartoletti, Anne Leslie and Shân M. Millie: Written by prominent thought leaders in the global fintech space, The AI Book aggregates diverse expertise into a single, informative volume and explains what artifical intelligence really means and how it can be used across financial services today. Key industry developments are explained in detail, and critical insights from cutting-edge practitioners offer first-hand information and lessons learned. Coverage includes: · Understanding the AI Portfolio: from machine learning to chatbots, to natural language processing (NLP); a deep dive into the Machine Intelligence Landscape; essentials on core technologies, rethinking enterprise, rethinking industries, rethinking humans; quantum computing and next-generation AI · AI experimentation and embedded usage, and the change in business model, value proposition, organisation, customer and co-worker experiences in today’s Financial Services Industry · The future state of financial services and capital markets – what’s next for the real-world implementation of AITech? · The innovating customer – users are not waiting for the financial services industry to work out how AI can re-shape their sector, profitability and competitiveness · Boardroom issues created and magnified by AI trends, including conduct, regulation & oversight in an algo-driven world, cybersecurity, diversity & inclusion, data privacy, the ‘unbundled corporation’ & the future of work, social responsibility, sustainability, and the new leadership imperatives · Ethical considerations of deploying Al solutions and why explainable Al is so important
Read more at http://books.google.ca/books?id=oE3YDwAAQBAJ&dq=ai&hl=&source=gbs_api

5. "Artificial Intelligence in Society" by OECD: The artificial intelligence (AI) landscape has evolved significantly from 1950 when Alan Turing first posed the question of whether machines can think. Today, AI is transforming societies and economies. It promises to generate productivity gains, improve well-being and help address global challenges, such as climate change, resource scarcity and health crises.
Read more at https://play.google.com/store/books/details?id=eRmdDwAAQBAJ&source=gbs_api
```

## Issue 

This closes #27276 

## Dependencies

No additional dependencies were added

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-07 15:29:35 -08:00
Massimiliano Pronesti
be3b7f9bae cookbook: add Anthropic's contextual retrieval (#27898)
Hi there, this PR adds a notebook implementing Anthropic's proposed
[Contextual
retrieval](https://www.anthropic.com/news/contextual-retrieval) to
langchain's cookbook.
2024-11-07 14:48:01 -08:00
Erick Friis
733e43eed0 docs: new stack diagram (#27972) 2024-11-07 22:46:56 +00:00
Erick Friis
a073c4c498 templates,docs: leave templates in v0.2 (#27952)
all template installs will now have to declare `--branch v0.2` to make
clear they aren't compatible with langchain 0.3 (most have a pydantic v1
setup). e.g.

```
langchain-cli app add pirate-speak --branch v0.2
```
2024-11-07 22:23:48 +00:00
Erick Friis
8807e6986c docs: ignore case production fork master (#27971) 2024-11-07 13:55:21 -08:00
Shawn Lee
6f368e9eab community: handle chatdeepinfra jsondecode error (#27603)
Fixes #27602 

Added error handling to return an empty dict if `args` is an empty string or
None.
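
A minimal sketch of the guard described above (a hypothetical helper; the actual patch lives inside the ChatDeepInfra message-parsing code):

```python
import json
from typing import Any, Dict, Optional

def parse_tool_args(raw_args: Optional[str]) -> Dict[str, Any]:
    # An empty string or None previously reached json.loads and raised
    # JSONDecodeError; returning an empty dict avoids the crash.
    if not raw_args:
        return {}
    return json.loads(raw_args)
```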

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-07 13:47:19 -08:00
CLOVA Studio 개발
0588bab33e community: fix ClovaXEmbeddings document API link address (#27957)
- **Description:** A 404 error occurs because the `API reference` link path in
`langchain/docs/docs/integrations/text_embedding/naver.ipynb` is incorrect.
- **Issue:** fix the `API reference` link so it points to the correct path.

@vbarda @efriis
2024-11-07 13:46:01 -08:00
Akshata
05fd6a16a9 Add ChatModels wrapper for Cloudflare Workers AI (#27645)
**Description:** Add a chat model wrapper for Cloudflare Workers AI, enabling
LangGraph integration via ChatModel for tool usage and agentic usage. Tests
and an example notebook under `docs/docs/integrations` are included.


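A hedged usage sketch (class and parameter names assumed from the PR description):

```python
from langchain_community.chat_models.cloudflare_workersai import (
    ChatCloudflareWorkersAI,  # assumed import path
)

chat = ChatCloudflareWorkersAI(
    account_id="my-account-id",  # illustrative credentials
    api_token="my-api-token",
    model="@cf/meta/llama-3.1-8b-instruct",
)
print(chat.invoke("Hello!").content)
```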

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-11-07 15:34:24 -05:00
Erick Friis
8a5b9bf2ad box: migrate to repo (#27969) 2024-11-07 10:19:22 -08:00
ccurme
1ad49957f5 docs[patch]: update cassettes for sql/csv notebook (#27966) 2024-11-07 11:48:45 -05:00
ccurme
a747dbd24b anthropic[patch]: remove retired model from tests (#27965)
`claude-instant` was [retired
yesterday](https://docs.anthropic.com/en/docs/resources/model-deprecations).
2024-11-07 16:16:29 +00:00
Aksel Joonas Reedi
2cb39270ec community: bytes as a source to AzureAIDocumentIntelligenceLoader (#26618)
- **Description:** This PR adds functionality to pass in in-memory bytes
as a source to `AzureAIDocumentIntelligenceLoader`.
- **Issue:** I needed the functionality, so I added it.
- **Dependencies:** NA
- **Twitter handle:** @akseljoonas if this is a big enough change :)
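
A hedged sketch of the new capability (the exact parameter name for the bytes source is assumed):

```python
from langchain_community.document_loaders import AzureAIDocumentIntelligenceLoader

with open("invoice.pdf", "rb") as f:
    pdf_bytes = f.read()

loader = AzureAIDocumentIntelligenceLoader(
    api_endpoint="<endpoint>",
    api_key="<key>",
    bytes_source=pdf_bytes,  # assumed kwarg: in-memory bytes instead of file_path/url_path
)
docs = loader.load()
```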

---------

Co-authored-by: Aksel Joonas Reedi <aksel@klippa.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-07 03:40:21 +00:00
Martin Triska
7a9149f5dd community: ZeroxPDFLoader (#27800)
# OCR-based PDF loader

This implements the [Zerox](https://github.com/getomni-ai/zerox) PDF
document loader.
Zerox uses a simple but very powerful (though slower and more costly)
approach to parsing PDF documents: it converts the PDF to a series of
images and passes them to a vision model, requesting the contents in
markdown.

It is especially suitable for complex PDFs that are not parsed well by
other alternatives.

## Example use:
```python
from langchain_community.document_loaders.pdf import ZeroxPDFLoader

os.environ["OPENAI_API_KEY"] = "" ## your-api-key

model = "gpt-4o-mini" ## openai model
pdf_url = "https://assets.ctfassets.net/f1df9zr7wr1a/soP1fjvG1Wu66HJhu3FBS/034d6ca48edb119ae77dec5ce01a8612/OpenAI_Sacra_Teardown.pdf"

loader = ZeroxPDFLoader(file_path=pdf_url, model=model)
docs = loader.load()
```

The Zerox library supports a wide range of providers/models. See the Zerox
documentation for details.

- **Dependencies:** `zerox`
- **Twitter handle:** @martintriska1


---------

Co-authored-by: Erick Friis <erickfriis@gmail.com>
2024-11-07 03:14:57 +00:00
Dmitriy Prokopchuk
53b0a99f37 community: Memcached LLM Cache Integration (#27323)
## Description
This PR adds support for Memcached as a usable LLM cache by adding
the ```MemcachedCache``` implementation relying on the
[pymemcache](https://github.com/pinterest/pymemcache) client.

Unit test-wise, the new integration is generally covered under existing
import testing. All new functionality depends on pymemcache if
instantiated and used, so to comply with the other cache implementations
the PR also adds optional integration tests for ```MemcachedCache```.

Since this is a new integration, documentation is added for Memcached as
an integration and as an LLM Cache.

## Issue
This PR closes #27275 which was originally raised as a discussion in
#27035

## Dependencies
There are no new required dependencies for langchain, but
[pymemcache](https://github.com/pinterest/pymemcache) is required to
instantiate the new ```MemcachedCache```.

## Example Usage
```python3
from langchain.globals import set_llm_cache
from langchain_openai import OpenAI

from langchain_community.cache import MemcachedCache
from pymemcache.client.base import Client

llm = OpenAI(model="gpt-3.5-turbo-instruct", n=2, best_of=2)
set_llm_cache(MemcachedCache(Client('localhost')))

# The first time, it is not yet in cache, so it should take longer
llm.invoke("Which city is the most crowded city in the USA?")

# The second time it is, so it goes faster
llm.invoke("Which city is the most crowded city in the USA?")
```

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-07 03:07:59 +00:00
Siddharth Murching
cfff2a057e community: Update UC toolkit documentation to use LangGraph APIs (#26778)
- **Description:** Update UC toolkit documentation to show an example of
using recommended LangGraph agent APIs before the existing LangChain
AgentExecutor example. Tested by manually running the updated example
notebook
- **Dependencies:** No new dependencies
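
For context, the recommended LangGraph agent API looks roughly like this (a sketch; assumes `langgraph` and `langchain_openai` are installed, with `tools` coming from the UC toolkit setup):

```python
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

model = ChatOpenAI(model="gpt-4o-mini")
tools = []  # e.g. populated from the UC toolkit in the notebook
agent = create_react_agent(model, tools)
result = agent.invoke({"messages": [("user", "What is 36939 * 8922.4?")]})
```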

---------

Signed-off-by: Sid Murching <sid.murching@databricks.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-07 02:47:41 +00:00
ZhangShenao
c2072d909a Improvement[Partner] Improve qdrant vector store (#27251)
- Add static method decorators
- Add args for API docs
- Fix word spelling

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-07 02:42:41 +00:00
Baptiste Pasquier
81f7daa458 community: add InfinityRerank (#27043)
**Description:** 

- Add a reranker for the Infinity server.

**Dependencies:** 

This wrapper uses
[infinity_client](https://github.com/michaelfeil/infinity/tree/main/libs/client_infinity/infinity_client)
to connect to an Infinity server.

**Tests and docs**

- integration test: test_infinity_rerank.py
- example notebook: infinity_rerank.ipynb
[here](https://github.com/baptiste-pasquier/langchain/blob/feat/infinity-rerank/docs/docs/integrations/document_transformers/infinity_rerank.ipynb)
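
A hedged usage sketch based on the linked notebook (the import path is assumed; requires a running Infinity server):

```python
from langchain_core.documents import Document
from langchain_community.document_compressors.infinity_rerank import InfinityRerank

reranker = InfinityRerank(top_n=1)  # assumes a local Infinity server; constructor args may differ
docs = [
    Document(page_content="The capital of France is Paris."),
    Document(page_content="Reranking reorders documents by relevance."),
]
ranked = reranker.compress_documents(documents=docs, query="What does a reranker do?")
```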

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-06 17:26:30 -08:00
Erick Friis
2494deb2a4 infra: remove google creds from release and integration test workflows (#27950) 2024-11-07 00:31:10 +00:00
Martin Triska
90189f5639 community: Allow other than default parsers in SharePointLoader and OneDriveLoader (#27716)
## What this PR does?

### Currently `O365BaseLoader` (and consequently both derived loaders) is limited to `pdf`, `doc`, and `docx` files.
- **Solution: we introduce a _handlers_ attribute that allows custom
handlers to be passed in, in _dict_ form:**

**Example:**
```python
from langchain_community.document_loaders.onedrive import OneDriveLoader
from langchain_community.document_loaders.parsers.documentloader_adapter import DocumentLoaderAsParser
# PR for DocumentLoaderAsParser here: https://github.com/langchain-ai/langchain/pull/27749
from langchain_community.document_loaders.excel import UnstructuredExcelLoader
# parser imports added so the example is self-contained
from langchain_community.document_loaders.parsers.msword import MsWordParser
from langchain_community.document_loaders.parsers.pdf import PDFMinerParser
from langchain_community.document_loaders.parsers.txt import TextParser
from langchain_community.document_loaders.sharepoint import SharePointLoader

xlsx_parser = DocumentLoaderAsParser(UnstructuredExcelLoader, mode="paged")

# create dictionary mapping file types to handlers (parsers)
handlers = {
    "doc": MsWordParser(),
    "pdf": PDFMinerParser(),
    "txt": TextParser(),
    "xlsx": xlsx_parser,
}
loader = SharePointLoader(
    document_library_id="...",
    handlers=handlers,  # pass handlers to SharePointLoader
)
documents = loader.load()

# works the same in OneDriveLoader
loader = OneDriveLoader(
    document_library_id="...",
    handlers=handlers,
)
```
This dictionary is then passed to `MimeTypeBasedParser` same as in the
[current
implementation](5a2cfb49e0/libs/community/langchain_community/document_loaders/parsers/registry.py (L13)).


### Currently `SharePointLoader` and `OneDriveLoader` are separate loaders that both inherit from `O365BaseLoader`.
However, both implement the same functionality. The only
differences are:
- `SharePointLoader` requires the argument `document_library_id`, whereas
`OneDriveLoader` requires `drive_id`. These are just different names for
the same thing.
  - `SharePointLoader` implements significantly more features.
- **Solution: `OneDriveLoader` is replaced with an empty shell that just
renames `drive_id` to `document_library_id` and inherits from
`SharePointLoader`.**

**Dependencies:** None
**Twitter handle:** @martintriska1

2024-11-06 17:44:34 -05:00
takahashi
482c168b3e langchain_core: add file_type option to make file type default as png (#27855)
**Description:**
`langchain_core.runnables.graph_mermaid.draw_mermaid_png` calls this
function, but the Mermaid API returns JPEG by default. To be consistent,
add the option `file_type` with the default `png` type.

With this small change, I didn't add tests and docs. One long sentence
was also divided into two.
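
The underlying idea, sketched against the public mermaid.ink API (the URL shape is assumed; this is not the library's exact code):

```python
import base64
import requests

def render_mermaid(mermaid_code: str, file_type: str = "png") -> bytes:
    # mermaid.ink serves JPEG from /img/ unless a type is requested explicitly;
    # defaulting to png keeps draw_mermaid_png's name and output consistent.
    encoded = base64.b64encode(mermaid_code.encode("utf8")).decode("ascii")
    response = requests.get(f"https://mermaid.ink/img/{encoded}?type={file_type}")
    response.raise_for_status()
    return response.content
```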
2024-11-06 22:37:07 +00:00
Roman Solomatin
0f85dea8c8 langchain-huggingface: use separate kwargs for queries and docs (#27857)
Currently `encode_kwargs` is used both for documents and for queries, and this
leads to wrong embeddings. E.g.:
```python
    model_kwargs = {"device": "cuda", "trust_remote_code": True}
    encode_kwargs = {"normalize_embeddings": False, "prompt_name": "s2p_query"}

    model = HuggingFaceEmbeddings(
        model_name="dunzhang/stella_en_400M_v5",
        model_kwargs=model_kwargs,
        encode_kwargs=encode_kwargs,
    )

    query_embedding = np.array(
        model.embed_query("What are some ways to reduce stress?",)
    )
    document_embedding = np.array(
        model.embed_documents(
            [
                "There are many effective ways to reduce stress. Some common techniques include deep breathing, meditation, and physical activity. Engaging in hobbies, spending time in nature, and connecting with loved ones can also help alleviate stress. Additionally, setting boundaries, practicing self-care, and learning to say no can prevent stress from building up.",
                "Green tea has been consumed for centuries and is known for its potential health benefits. It contains antioxidants that may help protect the body against damage caused by free radicals. Regular consumption of green tea has been associated with improved heart health, enhanced cognitive function, and a reduced risk of certain types of cancer. The polyphenols in green tea may also have anti-inflammatory and weight loss properties.",
            ]
        )
    )
    print(model._client.similarity(query_embedding, document_embedding)) # output: tensor([[0.8421, 0.3317]], dtype=torch.float64)
```
But per the [model
card](https://huggingface.co/dunzhang/stella_en_400M_v5#sentence-transformers),
the expected usage is like this:
```python
    model_kwargs = {"device": "cuda", "trust_remote_code": True}
    encode_kwargs = {"normalize_embeddings": False}
    query_encode_kwargs = {"normalize_embeddings": False, "prompt_name": "s2p_query"}

    model = HuggingFaceEmbeddings(
        model_name="dunzhang/stella_en_400M_v5",
        model_kwargs=model_kwargs,
        encode_kwargs=encode_kwargs,
        query_encode_kwargs=query_encode_kwargs,
    )

    query_embedding = np.array(
        model.embed_query("What are some ways to reduce stress?", )
    )
    document_embedding = np.array(
        model.embed_documents(
            [
                "There are many effective ways to reduce stress. Some common techniques include deep breathing, meditation, and physical activity. Engaging in hobbies, spending time in nature, and connecting with loved ones can also help alleviate stress. Additionally, setting boundaries, practicing self-care, and learning to say no can prevent stress from building up.",
                "Green tea has been consumed for centuries and is known for its potential health benefits. It contains antioxidants that may help protect the body against damage caused by free radicals. Regular consumption of green tea has been associated with improved heart health, enhanced cognitive function, and a reduced risk of certain types of cancer. The polyphenols in green tea may also have anti-inflammatory and weight loss properties.",
            ]
        )
    )
    print(model._client.similarity(query_embedding, document_embedding)) # tensor([[0.8398, 0.2990]], dtype=torch.float64)
```
2024-11-06 17:35:39 -05:00
Bagatur
60123bef67 docs: fix trim_messages docstring (#27948) 2024-11-06 22:25:13 +00:00
murrlincoln
14f1827953 docs: Adding notebook for cdp agentkit toolkit (#27910)
- **Description:** Adding in the first pass of documentation for the CDP
Agentkit Toolkit
    - **Issue:** N/a
    - **Dependencies:** cdp-langchain
    - **Twitter handle:** @CoinbaseDev

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: John Peterson <john.peterson@coinbase.com>
2024-11-06 13:28:27 -08:00
Eric Pinzur
ea0ad917b0 community: added Document.id support to opensearch vectorstore (#27945)
Description:
* Added support for Document.id in the OpenSearch vector store
* Added test cases to match
2024-11-06 15:04:09 -05:00
Hammad Randhawa
75aa82fedc docs: Completed sentence under the heading "Instantiating a Browser … (#27944)
…Toolkit" in "playwright.ipynb" integration.

- Completed the incomplete sentence in the LangChain Playwright
documentation.

- Enhanced documentation clarity to guide users on best practices for
instantiating browser instances with LangChain Playwright.

Example before:
> "It's always recommended to instantiate using the from_browser method
so that the

Example after:
> "It's always recommended to instantiate using the `from_browser`
method so that the browser context is properly initialized and managed,
ensuring seamless interaction and resource optimization."

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-06 19:55:00 +00:00
Bagatur
67ce05a0a7 core[patch]: make oai tool description optional (#27756) 2024-11-06 18:06:47 +00:00
Bagatur
b2da3115ed docs: document init_chat_model standard params (#27812) 2024-11-06 09:50:07 -08:00
Dobiichi-Origami
395674d503 community: re-arrange function call message parse logic for Qianfan (#27935)
The [PR](https://github.com/langchain-ai/langchain/pull/26208) from two
months ago has a potential bug that causes `tool_call` to malfunction for
`QianfanChatEndpoint`; this has been waiting for a fix.
2024-11-06 09:58:16 -05:00
Erick Friis
41b7a5169d infra: starter codeowners file (#27929) 2024-11-05 16:43:11 -08:00
ccurme
66966a6e72 openai[patch]: release 0.2.6 (#27924)
Some additions in support of [predicted
outputs](https://platform.openai.com/docs/guides/latency-optimization#use-predicted-outputs)
feature:
- Bump openai sdk version
- Add integration test
- Add example to integration docs

The `prediction` kwarg is already plumbed through model invocation.
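
A hedged example of the feature (the `prediction` payload shape follows OpenAI's predicted-outputs docs; the model name is illustrative):

```python
from langchain_openai import ChatOpenAI

code = """
def hello():
    print("hello world")
"""

llm = ChatOpenAI(model="gpt-4o-mini")
# The expected (mostly unchanged) output is passed as a prediction to cut latency.
response = llm.invoke(
    f"Rename the function to greet. Return only code.\n{code}",
    prediction={"type": "content", "content": code},
)
```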
2024-11-05 23:02:24 +00:00
Erick Friis
a8c473e114 standard-tests: ci pipeline (#27923) 2024-11-05 20:55:38 +00:00
Erick Friis
c3b75560dc infra: release note grep order of operations (#27922) 2024-11-05 12:44:36 -08:00
Erick Friis
b3c81356ca infra: release note compute 2 (#27921) 2024-11-05 12:04:41 -08:00
Erick Friis
bff2a8b772 standard-tests: add tools standard tests (#27899) 2024-11-05 11:44:34 -08:00
SHJUN
f6b2f82099 community: chroma error patch(attribute changed on chroma) (#27827)
The attribute `max_batch_size` was renamed: it is now the
`get_max_batch_size` method. I want to use `create_batches`, which is
defined right below it.

Please check this PR link.
reference: https://github.com/chroma-core/chroma/pull/2305
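
A sketch of the compatibility shim this implies (names follow the PR description; the community code path differs):

```python
def get_chroma_max_batch_size(client) -> int:
    # chromadb newer releases expose get_max_batch_size(); older versions
    # exposed a max_batch_size attribute instead.
    if hasattr(client, "get_max_batch_size"):
        return client.get_max_batch_size()
    return client.max_batch_size
```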

---------

Signed-off-by: Prithvi Kannan <prithvi.kannan@databricks.com>
Co-authored-by: Prithvi Kannan <46332835+prithvikannan@users.noreply.github.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Jun Yamog <jkyamog@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: ono-hiroki <86904208+ono-hiroki@users.noreply.github.com>
Co-authored-by: Dobiichi-Origami <56953648+Dobiichi-Origami@users.noreply.github.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
Co-authored-by: Duy Huynh <vndee.huynh@gmail.com>
Co-authored-by: Rashmi Pawar <168514198+raspawar@users.noreply.github.com>
Co-authored-by: sifatj <26035630+sifatj@users.noreply.github.com>
Co-authored-by: Eric Pinzur <2641606+epinzur@users.noreply.github.com>
Co-authored-by: Daniel Vu Dao <danielvdao@users.noreply.github.com>
Co-authored-by: Ofer Mendelevitch <ofermend@gmail.com>
Co-authored-by: Stéphane Philippart <wildagsx@gmail.com>
2024-11-05 19:43:11 +00:00
Tomaz Bratanic
a3bbbe6a86 update llm graph transformer documentation (#27905) 2024-11-05 11:54:26 -05:00
Erick Friis
31f4fb790d standard-tests: release 0.3.0 (#27900) 2024-11-04 17:29:15 -08:00
Erick Friis
ba5cba04ff infra: get min versions (#27896) 2024-11-04 23:46:13 +00:00
Bagatur
6973f7214f docs: sidebar capitalization (#27894) 2024-11-04 22:09:32 +00:00
Stéphane Philippart
4b8cd7a09a community: Use new OVHcloud batch embedding (#26209)
- **Description:** change to perform batch embedding server-side instead of
client-side
- **Twitter handle:** @wildagsx

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-11-04 16:40:30 -05:00
Erick Friis
a54f390090 infra: fix prev tag output (#27892) 2024-11-04 12:46:23 -08:00
Erick Friis
75f80c2910 infra: fix prev tag condition (#27891) 2024-11-04 12:42:22 -08:00
Ofer Mendelevitch
d7c39e6dbb community: update Vectara integration (#27869)

- **Description:** Updated Vectara integration
- **Issue:** refreshed descriptions across all demos and added the UDF
reranker
- **Dependencies:** None
- **Twitter handle:** @ofermend

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-04 20:40:39 +00:00
Erick Friis
14a71a6e77 infra: fix prev tag calculation (#27890) 2024-11-04 12:38:39 -08:00
Daniel Vu Dao
5745f3bf78 docs: Update messages.mdx (#27856)
### Description
Updates phrasing for the header of the `Messages` section.

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-04 20:36:31 +00:00
sifatj
e02a5ee03e docs: Update VectorStore as_retriever method url in qa_chat_history_how_to.ipynb (#27844)
**Description**: Update VectorStore `as_retriever` method api reference
url in `qa_chat_history_how_to.ipynb`

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-04 20:34:50 +00:00
sifatj
dd1711f3c2 docs: Update max_marginal_relevance_search api reference url in multi_vector.ipynb (#27843)
**Description**: Update VectorStore `max_marginal_relevance_search` api
reference url in `multi_vector.ipynb`

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-04 20:31:36 +00:00
sifatj
aa1f46a03a docs: Update VectorStore .as_retriever method url in vectorstore_retriever.ipynb (#27842)
**Description**: Update VectorStore `.as_retriever` method url in
`vectorstore_retriever.ipynb`

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-04 20:28:11 +00:00
Eric Pinzur
8eb38622a6 community: fixed bug in GraphVectorStoreRetriever (#27846)
Description:

This fixes an issue that was mistakenly introduced in
https://github.com/langchain-ai/langchain/pull/27253. The issue
currently exists only in `langchain-community==0.3.4`.

Test cases were added to prevent this issue in the future.

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-04 20:27:17 +00:00
sifatj
eecf95df9b docs: Update VectorStore api reference url in rag.ipynb (#27841)
**Description**: Update VectorStore api reference url in `rag.ipynb`

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-04 20:27:03 +00:00
sifatj
50563400fb docs: Update broken vectorstore urls in retrievers.ipynb (#27838)
**Description**: Update outdated `VectorStore` api reference urls in
`retrievers.ipynb`

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-04 20:26:03 +00:00
Bagatur
dfa83531ad qdrant,nomic[minor]: bump core deps (#27849) 2024-11-04 20:19:50 +00:00
Erick Friis
4e5cc84d40 infra: release tag compute (#27836) 2024-11-04 12:16:51 -08:00
Rashmi Pawar
f86a09f82c Add nvidia as provider for embedding, llm (#27810)
Documentation: Add NVIDIA as integration provider

cc: @mattf @dglogo

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-04 19:45:51 +00:00
Erick Friis
0c62684ce1 Revert "infra: add neo4j to package list" (#27887)
Reverts langchain-ai/langchain#27833

Wait for release
2024-11-04 18:18:38 +00:00
Erick Friis
bcf499df16 infra: add neo4j to package list (#27833) 2024-11-04 09:24:04 -08:00
Duy Huynh
a487ec47f4 community: set default output_token_limit value for PowerBIToolkit to fix validation error (#26308)
### Description:
This PR sets a default value of `output_token_limit = 4000` for the
`PowerBIToolkit` to fix the unintentional validation error.

### Problem:
When attempting to run a code snippet from [Langchain's PowerBI toolkit
documentation](https://python.langchain.com/v0.1/docs/integrations/toolkits/powerbi/)
to interact with a `PowerBIDataset`, the following error occurs:

```
pydantic.v1.error_wrappers.ValidationError: 1 validation error for QueryPowerBITool
output_token_limit
  none is not an allowed value (type=type_error.none.not_allowed)
```

### Root Cause:
The issue arises because when creating a `QueryPowerBITool`, the
`output_token_limit` parameter is unintentionally set to `None`, which
is the current default for `PowerBIToolkit`. However, `QueryPowerBITool`
expects a default value of `4000` for `output_token_limit`. This
unintended override causes the error.


17659ca2cd/libs/community/langchain_community/agent_toolkits/powerbi/toolkit.py (L63)

17659ca2cd/libs/community/langchain_community/agent_toolkits/powerbi/toolkit.py (L72-L79)

17659ca2cd/libs/community/langchain_community/tools/powerbi/tool.py (L39)

### Solution:
To resolve this, the default value of `output_token_limit` is now
explicitly set to `4000` in `PowerBIToolkit` to prevent the accidental
assignment of `None`.
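
A minimal reproduction of the mechanism (a sketch with illustrative class names; the real classes live in langchain_community):

```python
from pydantic import BaseModel

class QueryPowerBIToolSketch(BaseModel):
    output_token_limit: int = 4000  # the tool's own default

class PowerBIToolkitSketch(BaseModel):
    # Before: Optional[int] = None, which then overrode the tool's default
    # with None and failed validation. After: an explicit 4000, so the
    # tool never receives None.
    output_token_limit: int = 4000

QueryPowerBIToolSketch(output_token_limit=PowerBIToolkitSketch().output_token_limit)
```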

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-11-04 14:34:27 +00:00
Dobiichi-Origami
f7ced5b211 community: read function call from tool_calls for Qianfan (#26208)
I added one more `elif` to read the tool call message from `tool_calls`.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-11-04 14:33:32 +00:00
ono-hiroki
b7d549ae88 docs: fix undefined 'data' variable in document_loader_csv.ipynb (#27872)
**Description:** 
This PR addresses an issue in the CSVLoader example where `data` is not
defined, causing a `NameError`. The line `data = loader.load()` is added
to correctly assign the output of `loader.load()` to the `data` variable.
2024-11-04 14:10:56 +00:00
Bagatur
3b0b7cfb74 chroma[minor]: release 0.2.0 (#27840) 2024-11-01 18:12:00 -07:00
Jun Yamog
830cad7bc0 core: fix CommaSeparatedListOutputParser to handle columns that may contain commas in it (#26365)
- **Description:**
Currently `CommaSeparatedListOutputParser` can't handle strings that may
contain commas within a column; it parses any comma as the delimiter.
Ex.
"foo, foo2", "bar", "baz"

It will create 4 columns: "foo", "foo2", "bar", "baz"

This should be 3 columns:

"foo, foo2", "bar", "baz"

- **Dependencies:**
Added 2 additional imports, but they are built-in Python packages:

import csv
from io import StringIO

- **Twitter handle:** @jkyamog

- **Add tests and docs**: added a simple unit test, `test_multiple_items_with_comma`.
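
The approach, sketched with the two stdlib imports mentioned above (a sketch of the idea, not the parser's exact code):

```python
import csv
from io import StringIO

def parse_comma_separated_list(text: str) -> list:
    # csv honors quoting, so "foo, foo2" survives as a single item
    reader = csv.reader(StringIO(text), skipinitialspace=True)
    return [item for row in reader for item in row]

print(parse_comma_separated_list('"foo, foo2", bar, baz'))
# ['foo, foo2', 'bar', 'baz']
```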

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-11-01 22:42:24 +00:00
Erick Friis
9fedb04dd3 docs: INVALID_CHAT_HISTORY redirect (#27845) 2024-11-01 21:35:11 +00:00
Erick Friis
03a3670a5e infra: remove some special cases (#27839) 2024-11-01 21:13:43 +00:00
Bagatur
002e1c9055 airbyte: remove from master (#27837) 2024-11-01 13:59:34 -07:00
Bagatur
ee63d21915 many: use core 0.3.15 (#27834) 2024-11-01 20:35:55 +00:00
Prithvi Kannan
c3c638cd7b docs: Reference new databricks-langchain package (#27828)

Update references in Databricks integration page to reference our new
partner package databricks-langchain
https://github.com/databricks/databricks-ai-bridge/tree/main/integrations/langchain


---------

Signed-off-by: Prithvi Kannan <prithvi.kannan@databricks.com>
2024-11-01 10:21:19 -07:00
sifatj
33d445550e docs: update VectorStore api reference url in retrievers.ipynb (#27814)
**Description:** Update outdated `VectorStore` api reference url in
Vector store subsection of `retrievers.ipynb`
2024-11-01 15:44:26 +00:00
sifatj
9a4a630e40 docs: Update Retrievers and Runnable links in Retrievers subsection of retrievers.ipynb (#27815)
**Description:** Update outdated links for `Retrievers` and `Runnable`
in Retrievers subsection of `retrievers.ipynb`
2024-11-01 15:42:30 +00:00
Zapiron
b0dfff4cd5 Fixed broken link for TokenTextSplitter (#27824)
Fixed the broken redirect link for `TokenTextSplitter` section
2024-11-01 11:32:07 -04:00
William FH
b4cb2089a2 langchain[patch]: Add warning in react agent (#26980) 2024-10-31 22:29:34 +00:00
Eugene Yurtsev
2f6254605d docs: fix more links (#27809)
Fix more broken links
2024-10-31 17:15:46 -04:00
Ant White
e3ea365725 core: use friendlier names for duplicated nodes in mermaid output (#27747)

- **Description:** When generating the Mermaid visualization of a chain,
if the chain had multiple nodes of the same type, the `reid` function
would replace their names with the UUID `node_id`. This made the generated
graph difficult to understand. This change deduplicates the nodes in a
chain by appending an index to their names.
- **Issue:** None
- **Discussion:**
https://github.com/langchain-ai/langchain/discussions/27714
- **Dependencies:** None

- **Add tests and docs**: currently this functionality is not covered by
unit tests; happy to add tests if you'd like.



# Example Code:
```python
from langchain_core.runnables import RunnablePassthrough

def fake_llm(prompt: str) -> str: # Fake LLM for the example
    return "completion"

runnable = {
    'llm1':  fake_llm,
    'llm2':  fake_llm,
} | RunnablePassthrough.assign(
    total_chars=lambda inputs: len(inputs['llm1'] + inputs['llm2'])
)

print(runnable.get_graph().draw_mermaid(with_styles=False))
```

# Before
```mermaid
graph TD;
	Parallel_llm1_llm2_Input --> 0b01139db5ed4587ad37964e3a40c0ec;
	0b01139db5ed4587ad37964e3a40c0ec --> Parallel_llm1_llm2_Output;
	Parallel_llm1_llm2_Input --> a98d4b56bd294156a651230b9293347f;
	a98d4b56bd294156a651230b9293347f --> Parallel_llm1_llm2_Output;
	Parallel_total_chars_Input --> Lambda;
	Lambda --> Parallel_total_chars_Output;
	Parallel_total_chars_Input --> Passthrough;
	Passthrough --> Parallel_total_chars_Output;
	Parallel_llm1_llm2_Output --> Parallel_total_chars_Input;
```

# After
```mermaid
graph TD;
	Parallel_llm1_llm2_Input --> fake_llm_1;
	fake_llm_1 --> Parallel_llm1_llm2_Output;
	Parallel_llm1_llm2_Input --> fake_llm_2;
	fake_llm_2 --> Parallel_llm1_llm2_Output;
	Parallel_total_chars_Input --> Lambda;
	Lambda --> Parallel_total_chars_Output;
	Parallel_total_chars_Input --> Passthrough;
	Passthrough --> Parallel_total_chars_Output;
	Parallel_llm1_llm2_Output --> Parallel_total_chars_Input;
```
2024-10-31 16:52:00 -04:00
Eugene Yurtsev
71f590de50 docs: fix more broken links (#27806)
Fix some broken links
2024-10-31 19:46:39 +00:00
Neli Hateva
c572d663f9 docs: Ontotext GraphDB QA Chain Update Documentation (Fix versions of libraries) (#27783)
- **Description:** Update versions of libraries in the Ontotext GraphDB
QA Chain Documentation
 - **Issue:** N/A
 - **Dependencies:** N/A
 - **Twitter handle:** @OntotextGraphDB
2024-10-31 15:23:16 -04:00
L
8ef0df3539 feat: add batch request support for text-embedding-v3 model (#26375)
PR title: "langchain: add batch request support for text-embedding-v3 model"

PR message:

- **Description:** This PR introduces batch request support for the
text-embedding-v3 model within LangChain. The new functionality allows
users to process multiple text inputs in a single request, improving
efficiency and performance for high-volume applications.
- **Issue:** This PR addresses #<issue_number> (if applicable).
- **Dependencies:** No new external dependencies are required for this change.
- **Twitter handle:** If announced on Twitter, please mention me at @yourhandle.

Add tests and docs:

1. Added unit tests to cover the batch request functionality, ensuring
it operates without requiring network access.
2. Included an example notebook demonstrating the batch request feature,
located in docs/docs/integrations.

Lint and test: All required formatting and linting checks have been
performed using make format and make lint. The changes have been
verified with make test to ensure compatibility.

Additional notes:

- The changes are fully backwards compatible.
- No modifications were made to pyproject.toml, ensuring no new
dependencies were added.
- The update only affects the langchain package and does not involve
other packages.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-10-31 18:56:22 +00:00
putao520
2545fbe709 fix "WARNING: Received notification from DBMS server: {severity: WARN… (#27112)
…ING} {code: Neo.ClientNotification.Statement.FeatureDeprecationWarning}
{category: DEPRECATION} {title: This feature is deprecated and will be
removed in future versions.} {description: CALL subquery without a
variable scope clause is now deprecated.}" — this PR fixes that warning.


Co-authored-by: putao520 <putao520@putao282.com>
2024-10-31 18:47:25 +00:00
Ankan Mahapatra
905f43377b Update word_document.py | Fixed metadata["source"] for web paths (#27220)
The metadata["source"] value for web paths was being set to the
temporary path (/tmp).

Fixed it by creating a new variable, `self.original_file_path`, which
stores the original path.
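
A minimal illustration of the fix's shape (an illustrative class; the real change is in word_document.py):

```python
import os
import tempfile

class WebDocLoaderSketch:
    def __init__(self, web_path: str):
        self.original_file_path = web_path  # preserved for metadata
        fd, self.temp_file_path = tempfile.mkstemp()  # where the download lands
        os.close(fd)

    def metadata(self) -> dict:
        # report the original web path, not the /tmp copy
        return {"source": self.original_file_path}
```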

2024-10-31 18:37:41 +00:00
Daniel Birn
389771ccc0 community: fix @embeddingKey in azure cosmos db no sql (#27377)
I kept this PR as small as the change itself.

**Description:** fixes a fatal syntax bug in `AzureCosmosDBNoSqlVectorSearch`
**Issue:** #27269 #25468
2024-10-31 18:36:02 +00:00
1067 changed files with 8302 additions and 43401 deletions

.github/CODEOWNERS (new file, +2)

```diff
@@ -0,0 +1,2 @@
+/.github/ @efriis @baskaryan @ccurme
+/libs/packages.yml @efriis
```

```diff
@@ -1,7 +1,7 @@
 Thank you for contributing to LangChain!
 
 - [ ] **PR title**: "package: description"
-  - Where "package" is whichever of langchain, community, core, etc. is being modified. Use "docs: ..." for purely docs changes, "templates: ..." for template changes, "infra: ..." for CI changes.
+  - Where "package" is whichever of langchain, community, core, etc. is being modified. Use "docs: ..." for purely docs changes, "infra: ..." for CI changes.
   - Example: "community: add foobar LLM"
```

```diff
@@ -307,7 +307,7 @@ if __name__ == "__main__":
                 f"Unknown lib: {file}. check_diff.py likely needs "
                 "an update for this new library!"
             )
-    elif any(file.startswith(p) for p in ["docs/", "templates/", "cookbook/"]):
+    elif any(file.startswith(p) for p in ["docs/", "cookbook/"]):
         if file.startswith("docs/"):
             docs_edited = True
         dirs_to_run["lint"].add(".")
```

View File

@@ -7,12 +7,17 @@ else:
# for python 3.10 and below, which doesnt have stdlib tomllib
import tomli as tomllib
from packaging.version import parse as parse_version
from packaging.specifiers import SpecifierSet
from packaging.version import Version
import requests
from packaging.version import parse
from typing import List
import re
MIN_VERSION_LIBS = [
"langchain-core",
"langchain-community",
@@ -31,29 +36,61 @@ SKIP_IF_PULL_REQUEST = [
]
def get_min_version(version: str) -> str:
# base regex for x.x.x with cases for rc/post/etc
# valid strings: https://peps.python.org/pep-0440/#public-version-identifiers
vstring = r"\d+(?:\.\d+){0,2}(?:(?:a|b|rc|\.post|\.dev)\d+)?"
# case ^x.x.x
_match = re.match(f"^\\^({vstring})$", version)
if _match:
return _match.group(1)
def get_pypi_versions(package_name: str) -> List[str]:
"""
Fetch all available versions for a package from PyPI.
# case >=x.x.x,<y.y.y
_match = re.match(f"^>=({vstring}),<({vstring})$", version)
if _match:
_min = _match.group(1)
_max = _match.group(2)
assert parse_version(_min) < parse_version(_max)
return _min
Args:
package_name (str): Name of the package
# case x.x.x
_match = re.match(f"^({vstring})$", version)
if _match:
return _match.group(1)
Returns:
List[str]: List of all available versions
raise ValueError(f"Unrecognized version format: {version}")
Raises:
requests.exceptions.RequestException: If PyPI API request fails
KeyError: If package not found or response format unexpected
"""
pypi_url = f"https://pypi.org/pypi/{package_name}/json"
response = requests.get(pypi_url)
response.raise_for_status()
return list(response.json()["releases"].keys())
def get_minimum_version(package_name: str, spec_string: str) -> Optional[str]:
"""
Find the minimum published version that satisfies the given constraints.
Args:
package_name (str): Name of the package
spec_string (str): Version specification string (e.g., ">=0.2.43,<0.4.0,!=0.3.0")
Returns:
Optional[str]: Minimum compatible version or None if no compatible version found
"""
# rewrite occurrences of ^0.0.z to 0.0.z (can be anywhere in constraint string)
spec_string = re.sub(r"\^0\.0\.(\d+)", r"0.0.\1", spec_string)
# rewrite occurrences of ^0.y.z to >=0.y.z,<0.y+1 (can be anywhere in constraint string)
for y in range(1, 10):
spec_string = re.sub(rf"\^0\.{y}\.(\d+)", rf">=0.{y}.\1,<0.{y+1}", spec_string)
# rewrite occurrences of ^x.y.z to >=x.y.z,<x+1.0.0 (can be anywhere in constraint string)
for x in range(1, 10):
spec_string = re.sub(
rf"\^{x}\.(\d+)\.(\d+)", rf">={x}.\1.\2,<{x+1}", spec_string
)
spec_set = SpecifierSet(spec_string)
all_versions = get_pypi_versions(package_name)
valid_versions = []
for version_str in all_versions:
try:
version = parse(version_str)
if spec_set.contains(version):
valid_versions.append(version)
except ValueError:
continue
return str(min(valid_versions)) if valid_versions else None
def get_min_version_from_toml(
@@ -96,7 +133,7 @@ def get_min_version_from_toml(
][0]["version"]
# Use parse_version to get the minimum supported version from version_string
min_version = get_min_version(version_string)
min_version = get_minimum_version(lib, version_string)
# Store the minimum version in the min_versions dictionary
min_versions[lib] = min_version
@@ -112,6 +149,20 @@ def check_python_version(version_string, constraint_string):
    :param constraint_string: A string representing the package's Python version constraints (e.g. ">=3.6, <4.0").
    :return: True if the version matches the constraints, False otherwise.
    """
    # rewrite occurrences of ^0.0.z to 0.0.z (can be anywhere in constraint string)
    constraint_string = re.sub(r"\^0\.0\.(\d+)", r"0.0.\1", constraint_string)
    # rewrite occurrences of ^0.y.z to >=0.y.z,<0.y+1.0 (can be anywhere in constraint string)
    for y in range(1, 10):
        constraint_string = re.sub(
            rf"\^0\.{y}\.(\d+)", rf">=0.{y}.\1,<0.{y+1}.0", constraint_string
        )
    # rewrite occurrences of ^x.0.z to >=x.0.z,<x+1.0.0 (can be anywhere in constraint string)
    for x in range(1, 10):
        constraint_string = re.sub(
            rf"\^{x}\.0\.(\d+)", rf">={x}.0.\1,<{x+1}.0.0", constraint_string
        )
    try:
        version = Version(version_string)
        constraints = SpecifierSet(constraint_string)
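As a quick illustration of the `Version`/`SpecifierSet` check that `check_python_version` builds up to (the values below are hypothetical):

```python
# Hypothetical check: does this interpreter version satisfy the constraint?
from packaging.specifiers import SpecifierSet
from packaging.version import Version

version = Version("3.11.2")
constraints = SpecifierSet(">=3.9,<4.0")
print(constraints.contains(version))  # -> True
```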

View File

@@ -41,12 +41,6 @@ jobs:
        shell: bash
        run: poetry run pip install "boto3<2" "google-cloud-aiplatform<2"
      - name: 'Authenticate to Google Cloud'
        id: 'auth'
        uses: google-github-actions/auth@v2
        with:
          credentials_json: '${{ secrets.GOOGLE_CREDENTIALS }}'
      - name: Run integration tests
        shell: bash
        env:
@@ -81,7 +75,6 @@ jobs:
          ES_URL: ${{ secrets.ES_URL }}
          ES_CLOUD_ID: ${{ secrets.ES_CLOUD_ID }}
          ES_API_KEY: ${{ secrets.ES_API_KEY }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # for airbyte
          MONGODB_ATLAS_URI: ${{ secrets.MONGODB_ATLAS_URI }}
          VOYAGE_API_KEY: ${{ secrets.VOYAGE_API_KEY }}
          COHERE_API_KEY: ${{ secrets.COHERE_API_KEY }}

View File

@@ -95,9 +95,30 @@ jobs:
          PKG_NAME: ${{ needs.build.outputs.pkg-name }}
          VERSION: ${{ needs.build.outputs.version }}
        run: |
          REGEX="^$PKG_NAME==\\d+\\.\\d+\\.\\d+\$"
          echo $REGEX
          PREV_TAG=$(git tag --sort=-creatordate | grep -P $REGEX || true | head -1)
          PREV_TAG="$PKG_NAME==${VERSION%.*}.$(( ${VERSION##*.} - 1 ))"; [[ "${VERSION##*.}" -eq 0 ]] && PREV_TAG=""
          # backup case if releasing e.g. 0.3.0, looks up last release
          # note if last release (chronologically) was e.g. 0.1.47 it will get
          # that instead of the last 0.2 release
          if [ -z "$PREV_TAG" ]; then
            REGEX="^$PKG_NAME==\\d+\\.\\d+\\.\\d+\$"
            echo $REGEX
            PREV_TAG=$(git tag --sort=-creatordate | (grep -P $REGEX || true) | head -1)
          fi
          # if PREV_TAG is empty, let it be empty
          if [ -z "$PREV_TAG" ]; then
            echo "No previous tag found - first release"
          else
            # confirm prev-tag actually exists in git repo with git tag
            GIT_TAG_RESULT=$(git tag -l "$PREV_TAG")
            if [ -z "$GIT_TAG_RESULT" ]; then
              echo "Previous tag $PREV_TAG not found in git repo"
              exit 1
            fi
          fi
          TAG="${PKG_NAME}==${VERSION}"
          if [ "$TAG" == "$PREV_TAG" ]; then
            echo "No new version to release"
@@ -231,7 +252,7 @@ jobs:
        working-directory: ${{ inputs.working-directory }}
        id: min-version
        run: |
          poetry run pip install packaging
          poetry run pip install packaging requests
          python_version="$(poetry run python --version | awk '{print $2}')"
          min_versions="$(poetry run python $GITHUB_WORKSPACE/.github/scripts/get_min_versions.py pyproject.toml release $python_version)"
          echo "min-versions=$min_versions" >> "$GITHUB_OUTPUT"
@@ -246,12 +267,6 @@ jobs:
          make tests
        working-directory: ${{ inputs.working-directory }}
      - name: 'Authenticate to Google Cloud'
        id: 'auth'
        uses: google-github-actions/auth@v2
        with:
          credentials_json: '${{ secrets.GOOGLE_CREDENTIALS }}'
      - name: Import integration test dependencies
        run: poetry install --with test,test_integration
        working-directory: ${{ inputs.working-directory }}
@@ -289,7 +304,6 @@ jobs:
          ES_URL: ${{ secrets.ES_URL }}
          ES_CLOUD_ID: ${{ secrets.ES_CLOUD_ID }}
          ES_API_KEY: ${{ secrets.ES_API_KEY }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # for airbyte
          MONGODB_ATLAS_URI: ${{ secrets.MONGODB_ATLAS_URI }}
          VOYAGE_API_KEY: ${{ secrets.VOYAGE_API_KEY }}
          UPSTAGE_API_KEY: ${{ secrets.UPSTAGE_API_KEY }}

View File

@@ -47,7 +47,7 @@ jobs:
        id: min-version
        shell: bash
        run: |
          poetry run pip install packaging tomli
          poetry run pip install packaging tomli requests
          python_version="$(poetry run python --version | awk '{print $2}')"
          min_versions="$(poetry run python $GITHUB_WORKSPACE/.github/scripts/get_min_versions.py pyproject.toml pull_request $python_version)"
          echo "min-versions=$min_versions" >> "$GITHUB_OUTPUT"

View File

@@ -72,9 +72,7 @@ jobs:
      - name: Install dependencies
        working-directory: langchain
        run: |
          # skip airbyte due to pandas dependency issue
          python -m uv pip install $(ls ./libs/partners | grep -vE "airbyte" | xargs -I {} echo "./libs/partners/{}")
          python -m uv pip install $(ls ./libs/partners | xargs -I {} echo "./libs/partners/{}")
          python -m uv pip install libs/core libs/langchain libs/text-splitters libs/community libs/experimental
          python -m uv pip install -r docs/api_reference/requirements.txt

View File

@@ -31,7 +31,7 @@ jobs:
        uses: Ana06/get-changed-files@v2.2.0
      - id: set-matrix
        run: |
          python -m pip install packaging
          python -m pip install packaging requests
          python .github/scripts/check_diff.py ${{ steps.files.outputs.all }} >> $GITHUB_OUTPUT
    outputs:
      lint: ${{ steps.set-matrix.outputs.lint }}
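For context on the `>> $GITHUB_OUTPUT` redirection: anything the script prints in `key=value` form becomes a step output readable as `steps.set-matrix.outputs.<key>`. A hypothetical sketch of how a script like `check_diff.py` could emit the `lint` matrix (the real script's logic is more involved):

```python
# Hypothetical output emission; directory names are placeholders.
import json

dirs_to_lint = ["libs/core", "libs/community"]  # pretend these changed
print(f"lint={json.dumps(dirs_to_lint)}")  # the shell appends stdout to $GITHUB_OUTPUT
```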

View File

@@ -66,12 +66,12 @@ spell_fix:
## lint: Run linting on the project.
lint lint_package lint_tests:
	poetry run ruff check docs templates cookbook
	poetry run ruff format docs templates cookbook --diff
	poetry run ruff check --select I docs templates cookbook
	git grep 'from langchain import' docs/docs templates cookbook | grep -vE 'from langchain import (hub)' && exit 1 || exit 0
	poetry run ruff check docs cookbook
	poetry run ruff format docs cookbook --diff
	poetry run ruff check --select I docs cookbook
	git grep 'from langchain import' docs/docs cookbook | grep -vE 'from langchain import (hub)' && exit 1 || exit 0

## format: Format the project files.
format format_diff:
	poetry run ruff format docs templates cookbook
	poetry run ruff check --select I --fix docs templates cookbook
	poetry run ruff format docs cookbook
	poetry run ruff check --select I --fix docs cookbook

View File

@@ -59,7 +59,8 @@ For these applications, LangChain simplifies the entire application lifecycle:
- **[LangGraph Cloud](https://langchain-ai.github.io/langgraph/cloud/)**: Turn your LangGraph applications into production-ready APIs and Assistants.
![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](docs/static/svg/langchain_stack_062024.svg "LangChain Architecture Overview")
![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](docs/static/svg/langchain_stack_112024.svg#gh-light-mode-only "LangChain Architecture Overview")
![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](docs/static/svg/langchain_stack_112024_dark.svg#gh-dark-mode-only "LangChain Architecture Overview")
## 🧱 What can you build with LangChain?

View File

@@ -62,4 +62,5 @@ Notebook | Description
[wikibase_agent.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/wikibase_agent.ipynb) | Create a simple Wikibase agent that utilizes SPARQL generation, with testing done on http://wikidata.org.
[oracleai_demo.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/oracleai_demo.ipynb) | This guide outlines how to utilize Oracle AI Vector Search alongside LangChain for an end-to-end RAG pipeline, providing step-by-step examples. The process includes loading documents from various sources using OracleDocLoader, summarizing them either within or outside the database with OracleSummary, and generating embeddings similarly through OracleEmbeddings. It also covers chunking documents according to specific requirements using Advanced Oracle Capabilities from OracleTextSplitter, and finally, storing and indexing these documents in a Vector Store for querying with OracleVS.
[rag-locally-on-intel-cpu.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/rag-locally-on-intel-cpu.ipynb) | Perform Retrieval-Augmented Generation (RAG) on locally downloaded open-source models using LangChain and open-source tools, and execute it on an Intel Xeon CPU. Shows an example of applying RAG to the Llama 2 model, enabling it to answer queries related to Intel's Q1 2024 earnings release.
[visual_RAG_vdms.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/visual_RAG_vdms.ipynb) | Performs Visual Retrieval-Augmented-Generation (RAG) using videos and scene descriptions generated by open source models.
[contextual_rag.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/contextual_rag.ipynb) | Performs contextual retrieval-augmented generation (RAG) prepending chunk-specific explanatory context to each chunk before embedding.

File diff suppressed because it is too large

View File

@@ -530,7 +530,6 @@ def _out_file_path(package_name: str) -> Path:
def _build_index(dirs: List[str]) -> None:
    custom_names = {
        "airbyte": "Airbyte",
        "aws": "AWS",
        "ai21": "AI21",
        "ibm": "IBM",

View File

File diff suppressed because one or more lines are too long

View File

File diff suppressed because one or more lines are too long

View File

File diff suppressed because one or more lines are too long

View File

File diff suppressed because one or more lines are too long

View File

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

File diff suppressed because one or more lines are too long

View File

File diff suppressed because one or more lines are too long

View File

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File


File diff suppressed because one or more lines are too long

View File

@@ -8,8 +8,8 @@ LangChain is a framework that consists of a number of packages.
<ThemedImage
alt="Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers."
sources={{
-light: useBaseUrl('/svg/langchain_stack_062024.svg'),
-dark: useBaseUrl('/svg/langchain_stack_062024_dark.svg'),
+light: useBaseUrl('/svg/langchain_stack_112024.svg'),
+dark: useBaseUrl('/svg/langchain_stack_112024_dark.svg'),
}}
title="LangChain Framework Overview"
style={{ width: "100%" }}

View File

@@ -73,7 +73,7 @@ in certain scenarios.
If you are experiencing issues with streaming, callbacks or tracing in async code and are using Python 3.9 or 3.10, this is a likely cause.
-Please read [Propagation RunnableConfig](/docs/concepts/runnables#propagation-RunnableConfig) for more details to learn how to propagate the `RunnableConfig` down the call chain manually (or upgrade to Python 3.11 where this is no longer an issue).
+Please read [Propagation RunnableConfig](/docs/concepts/runnables/#propagation-of-runnableconfig) for more details to learn how to propagate the `RunnableConfig` down the call chain manually (or upgrade to Python 3.11 where this is no longer an issue).
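To make the manual-propagation pattern concrete, here is a minimal sketch; the names `reverse` and `outer` are illustrative assumptions, not from the changed docs:

```python
# Hypothetical example: explicitly forwarding RunnableConfig so that
# callbacks and tracing reach sub-calls on Python 3.9/3.10.
from langchain_core.runnables import RunnableConfig, RunnableLambda

reverse = RunnableLambda(lambda text: text[::-1])

async def outer(text: str, config: RunnableConfig) -> str:
    # Without the explicit config=..., the sub-call would lose tracing
    # and callbacks on Python < 3.11.
    return await reverse.ainvoke(text, config=config)

chain = RunnableLambda(outer)
# await chain.ainvoke("hello", config={"tags": ["demo"]})  # inside async code
```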
## How to use in ipython and jupyter notebooks

View File

@@ -24,7 +24,7 @@ So a full conversation often involves a combination of two patterns of alternati
## Managing chat history
-Since chat models have a maximum limit on input size, it's important to manage chat history and trim it as needed to avoid exceeding the [context window](/docs/concepts/chat_models#context_window).
+Since chat models have a maximum limit on input size, it's important to manage chat history and trim it as needed to avoid exceeding the [context window](/docs/concepts/chat_models/#context-window).
While processing chat history, it's essential to preserve a correct conversation structure.

View File

@@ -8,7 +8,7 @@ Modern LLMs are typically accessed through a chat model interface that takes a l
The newest generation of chat models offer additional capabilities:
-* [Tool calling](/docs/concepts#tool-calling): Many popular chat models offer a native [tool calling](/docs/concepts#tool-calling) API. This API allows developers to build rich applications that enable AI to interact with external services, APIs, and databases. Tool calling can also be used to extract structured information from unstructured data and perform various other tasks.
+* [Tool calling](/docs/concepts/tool_calling): Many popular chat models offer a native [tool calling](/docs/concepts/tool_calling) API. This API allows developers to build rich applications that enable LLMs to interact with external services, APIs, and databases. Tool calling can also be used to extract structured information from unstructured data and perform various other tasks.
* [Structured output](/docs/concepts/structured_outputs): A technique to make a chat model respond in a structured format, such as JSON that matches a given schema.
* [Multimodality](/docs/concepts/multimodality): The ability to work with data other than text; for example, images, audio, and video.
@@ -18,11 +18,11 @@ LangChain provides a consistent interface for working with chat models from diff
* Integrations with many chat model providers (e.g., Anthropic, OpenAI, Ollama, Microsoft Azure, Google Vertex, Amazon Bedrock, Hugging Face, Cohere, Groq). Please see [chat model integrations](/docs/integrations/chat/) for an up-to-date list of supported models.
* Use either LangChain's [messages](/docs/concepts/messages) format or OpenAI format.
-* Standard [tool calling API](/docs/concepts#tool-calling): standard interface for binding tools to models, accessing tool call requests made by models, and sending tool results back to the model.
-* Standard API for structuring outputs (/docs/concepts/structured_outputs) via the `with_structured_output` method.
-* Provides support for [async programming](/docs/concepts/async), [efficient batching](/docs/concepts/runnables#batch), [a rich streaming API](/docs/concepts/streaming).
+* Standard [tool calling API](/docs/concepts/tool_calling): standard interface for binding tools to models, accessing tool call requests made by models, and sending tool results back to the model.
+* Standard API for [structuring outputs](/docs/concepts/structured_outputs/#structured-output-method) via the `with_structured_output` method.
+* Provides support for [async programming](/docs/concepts/async), [efficient batching](/docs/concepts/runnables/#optimized-parallel-execution-batch), [a rich streaming API](/docs/concepts/streaming).
* Integration with [LangSmith](https://docs.smith.langchain.com) for monitoring and debugging production-grade applications based on LLMs.
-* Additional features like standardized [token usage](/docs/concepts/messages#token_usage), [rate limiting](#rate-limiting), [caching](#cache) and more.
+* Additional features like standardized [token usage](/docs/concepts/messages/#aimessage), [rate limiting](#rate-limiting), [caching](#caching) and more.
## Integrations
@@ -44,7 +44,7 @@ Models that do **not** include the prefix "Chat" in their name or include "LLM"
## Interface
-LangChain chat models implement the [BaseChatModel](https://python.langchain.com/api_reference/core/language_models/langchain_core.language_models.chat_models.BaseChatModel.html) interface. Because [BaseChatModel] also implements the [Runnable Interface](/docs/concepts/runnables), chat models support a [standard streaming interface](/docs/concepts/streaming), [async programming](/docs/concepts/async), optimized [batching](/docs/concepts/runnables#batch), and more. Please see the [Runnable Interface](/docs/concepts/runnables) for more details.
+LangChain chat models implement the [BaseChatModel](https://python.langchain.com/api_reference/core/language_models/langchain_core.language_models.chat_models.BaseChatModel.html) interface. Because `BaseChatModel` also implements the [Runnable Interface](/docs/concepts/runnables), chat models support a [standard streaming interface](/docs/concepts/streaming), [async programming](/docs/concepts/async), optimized [batching](/docs/concepts/runnables/#optimized-parallel-execution-batch), and more. Please see the [Runnable Interface](/docs/concepts/runnables) for more details.
Many of the key methods of chat models operate on [messages](/docs/concepts/messages) as input and return messages as output.
@@ -65,7 +65,7 @@ The key methods of a chat model are:
2. **stream**: A method that allows you to stream the output of a chat model as it is generated.
3. **batch**: A method that allows you to batch multiple requests to a chat model together for more efficient processing.
4. **bind_tools**: A method that allows you to bind a tool to a chat model for use in the model's execution context.
-5. **with_structured_output**: A wrapper around the `invoke` method for models that natively support [structured output](/docs/concepts#structured_output).
+5. **with_structured_output**: A wrapper around the `invoke` method for models that natively support [structured output](/docs/concepts/structured_outputs).
Other important methods can be found in the [BaseChatModel API Reference](https://python.langchain.com/api_reference/core/language_models/langchain_core.language_models.chat_models.BaseChatModel.html).
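A brief illustration of these key methods, assuming the `langchain-openai` integration, a configured API key, and an assumed model name (any chat model integration exposes the same surface):

```python
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o-mini")  # model name is an assumption

ai_msg = model.invoke("Say hello")            # one request -> AIMessage
for chunk in model.stream("Count to three"):  # incremental AIMessageChunks
    print(chunk.content, end="")
replies = model.batch(["Hi!", "Bye!"])        # multiple requests, run efficiently
```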
@@ -104,13 +104,13 @@ ChatModels also accept other parameters that are specific to that integration. T
## Tool calling
Chat models can call [tools](/docs/concepts/tools) to perform tasks such as fetching data from a database, making API requests, or running custom code. Please
-see the [tool calling](/docs/concepts#tool-calling) guide for more information.
+see the [tool calling](/docs/concepts/tool_calling) guide for more information.
## Structured outputs
Chat models can be requested to respond in a particular format (e.g., JSON or matching a particular schema). This feature is extremely
useful for information extraction tasks. Please read more about
-the technique in the [structured outputs](/docs/concepts#structured_output) guide.
+the technique in the [structured outputs](/docs/concepts/structured_outputs) guide.
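A minimal sketch of the technique, assuming `langchain-openai` and a Pydantic schema of our own invention:

```python
from pydantic import BaseModel, Field
from langchain_openai import ChatOpenAI

class Person(BaseModel):
    """Information extracted about a person."""
    name: str = Field(description="Full name")
    age: int = Field(description="Age in years")

model = ChatOpenAI(model="gpt-4o-mini")  # assumed model
structured_model = model.with_structured_output(Person)
# structured_model.invoke("Ada Lovelace died at 36.") -> Person(name=..., age=36)
```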
## Multimodality
@@ -152,7 +152,7 @@ A semantic cache introduces a dependency on another model on the critical path o
However, there might be situations where caching chat model responses is beneficial. For example, if you have a chat model that is used to answer frequently asked questions, caching responses can help reduce the load on the model provider and improve response times.
-Please see the [how to cache chat model responses](/docs/how_to/#chat-model-caching) guide for more details.
+Please see the [how to cache chat model responses](/docs/how_to/chat_model_caching/) guide for more details.
## Related resources
@@ -162,7 +162,7 @@ Please see the [how to cache chat model responses](/docs/how_to/#chat-model-cach
### Conceptual guides
* [Messages](/docs/concepts/messages)
-* [Tool calling](/docs/concepts#tool-calling)
+* [Tool calling](/docs/concepts/tool_calling)
* [Multimodality](/docs/concepts/multimodality)
-* [Structured outputs](/docs/concepts#structured_output)
-* [Tokens](/docs/concepts/tokens)
+* [Structured outputs](/docs/concepts/structured_outputs)
+* [Tokens](/docs/concepts/tokens)

View File

@@ -45,22 +45,22 @@ The conceptual guide does not cover step-by-step instructions or specific implem
- **[AIMessageChunk](/docs/concepts/messages#aimessagechunk)**: A partial response from an AI message. Used when streaming responses from a chat model.
- **[AIMessage](/docs/concepts/messages#aimessage)**: Represents a complete response from an AI model.
- **[astream_events](/docs/concepts/chat_models#key-methods)**: Stream granular information from [LCEL](/docs/concepts/lcel) chains.
-- **[BaseTool](/docs/concepts/tools#basetool)**: The base class for all tools in LangChain.
+- **[BaseTool](/docs/concepts/tools/#tool-interface)**: The base class for all tools in LangChain.
- **[batch](/docs/concepts/runnables)**: Use to execute a Runnable with batch inputs.
-- **[bind_tools](/docs/concepts/chat_models#bind-tools)**: Allows models to interact with tools.
+- **[bind_tools](/docs/concepts/tool_calling/#tool-binding)**: Allows models to interact with tools.
- **[Caching](/docs/concepts/chat_models#caching)**: Storing results to avoid redundant calls to a chat model.
-- **[Chat models](/docs/concepts/multimodality#chat-models)**: Chat models that handle multiple data modalities.
-- **[Configurable runnables](/docs/concepts/runnables#configurable-Runnables)**: Creating configurable Runnables.
+- **[Chat models](/docs/concepts/multimodality/#multimodality-in-chat-models)**: Chat models that handle multiple data modalities.
+- **[Configurable runnables](/docs/concepts/runnables/#configurable-runnables)**: Creating configurable Runnables.
- **[Context window](/docs/concepts/chat_models#context-window)**: The maximum size of input a chat model can process.
- **[Conversation patterns](/docs/concepts/chat_history#conversation-patterns)**: Common patterns in chat interactions.
- **[Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html)**: LangChain's representation of a document.
-- **[Embedding models](/docs/concepts/multimodality#embedding-models)**: Models that generate vector embeddings for various data types.
+- **[Embedding models](/docs/concepts/multimodality/#multimodality-in-embedding-models)**: Models that generate vector embeddings for various data types.
- **[HumanMessage](/docs/concepts/messages#humanmessage)**: Represents a message from a human user.
- **[InjectedState](/docs/concepts/tools#injectedstate)**: A state injected into a tool function.
- **[InjectedStore](/docs/concepts/tools#injectedstore)**: A store that can be injected into a tool for data persistence.
- **[InjectedToolArg](/docs/concepts/tools#injectedtoolarg)**: Mechanism to inject arguments into tool functions.
- **[input and output types](/docs/concepts/runnables#input-and-output-types)**: Types used for input and output in Runnables.
-- **[Integration packages](/docs/concepts/architecture#partner-packages)**: Third-party packages that integrate with LangChain.
+- **[Integration packages](/docs/concepts/architecture/#integration-packages)**: Third-party packages that integrate with LangChain.
- **[invoke](/docs/concepts/runnables)**: A standard method to invoke a Runnable.
- **[JSON mode](/docs/concepts/structured_outputs#json-mode)**: Returning responses in JSON format.
- **[langchain-community](/docs/concepts/architecture#langchain-community)**: Community-driven components for LangChain.
@@ -70,20 +70,20 @@ The conceptual guide does not cover step-by-step instructions or specific implem
- **[langserve](/docs/concepts/architecture#langserve)**: Use to deploy LangChain Runnables as REST endpoints. Uses FastAPI. Works primarily for LangChain Runnables, does not currently integrate with LangGraph.
- **[Managing chat history](/docs/concepts/chat_history#managing-chat-history)**: Techniques to maintain and manage the chat history.
- **[OpenAI format](/docs/concepts/messages#openai-format)**: OpenAI's message format for chat models.
-- **[Propagation of RunnableConfig](/docs/concepts/runnables#propagation-RunnableConfig)**: Propagating configuration through Runnables. Read if working with python 3.9, 3.10 and async.
+- **[Propagation of RunnableConfig](/docs/concepts/runnables/#propagation-of-runnableconfig)**: Propagating configuration through Runnables. Read if working with python 3.9, 3.10 and async.
- **[rate-limiting](/docs/concepts/chat_models#rate-limiting)**: Client side rate limiting for chat models.
-- **[RemoveMessage](/docs/concepts/messages#remove-message)**: An abstraction used to remove a message from chat history, used primarily in LangGraph.
+- **[RemoveMessage](/docs/concepts/messages/#removemessage)**: An abstraction used to remove a message from chat history, used primarily in LangGraph.
- **[role](/docs/concepts/messages#role)**: Represents the role (e.g., user, assistant) of a chat message.
-- **[RunnableConfig](/docs/concepts/runnables#RunnableConfig)**: Use to pass run time information to Runnables (e.g., `run_name`, `run_id`, `tags`, `metadata`, `max_concurrency`, `recursion_limit`, `configurable`).
+- **[RunnableConfig](/docs/concepts/runnables/#runnableconfig)**: Use to pass run time information to Runnables (e.g., `run_name`, `run_id`, `tags`, `metadata`, `max_concurrency`, `recursion_limit`, `configurable`).
- **[Standard parameters for chat models](/docs/concepts/chat_models#standard-parameters)**: Parameters such as API key, `temperature`, and `max_tokens`.
- **[stream](/docs/concepts/streaming)**: Use to stream output from a Runnable or a graph.
- **[Tokenization](/docs/concepts/tokens)**: The process of converting data into tokens and vice versa.
- **[Tokens](/docs/concepts/tokens)**: The basic unit that a language model reads, processes, and generates under the hood.
- **[Tool artifacts](/docs/concepts/tools#tool-artifacts)**: Add artifacts to the output of a tool that will not be sent to the model, but will be available for downstream processing.
- **[Tool binding](/docs/concepts/tool_calling#tool-binding)**: Binding tools to models.
-- **[@tool](/docs/concepts/tools#@tool)**: Decorator for creating tools in LangChain.
+- **[@tool](/docs/concepts/tools/#create-tools-using-the-tool-decorator)**: Decorator for creating tools in LangChain.
- **[Toolkits](/docs/concepts/tools#toolkits)**: A collection of tools that can be used together.
- **[ToolMessage](/docs/concepts/messages#toolmessage)**: Represents a message that contains the results of a tool execution.
- **[Vector stores](/docs/concepts/vectorstores)**: Datastores specialized for storing and efficiently searching vector embeddings.
-- **[with_structured_output](/docs/concepts/chat_models#with-structured-output)**: A helper method for chat models that natively support [tool calling](/docs/concepts/tool_calling) to get structured output matching a given schema specified via Pydantic, JSON schema or a function.
+- **[with_structured_output](/docs/concepts/structured_outputs/#structured-output-method)**: A helper method for chat models that natively support [tool calling](/docs/concepts/tool_calling) to get structured output matching a given schema specified via Pydantic, JSON schema or a function.
- **[with_types](/docs/concepts/runnables#with_types)**: Method to overwrite the input and output types of a runnable. Useful when working with complex LCEL chains and deploying with LangServe.

View File

@@ -20,8 +20,8 @@ We often refer to a `Runnable` created using LCEL as a "chain". It's important t
LangChain optimizes the run-time execution of chains built with LCEL in a number of ways:
-- **Optimize parallel execution**: Run Runnables in parallel using [RunnableParallel](#RunnableParallel) or run multiple inputs through a given chain in parallel using the [Runnable Batch API](/docs/concepts/runnables#batch). Parallel execution can significantly reduce the latency as processing can be done in parallel instead of sequentially.
-- **Guarantee Async support**: Any chain built with LCEL can be run asynchronously using the [Runnable Async API](/docs/concepts/runnables#async-api). This can be useful when running chains in a server environment where you want to handle large number of requests concurrently.
+- **Optimize parallel execution**: Run Runnables in parallel using [RunnableParallel](#runnableparallel) or run multiple inputs through a given chain in parallel using the [Runnable Batch API](/docs/concepts/runnables/#optimized-parallel-execution-batch). Parallel execution can significantly reduce the latency as processing can be done in parallel instead of sequentially.
+- **Guarantee Async support**: Any chain built with LCEL can be run asynchronously using the [Runnable Async API](/docs/concepts/runnables/#asynchronous-support). This can be useful when running chains in a server environment where you want to handle large number of requests concurrently.
- **Simplify streaming**: LCEL chains can be streamed, allowing for incremental output as the chain is executed. LangChain can optimize the streaming of the output to minimize the time-to-first-token (time elapsed until the first chunk of output from a [chat model](/docs/concepts/chat_models) or [llm](/docs/concepts/text_llms) comes out).
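A small self-contained sketch of the first two optimizations, using stand-in lambdas rather than real chain components:

```python
from langchain_core.runnables import RunnableLambda, RunnableParallel

double = RunnableLambda(lambda x: x * 2)
square = RunnableLambda(lambda x: x * x)

# Run two Runnables side by side on the same input.
combined = RunnableParallel(double=double, square=square)
print(combined.invoke(3))        # {'double': 6, 'square': 9}

# Run several inputs through one Runnable in parallel.
print(double.batch([1, 2, 3]))   # [2, 4, 6]
```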
Other benefits include:

View File

@@ -12,7 +12,7 @@ Each message has a **role** (e.g., "user", "assistant"), **content** (e.g., text
LangChain provides a unified message format that can be used across chat models, allowing users to work with different chat models without worrying about the specific details of the message format used by each model provider.
-## What inside a message?
+## What is inside a message?
A message typically consists of the following pieces of information:

View File

@@ -15,7 +15,7 @@ This guide covers the main concepts and methods of the Runnable interface, which
The Runnable way defines a standard interface that allows a Runnable component to be:
* [Invoked](/docs/how_to/lcel_cheatsheet/#invoke-a-runnable): A single input is transformed into an output.
-* [Batched](/docs/how_to/lcel_cheatsheet/#batch-a-runnable/): Multiple inputs are efficiently transformed into outputs.
+* [Batched](/docs/how_to/lcel_cheatsheet/#batch-a-runnable): Multiple inputs are efficiently transformed into outputs.
* [Streamed](/docs/how_to/lcel_cheatsheet/#stream-a-runnable): Outputs are streamed as they are produced.
* Inspected: Schematic information about Runnable's input, output, and configuration can be accessed.
* Composed: Multiple Runnables can be composed to work together using [the LangChain Expression Language (LCEL)](/docs/concepts/lcel) to create complex pipelines.
@@ -46,7 +46,7 @@ The async versions of `abatch` and `abatch_as_completed` rely on asyncio's
:::
:::tip
-When processing a large number of inputs using `batch` or `batch_as_completed`, users may want to control the maximum number of parallel calls. This can be done by setting the `max_concurrency` attribute in the `RunnableConfig` dictionary. See the [RunnableConfig](/docs/concepts/runnables#RunnableConfig) for more information.
+When processing a large number of inputs using `batch` or `batch_as_completed`, users may want to control the maximum number of parallel calls. This can be done by setting the `max_concurrency` attribute in the `RunnableConfig` dictionary. See the [RunnableConfig](/docs/concepts/runnables/#runnableconfig) for more information.
Chat Models also have a built-in [rate limiter](/docs/concepts/chat_models#rate-limiting) that can be used to control the rate at which requests are made.
:::
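A minimal sketch of capping parallelism via `max_concurrency`; the `step` Runnable is a stand-in for a real chain component:

```python
from langchain_core.runnables import RunnableLambda

step = RunnableLambda(lambda x: x.upper())
results = step.batch(
    ["a", "b", "c", "d"],
    config={"max_concurrency": 2},  # at most two inputs processed at once
)
print(results)  # ['A', 'B', 'C', 'D']
```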
@@ -312,7 +312,7 @@ Please read the [Callbacks Conceptual Guide](/docs/concepts/callbacks) for more
:::important
If you're using Python 3.9 or 3.10 in an async environment, you must propagate
the `RunnableConfig` manually to sub-calls in some cases. Please see the
-[Propagating RunnableConfig](#propagation-of-RunnableConfig) section for more information.
+[Propagating RunnableConfig](#propagation-of-runnableconfig) section for more information.
:::
## Creating a runnable from a function

View File

@@ -141,7 +141,7 @@ See [how to pass run time values to tools](/docs/how_to/tool_runtime/) for more
You can use the `RunnableConfig` object to pass custom run time values to tools.
-If you need to access the [RunnableConfig](/docs/concepts/runnables/#RunnableConfig) object from within a tool. This can be done by using the `RunnableConfig` annotation in the tool's function signature.
+If you need to access the [RunnableConfig](/docs/concepts/runnables/#runnableconfig) object from within a tool, this can be done by using the `RunnableConfig` annotation in the tool's function signature.
```python
from langchain_core.runnables import RunnableConfig
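from langchain_core.tools import tool

# Hypothetical continuation, not part of the original diff: a tool that
# declares a RunnableConfig parameter. LangChain injects the active config
# at runtime and keeps it out of the tool's schema.
@tool
def greet(name: str, config: RunnableConfig) -> str:
    """Greet a user, reading run metadata from the injected config."""
    caller = config.get("metadata", {}).get("caller", "unknown")
    return f"Hello {name} (called by {caller})"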
@@ -160,7 +160,7 @@ The `config` will not be part of the tool's schema and will be injected at runti
:::note
You may need to access the `config` object to manually propagate it to sub-calls. This happens if you're working with python 3.9 / 3.10 in an [async](/docs/concepts/async) environment and need to manually propagate the `config` object to sub-calls.
-Please read [Propagation RunnableConfig](/docs/concepts/runnables#propagation-RunnableConfig) for more details to learn how to propagate the `RunnableConfig` down the call chain manually (or upgrade to Python 3.11 where this is no longer an issue).
+Please read [Propagation RunnableConfig](/docs/concepts/runnables/#propagation-of-runnableconfig) for more details to learn how to propagate the `RunnableConfig` down the call chain manually (or upgrade to Python 3.11 where this is no longer an issue).
:::
### InjectedState

View File

@@ -186,6 +186,6 @@ See this [how-to guide on hybrid search](/docs/how_to/hybrid/) for more details.
| Name | When to use | Description |
|-------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------|
| [Hybrid search](/docs/integrations/retrievers/pinecone_hybrid_search/) | When combining keyword-based and semantic similarity. | Hybrid search combines keyword and semantic similarity, marrying the benefits of both approaches. [Paper](https://arxiv.org/abs/2210.11934). |
-| [Maximal Marginal Relevance (MMR)](/docs/integrations/vectorstores/pinecone/#maximal-marginal-relevance-searches) | When needing to diversify search results. | MMR attempts to diversify the results of a search to avoid returning similar and redundant documents. |
+| [Maximal Marginal Relevance (MMR)](https://python.langchain.com/api_reference/pinecone/vectorstores/langchain_pinecone.vectorstores.PineconeVectorStore.html#langchain_pinecone.vectorstores.PineconeVectorStore.max_marginal_relevance_search) | When needing to diversify search results. | MMR attempts to diversify the results of a search to avoid returning similar and redundant documents. |

View File

@@ -102,7 +102,7 @@ See our video playlist on [LangSmith tracing and evaluations](https://youtube.co
LangChain offers standard interfaces for components that are central to many AI applications, which offers a few specific advantages:
- **Ease of swapping providers:** It allows you to swap out different component providers without having to change the underlying code.
-- **Advanced features:** It provides common methods for more advanced features, such as [streaming](/docs/concepts/runnables/#streaming) and [tool calling](/docs/concepts/tool_calling/).
+- **Advanced features:** It provides common methods for more advanced features, such as [streaming](/docs/concepts/streaming) and [tool calling](/docs/concepts/tool_calling/).
[LangGraph](https://langchain-ai.github.io/langgraph/concepts/high_level/) makes it possible to orchestrate complex applications (e.g., [agents](/docs/concepts/agents/)) and provide features like including [persistence](https://langchain-ai.github.io/langgraph/concepts/persistence/), [human-in-the-loop](https://langchain-ai.github.io/langgraph/concepts/human_in_the_loop/), or [memory](https://langchain-ai.github.io/langgraph/concepts/memory/).

View File

@@ -8,7 +8,7 @@ This tutorial will guide you through making a simple documentation edit, like co
---
-## Editing a Documentation Page on GitHub**
+## Editing a Documentation Page on GitHub
Sometimes you want to make a small change, like fixing a typo, and the easiest way to do this is to use GitHub's editor directly.

View File

@@ -164,7 +164,7 @@
"Under the hood, `MultiQueryRetriever` generates queries using a specific [prompt](https://python.langchain.com/api_reference/langchain/retrievers/langchain.retrievers.multi_query.MultiQueryRetriever.html). To customize this prompt:\n",
"\n",
"1. Make a [PromptTemplate](https://python.langchain.com/api_reference/core/prompts/langchain_core.prompts.prompt.PromptTemplate.html) with an input variable for the question;\n",
"2. Implement an [output parser](/docs/concepts#output-parsers) like the one below to split the result into a list of queries.\n",
"2. Implement an [output parser](/docs/concepts/output_parsers) like the one below to split the result into a list of queries.\n",
"\n",
"The prompt and output parser together must support the generation of a list of queries."
]
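A sketch of such a parser; the class name `LineListOutputParser` is an assumption, not mandated by the guide:

```python
from langchain_core.output_parsers import BaseOutputParser

class LineListOutputParser(BaseOutputParser[list[str]]):
    """Split an LLM's text reply into a list of newline-separated queries."""

    def parse(self, text: str) -> list[str]:
        lines = [line.strip() for line in text.strip().split("\n")]
        return [line for line in lines if line]  # drop empty lines
```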

View File

@@ -18,7 +18,7 @@
"# Build an Agent with AgentExecutor (Legacy)\n",
"\n",
":::important\n",
"This section will cover building with the legacy LangChain AgentExecutor. These are fine for getting started, but past a certain point, you will likely want flexibility and control that they do not offer. For working with more advanced agents, we'd recommend checking out [LangGraph Agents](/docs/concepts/#langgraph) or the [migration guide](/docs/how_to/migrate_agent/)\n",
"This section will cover building with the legacy LangChain AgentExecutor. These are fine for getting started, but past a certain point, you will likely want flexibility and control that they do not offer. For working with more advanced agents, we'd recommend checking out [LangGraph Agents](/docs/concepts/architecture/#langgraph) or the [migration guide](/docs/how_to/migrate_agent/)\n",
":::\n",
"\n",
"By themselves, language models can't take actions - they just output text.\n",
@@ -802,7 +802,7 @@
"That's a wrap! In this quick start we covered how to create a simple agent. Agents are a complex topic, and there's lot to learn! \n",
"\n",
":::important\n",
"This section covered building with LangChain Agents. LangChain Agents are fine for getting started, but past a certain point you will likely want flexibility and control that they do not offer. For working with more advanced agents, we'd reccommend checking out [LangGraph](/docs/concepts/#langgraph)\n",
"This section covered building with LangChain Agents. LangChain Agents are fine for getting started, but past a certain point you will likely want flexibility and control that they do not offer. For working with more advanced agents, we'd reccommend checking out [LangGraph](/docs/concepts/architecture/#langgraph)\n",
":::\n",
"\n",
"If you want to continue using LangChain agents, some good advanced guides are:\n",

View File

@@ -261,7 +261,7 @@
"id": "6a5d9617-be3a-419a-9276-de9c29fa50ae",
"metadata": {},
"source": [
"You can also enable streaming token usage by setting `stream_usage` when instantiating the chat model. This can be useful when incorporating chat models into LangChain [chains](/docs/concepts#langchain-expression-language-lcel): usage metadata can be monitored when [streaming intermediate steps](/docs/how_to/streaming#using-stream-events) or using tracing software such as [LangSmith](https://docs.smith.langchain.com/).\n",
"You can also enable streaming token usage by setting `stream_usage` when instantiating the chat model. This can be useful when incorporating chat models into LangChain [chains](/docs/concepts/lcel): usage metadata can be monitored when [streaming intermediate steps](/docs/how_to/streaming#using-stream-events) or using tracing software such as [LangSmith](https://docs.smith.langchain.com/).\n",
"\n",
"See the below example, where we return output structured to a desired schema, but can still observe token usage streamed from intermediate steps."
]

View File

@@ -11,8 +11,8 @@
"\n",
"This guide assumes familiarity with the following concepts:\n",
"\n",
"- [Runnables](/docs/concepts#runnable-interface)\n",
"- [Tools](/docs/concepts#tools)\n",
"- [Runnables](/docs/concepts/runnables)\n",
"- [Tools](/docs/concepts/tools)\n",
"- [Agents](/docs/tutorials/agents)\n",
"\n",
":::\n",
@@ -40,7 +40,7 @@
"id": "2b0dcc1a-48e8-4a81-b920-3563192ce076",
"metadata": {},
"source": [
"LangChain [tools](/docs/concepts#tools) are interfaces that an agent, chain, or chat model can use to interact with the world. See [here](/docs/how_to/#tools) for how-to guides covering tool-calling, built-in tools, custom tools, and more information.\n",
"LangChain [tools](/docs/concepts/tools) are interfaces that an agent, chain, or chat model can use to interact with the world. See [here](/docs/how_to/#tools) for how-to guides covering tool-calling, built-in tools, custom tools, and more information.\n",
"\n",
"LangChain tools-- instances of [BaseTool](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.BaseTool.html)-- are [Runnables](/docs/concepts/runnables) with additional constraints that enable them to be invoked effectively by language models:\n",
"\n",

View File

@@ -38,7 +38,7 @@
"The logic inside of `_get_relevant_documents` can involve arbitrary calls to a database or to the web using requests.\n",
"\n",
":::tip\n",
"By inherting from `BaseRetriever`, your retriever automatically becomes a LangChain [Runnable](/docs/concepts#interface) and will gain the standard `Runnable` functionality out of the box!\n",
"By inherting from `BaseRetriever`, your retriever automatically becomes a LangChain [Runnable](/docs/concepts/runnables) and will gain the standard `Runnable` functionality out of the box!\n",
":::\n",
"\n",
"\n",

View File

@@ -19,8 +19,8 @@
"LangChain supports the creation of tools from:\n",
"\n",
"1. Functions;\n",
"2. LangChain [Runnables](/docs/concepts#runnable-interface);\n",
"3. By sub-classing from [BaseTool](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.BaseTool.html) -- This is the most flexible method, it provides the largest degree of control, at the expense of more effort and code.\n",
"2. LangChain [Runnables](/docs/concepts/runnables);\n",
"3. By sub-classing from [BaseTool](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.base.BaseTool.html) -- This is the most flexible method, it provides the largest degree of control, at the expense of more effort and code.\n",
"\n",
"Creating tools from functions may be sufficient for most use cases, and can be done via a simple [@tool decorator](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.tool.html#langchain_core.tools.tool). If more configuration is needed-- e.g., specification of both sync and async implementations-- one can also use the [StructuredTool.from_function](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.structured.StructuredTool.html#langchain_core.tools.structured.StructuredTool.from_function) class method.\n",
"\n",
@@ -415,7 +415,7 @@
"source": [
"## Creating tools from Runnables\n",
"\n",
"LangChain [Runnables](/docs/concepts#runnable-interface) that accept string or `dict` input can be converted to tools using the [as_tool](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.as_tool) method, which allows for the specification of names, descriptions, and additional schema information for arguments.\n",
"LangChain [Runnables](/docs/concepts/runnables) that accept string or `dict` input can be converted to tools using the [as_tool](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.as_tool) method, which allows for the specification of names, descriptions, and additional schema information for arguments.\n",
"\n",
"Example usage:"
]
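A hedged usage example of `as_tool`; the runnable and its argument names are invented for illustration:

```python
from typing import List
from langchain_core.runnables import RunnableLambda

runnable = RunnableLambda(lambda args: args["a"] * max(args["b"]))
calculator = runnable.as_tool(
    name="max_scaler",
    description="Multiply 'a' by the maximum of the list 'b'.",
    arg_types={"a": int, "b": List[int]},  # schema inferred from these types
)
print(calculator.invoke({"a": 3, "b": [1, 2]}))  # 6
```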

View File

@@ -157,7 +157,7 @@
" temp_file_path = temp_file.name\n",
"\n",
"loader = CSVLoader(file_path=temp_file_path)\n",
"loader.load()\n",
"data = loader.load()\n",
"for record in data[:2]:\n",
" print(record)"
]

View File

@@ -48,7 +48,7 @@
"\n",
"## Simple and fast text extraction\n",
"\n",
"If you are looking for a simple string representation of text that is embedded in a PDF, the method below is appropriate. It will return a list of [Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html) objects-- one per page-- containing a single string of the page's text in the Document's `page_content` attribute. It will not parse text in images or scanned PDF pages. Under the hood it uses the [pypydf](https://pypdf.readthedocs.io/en/stable/) Python library.\n",
"If you are looking for a simple string representation of text that is embedded in a PDF, the method below is appropriate. It will return a list of [Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html) objects-- one per page-- containing a single string of the page's text in the Document's `page_content` attribute. It will not parse text in images or scanned PDF pages. Under the hood it uses the [pypdf](https://pypdf.readthedocs.io/en/stable/) Python library.\n",
"\n",
"LangChain [document loaders](/docs/concepts/document_loaders) implement `lazy_load` and its async variant, `alazy_load`, which return iterators of `Document` objects. We will use these below."
]
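A minimal sketch, assuming `pypdf` and `langchain-community` are installed and that a local `example.pdf` (a placeholder path) exists:

```python
from langchain_community.document_loaders import PyPDFLoader

loader = PyPDFLoader("example.pdf")  # placeholder path
pages = [doc for doc in loader.lazy_load()]  # one Document per page
print(pages[0].metadata)
print(pages[0].page_content[:100])
```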

View File

@@ -9,7 +9,7 @@
"\n",
"The quality of extractions can often be improved by providing reference examples to the LLM.\n",
"\n",
"Data extraction attempts to generate structured representations of information found in text and other unstructured or semi-structured formats. [Tool-calling](/docs/concepts#functiontool-calling) LLM features are often used in this context. This guide demonstrates how to build few-shot examples of tool calls to help steer the behavior of extraction and similar applications.\n",
"Data extraction attempts to generate structured representations of information found in text and other unstructured or semi-structured formats. [Tool-calling](/docs/concepts/tool_calling) LLM features are often used in this context. This guide demonstrates how to build few-shot examples of tool calls to help steer the behavior of extraction and similar applications.\n",
"\n",
":::tip\n",
"While this guide focuses how to use examples with a tool calling model, this technique is generally applicable, and will work\n",

View File

@@ -14,7 +14,7 @@
"To extract data without tool-calling features: \n",
"\n",
"1. Instruct the LLM to generate text following an expected format (e.g., JSON with a certain schema);\n",
"2. Use [output parsers](/docs/concepts#output-parsers) to structure the model response into a desired Python object.\n",
"2. Use [output parsers](/docs/concepts/output_parsers) to structure the model response into a desired Python object.\n",
"\n",
"First we select a LLM:\n",
"\n",

View File

@@ -44,6 +44,9 @@
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m24.0\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m24.3.1\u001b[0m\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n",
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
@@ -105,7 +108,7 @@
"os.environ[\"NEO4J_USERNAME\"] = \"neo4j\"\n",
"os.environ[\"NEO4J_PASSWORD\"] = \"password\"\n",
"\n",
"graph = Neo4jGraph()"
"graph = Neo4jGraph(refresh_schema=False)"
]
},
{
@@ -149,8 +152,8 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Nodes:[Node(id='Marie Curie', type='Person'), Node(id='Pierre Curie', type='Person'), Node(id='University Of Paris', type='Organization')]\n",
"Relationships:[Relationship(source=Node(id='Marie Curie', type='Person'), target=Node(id='Pierre Curie', type='Person'), type='MARRIED'), Relationship(source=Node(id='Marie Curie', type='Person'), target=Node(id='University Of Paris', type='Organization'), type='PROFESSOR')]\n"
"Nodes:[Node(id='Marie Curie', type='Person', properties={}), Node(id='Pierre Curie', type='Person', properties={}), Node(id='University Of Paris', type='Organization', properties={})]\n",
"Relationships:[Relationship(source=Node(id='Marie Curie', type='Person', properties={}), target=Node(id='Pierre Curie', type='Person', properties={}), type='MARRIED', properties={}), Relationship(source=Node(id='Marie Curie', type='Person', properties={}), target=Node(id='University Of Paris', type='Organization', properties={}), type='PROFESSOR', properties={})]\n"
]
}
],
@@ -191,8 +194,8 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Nodes:[Node(id='Marie Curie', type='Person'), Node(id='Pierre Curie', type='Person'), Node(id='University Of Paris', type='Organization')]\n",
"Relationships:[Relationship(source=Node(id='Marie Curie', type='Person'), target=Node(id='Pierre Curie', type='Person'), type='SPOUSE'), Relationship(source=Node(id='Marie Curie', type='Person'), target=Node(id='University Of Paris', type='Organization'), type='WORKED_AT')]\n"
"Nodes:[Node(id='Marie Curie', type='Person', properties={}), Node(id='Pierre Curie', type='Person', properties={}), Node(id='University Of Paris', type='Organization', properties={})]\n",
"Relationships:[Relationship(source=Node(id='Marie Curie', type='Person', properties={}), target=Node(id='Pierre Curie', type='Person', properties={}), type='SPOUSE', properties={}), Relationship(source=Node(id='Marie Curie', type='Person', properties={}), target=Node(id='University Of Paris', type='Organization', properties={}), type='WORKED_AT', properties={})]\n"
]
}
],
@@ -209,6 +212,44 @@
"print(f\"Relationships:{graph_documents_filtered[0].relationships}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To define the graph schema more precisely, consider using a three-tuple approach for relationships. In this approach, each tuple consists of three elements: the source node, the relationship type, and the target node."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Nodes:[Node(id='Marie Curie', type='Person', properties={}), Node(id='Pierre Curie', type='Person', properties={}), Node(id='University Of Paris', type='Organization', properties={})]\n",
"Relationships:[Relationship(source=Node(id='Marie Curie', type='Person', properties={}), target=Node(id='Pierre Curie', type='Person', properties={}), type='SPOUSE', properties={}), Relationship(source=Node(id='Marie Curie', type='Person', properties={}), target=Node(id='University Of Paris', type='Organization', properties={}), type='WORKED_AT', properties={})]\n"
]
}
],
"source": [
"allowed_relationships = [\n",
" (\"Person\", \"SPOUSE\", \"Person\"),\n",
" (\"Person\", \"NATIONALITY\", \"Country\"),\n",
" (\"Person\", \"WORKED_AT\", \"Organization\"),\n",
"]\n",
"\n",
"llm_transformer_tuple = LLMGraphTransformer(\n",
" llm=llm,\n",
" allowed_nodes=[\"Person\", \"Country\", \"Organization\"],\n",
" allowed_relationships=allowed_relationships,\n",
")\n",
"llm_transformer_tuple = llm_transformer_filtered.convert_to_graph_documents(documents)\n",
"print(f\"Nodes:{graph_documents_filtered[0].nodes}\")\n",
"print(f\"Relationships:{graph_documents_filtered[0].relationships}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -229,15 +270,15 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Nodes:[Node(id='Marie Curie', type='Person', properties={'born_year': '1867'}), Node(id='Pierre Curie', type='Person'), Node(id='University Of Paris', type='Organization')]\n",
"Relationships:[Relationship(source=Node(id='Marie Curie', type='Person'), target=Node(id='Pierre Curie', type='Person'), type='SPOUSE'), Relationship(source=Node(id='Marie Curie', type='Person'), target=Node(id='University Of Paris', type='Organization'), type='WORKED_AT')]\n"
"Nodes:[Node(id='Marie Curie', type='Person', properties={'born_year': '1867'}), Node(id='Pierre Curie', type='Person', properties={}), Node(id='University Of Paris', type='Organization', properties={}), Node(id='Poland', type='Country', properties={}), Node(id='France', type='Country', properties={})]\n",
"Relationships:[Relationship(source=Node(id='Marie Curie', type='Person', properties={}), target=Node(id='Poland', type='Country', properties={}), type='NATIONALITY', properties={}), Relationship(source=Node(id='Marie Curie', type='Person', properties={}), target=Node(id='France', type='Country', properties={}), type='NATIONALITY', properties={}), Relationship(source=Node(id='Marie Curie', type='Person', properties={}), target=Node(id='Pierre Curie', type='Person', properties={}), type='SPOUSE', properties={}), Relationship(source=Node(id='Marie Curie', type='Person', properties={}), target=Node(id='University Of Paris', type='Organization', properties={}), type='WORKED_AT', properties={})]\n"
]
}
],
@@ -264,12 +305,71 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"graph.add_graph_documents(graph_documents_props)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Most graph databases support indexes to optimize data import and retrieval. Since we might not know all the node labels in advance, we can handle this by adding a secondary base label to each node using the `baseEntityLabel` parameter."
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"graph.add_graph_documents(graph_documents, baseEntityLabel=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Results will look like:\n",
"\n",
"![graph_construction3.png](../../static/img/graph_construction3.png)\n",
"\n",
"The final option is to also import the source documents for the extracted nodes and relationships. This approach lets us track which documents each entity appeared in."
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"graph.add_graph_documents(graph_documents, include_source=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Graph will have the following structure:\n",
"\n",
"![graph_construction4.png](../../static/img/graph_construction4.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this visualization, the source document is highlighted in blue, with all entities extracted from it connected by `MENTIONS` relationships."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@@ -288,7 +388,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
"version": "3.11.5"
}
},
"nbformat": 4,

View File

@@ -74,6 +74,7 @@ These are the core building blocks you can use when building applications.
### Chat models
[Chat Models](/docs/concepts/chat_models) are newer forms of language models that take messages in and output a message.
+See [supported integrations](/docs/integrations/chat/) for details on getting started with chat models from a specific provider.
- [How to: do function/tool calling](/docs/how_to/tool_calling)
- [How to: get models to return structured output](/docs/how_to/structured_output)
@@ -153,6 +154,7 @@ What LangChain calls [LLMs](/docs/concepts/text_llms) are older forms of languag
### Embedding models
[Embedding Models](/docs/concepts/embedding_models) take a piece of text and create a numerical representation of it.
+See [supported integrations](/docs/integrations/text_embedding/) for details on getting started with embedding models from a specific provider.
- [How to: embed text data](/docs/how_to/embed_text)
- [How to: cache embedding results](/docs/how_to/caching_embeddings)
@@ -160,6 +162,7 @@ What LangChain calls [LLMs](/docs/concepts/text_llms) are older forms of languag
### Vector stores
[Vector stores](/docs/concepts/vectorstores) are databases that can efficiently store and retrieve embeddings.
+See [supported integrations](/docs/integrations/vectorstores/) for details on getting started with vector stores from a specific provider.
- [How to: use a vector store to retrieve data](/docs/how_to/vectorstores)

View File

@@ -207,7 +207,7 @@
"id": "cdef8339-f9fa-4b3b-955f-ad9dbdf2734f",
"metadata": {},
"source": [
"The default search type the retriever performs on the vector database is a similarity search. LangChain vector stores also support searching via [Max Marginal Relevance](https://python.langchain.com/api_reference/core/vectorstores/langchain_core.vectorstores.VectorStore.html#langchain_core.vectorstores.VectorStore.max_marginal_relevance_search). This can be controlled via the `search_type` parameter of the retriever:"
"The default search type the retriever performs on the vector database is a similarity search. LangChain vector stores also support searching via [Max Marginal Relevance](https://python.langchain.com/api_reference/core/vectorstores/langchain_core.vectorstores.base.VectorStore.html#langchain_core.vectorstores.base.VectorStore.max_marginal_relevance_search). This can be controlled via the `search_type` parameter of the retriever:"
]
},
{

View File

@@ -96,7 +96,7 @@
"source": [
"## LCEL\n",
"\n",
"Output parsers implement the [Runnable interface](/docs/concepts#interface), the basic building block of the [LangChain Expression Language (LCEL)](/docs/concepts#langchain-expression-language-lcel). This means they support `invoke`, `ainvoke`, `stream`, `astream`, `batch`, `abatch`, `astream_log` calls.\n",
"Output parsers implement the [Runnable interface](/docs/concepts/runnables), the basic building block of the [LangChain Expression Language (LCEL)](/docs/concepts/lcel). This means they support `invoke`, `ainvoke`, `stream`, `astream`, `batch`, `abatch`, `astream_log` calls.\n",
"\n",
"Output parsers accept a string or `BaseMessage` as input and can return an arbitrary type."
]
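A tiny illustration of an output parser behaving as a Runnable, using `StrOutputParser` as a stand-in:

```python
from langchain_core.messages import AIMessage
from langchain_core.output_parsers import StrOutputParser

parser = StrOutputParser()
print(parser.invoke(AIMessage(content="hello")))                        # 'hello'
print(parser.batch([AIMessage(content="a"), AIMessage(content="b")]))  # ['a', 'b']
```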

View File

@@ -41,7 +41,7 @@
"\n",
"### Dependencies\n",
"\n",
"We'll use OpenAI embeddings and an InMemory vector store in this walkthrough, but everything shown here works with any [Embeddings](/docs/concepts#embedding-models), and [VectorStore](/docs/concepts#vectorstores) or [Retriever](/docs/concepts#retrievers). \n",
"We'll use OpenAI embeddings and an InMemory vector store in this walkthrough, but everything shown here works with any [Embeddings](/docs/concepts/embedding_models), and [VectorStore](/docs/concepts/vectorstores) or [Retriever](/docs/concepts/retrievers). \n",
"\n",
"We'll use the following packages:"
]
@@ -155,7 +155,7 @@
"id": "15f8ad59-19de-42e3-85a8-3ba95ee0bd43",
"metadata": {},
"source": [
"For the retriever, we will use [WebBaseLoader](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.web_base.WebBaseLoader.html) to load the content of a web page. Here we instantiate a `InMemoryVectorStore` vectorstore and then use its [.as_retriever](https://python.langchain.com/api_reference/core/vectorstores/langchain_core.vectorstores.VectorStore.html#langchain_core.vectorstores.VectorStore.as_retriever) method to build a retriever that can be incorporated into [LCEL](/docs/concepts/lcel) chains."
"For the retriever, we will use [WebBaseLoader](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.web_base.WebBaseLoader.html) to load the content of a web page. Here we instantiate a `InMemoryVectorStore` vectorstore and then use its [.as_retriever](https://python.langchain.com/api_reference/core/vectorstores/langchain_core.vectorstores.base.VectorStore.html#langchain_core.vectorstores.base.VectorStore.as_retriever) method to build a retriever that can be incorporated into [LCEL](/docs/concepts/lcel) chains."
]
},
{
@@ -686,7 +686,7 @@
"source": [
"### Agent constructor\n",
"\n",
"Now that we have defined the tools and the LLM, we can create the agent. We will be using [LangGraph](/docs/concepts/#langgraph) to construct the agent. \n",
"Now that we have defined the tools and the LLM, we can create the agent. We will be using [LangGraph](/docs/concepts/architecture/#langgraph) to construct the agent. \n",
"Currently we are using a high level interface to construct the agent, but the nice thing about LangGraph is that this high-level interface is backed by a low-level, highly controllable API in case you want to modify the agent logic."
]
},
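A minimal sketch of that high-level interface (assumes `langgraph` is installed and that `llm` and `tools` were defined earlier in the notebook):

```python
from langgraph.prebuilt import create_react_agent

agent_executor = create_react_agent(llm, tools)

result = agent_executor.invoke(
    {"messages": [("human", "What is the weather in San Francisco?")]}
)
print(result["messages"][-1].content)
```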

View File

@@ -254,7 +254,7 @@
"source": [
"## Function-calling\n",
"\n",
"If your LLM of choice implements a [tool-calling](/docs/concepts#functiontool-calling) feature, you can use it to make the model specify which of the provided documents it's referencing when generating its answer. LangChain tool-calling models implement a `.with_structured_output` method which will force generation adhering to a desired schema (see for example [here](https://python.langchain.com/api_reference/openai/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html#langchain_openai.chat_models.base.ChatOpenAI.with_structured_output)).\n",
"If your LLM of choice implements a [tool-calling](/docs/concepts/tool_calling) feature, you can use it to make the model specify which of the provided documents it's referencing when generating its answer. LangChain tool-calling models implement a `.with_structured_output` method which will force generation adhering to a desired schema (see for example [here](https://python.langchain.com/api_reference/openai/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html#langchain_openai.chat_models.base.ChatOpenAI.with_structured_output)).\n",
"\n",
"### Cite documents\n",
"\n",

View File

@@ -14,7 +14,7 @@
"We will cover two approaches:\n",
"\n",
"1. Using the built-in [create_retrieval_chain](https://python.langchain.com/api_reference/langchain/chains/langchain.chains.retrieval.create_retrieval_chain.html), which returns sources by default;\n",
"2. Using a simple [LCEL](/docs/concepts#langchain-expression-language-lcel) implementation, to show the operating principle.\n",
"2. Using a simple [LCEL](/docs/concepts/lcel) implementation, to show the operating principle.\n",
"\n",
"We will also show how to structure sources into the model response, such that a model can report what specific sources it used in generating its answer."
]
@@ -28,7 +28,7 @@
"\n",
"### Dependencies\n",
"\n",
"We'll use OpenAI embeddings and a Chroma vector store in this walkthrough, but everything shown here works with any [Embeddings](/docs/concepts#embedding-models), [VectorStore](/docs/concepts#vectorstores) or [Retriever](/docs/concepts#retrievers). \n",
"We'll use OpenAI embeddings and a Chroma vector store in this walkthrough, but everything shown here works with any [Embeddings](/docs/concepts/embedding_models), [VectorStore](/docs/concepts/vectorstores) or [Retriever](/docs/concepts/retrievers). \n",
"\n",
"We'll use the following packages:"
]

View File

@@ -21,7 +21,7 @@
"\n",
"### Dependencies\n",
"\n",
"We'll use OpenAI embeddings and a Chroma vector store in this walkthrough, but everything shown here works with any [Embeddings](/docs/concepts#embedding-models), [VectorStore](/docs/concepts#vectorstores) or [Retriever](/docs/concepts#retrievers). \n",
"We'll use OpenAI embeddings and a Chroma vector store in this walkthrough, but everything shown here works with any [Embeddings](/docs/concepts/embedding_models), [VectorStore](/docs/concepts/vectorstores) or [Retriever](/docs/concepts/retrievers). \n",
"\n",
"We'll use the following packages:"
]

View File

@@ -27,7 +27,7 @@
"1. How the text is split: by character passed in.\n",
"2. How the chunk size is measured: by `tiktoken` tokenizer.\n",
"\n",
"[CharacterTextSplitter](https://python.langchain.com/api_reference/text_splitters/character/langchain_text_splitters.character.CharacterTextSplitter.html), [RecursiveCharacterTextSplitter](https://python.langchain.com/api_reference/text_splitters/character/langchain_text_splitters.character.RecursiveCharacterTextSplitter.html), and [TokenTextSplitter](https://python.langchain.com/api_reference/langchain_text_splitters/base/langchain_text_splitters.base.TokenTextSplitter.html) can be used with `tiktoken` directly."
"[CharacterTextSplitter](https://python.langchain.com/api_reference/text_splitters/character/langchain_text_splitters.character.CharacterTextSplitter.html), [RecursiveCharacterTextSplitter](https://python.langchain.com/api_reference/text_splitters/character/langchain_text_splitters.character.RecursiveCharacterTextSplitter.html), and [TokenTextSplitter](https://python.langchain.com/api_reference/text_splitters/base/langchain_text_splitters.base.TokenTextSplitter.html) can be used with `tiktoken` directly."
]
},
{
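For example, a sketch of token-measured chunking (assumes `tiktoken` is installed and `some_text` holds the document to split):

```python
from langchain_text_splitters import CharacterTextSplitter

text_splitter = CharacterTextSplitter.from_tiktoken_encoder(
    encoding_name="cl100k_base",  # tokenizer used to measure chunk size
    chunk_size=100,
    chunk_overlap=0,
)
chunks = text_splitter.split_text(some_text)
```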

View File

@@ -32,7 +32,7 @@
"\n",
"Streaming is critical in making applications based on LLMs feel responsive to end-users.\n",
"\n",
"Important LangChain primitives like [chat models](/docs/concepts/chat_models), [output parsers](/docs/concepts/output_parsers), [prompts](/docs/concepts/prompt_templates), [retrievers](/docs/concepts/retrievers), and [agents](/docs/concepts/agents) implement the LangChain [Runnable Interface](/docs/concepts#interface).\n",
"Important LangChain primitives like [chat models](/docs/concepts/chat_models), [output parsers](/docs/concepts/output_parsers), [prompts](/docs/concepts/prompt_templates), [retrievers](/docs/concepts/retrievers), and [agents](/docs/concepts/agents) implement the LangChain [Runnable Interface](/docs/concepts/runnables).\n",
"\n",
"This interface provides two general approaches to stream content:\n",
"\n",

View File

@@ -556,7 +556,7 @@
"id": "498d893b-ceaa-47ff-a9d8-4faa60702715",
"metadata": {},
"source": [
"For more on few shot prompting when using tool calling, see [here](/docs/how_to/function_calling/#Few-shot-prompting)."
"For more on few shot prompting when using tool calling, see [here](/docs/how_to/tools_few_shot/)."
]
},
{

View File

@@ -55,7 +55,7 @@
"source": [
"## Defining tool schemas\n",
"\n",
"For a model to be able to call tools, we need to pass in tool schemas that describe what the tool does and what it's arguments are. Chat models that support tool calling features implement a `.bind_tools()` method for passing tool schemas to the model. Tool schemas can be passed in as Python functions (with typehints and docstrings), Pydantic models, TypedDict classes, or LangChain [Tool objects](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.BaseTool.html#langchain_core.tools.BaseTool). Subsequent invocations of the model will pass in these tool schemas along with the prompt.\n",
"For a model to be able to call tools, we need to pass in tool schemas that describe what the tool does and what it's arguments are. Chat models that support tool calling features implement a `.bind_tools()` method for passing tool schemas to the model. Tool schemas can be passed in as Python functions (with typehints and docstrings), Pydantic models, TypedDict classes, or LangChain [Tool objects](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.base.BaseTool.html#basetool). Subsequent invocations of the model will pass in these tool schemas along with the prompt.\n",
"\n",
"### Python functions\n",
"Our tool schemas can be Python functions:"

View File

@@ -276,7 +276,7 @@
"\n",
"Chains are great when we know the specific sequence of tool usage needed for any user input. But for certain use cases, how many times we use tools depends on the input. In these cases, we want to let the model itself decide how many times to use tools and in what order. [Agents](/docs/tutorials/agents) let us do just this.\n",
"\n",
"LangChain comes with a number of built-in agents that are optimized for different use cases. Read about all the [agent types here](/docs/concepts#agents).\n",
"LangChain comes with a number of built-in agents that are optimized for different use cases. Read about all the [agent types here](/docs/concepts/agents).\n",
"\n",
"We'll use the [tool calling agent](https://python.langchain.com/api_reference/langchain/agents/langchain.agents.tool_calling_agent.base.create_tool_calling_agent.html), which is generally the most reliable kind and the recommended one for most use cases.\n",
"\n",

View File

@@ -28,7 +28,7 @@
"\n",
"## Creating a retriever from a vectorstore\n",
"\n",
"You can build a retriever from a vectorstore using its [.as_retriever](https://python.langchain.com/api_reference/core/vectorstores/langchain_core.vectorstores.VectorStore.html#langchain_core.vectorstores.VectorStore.as_retriever) method. Let's walk through an example.\n",
"You can build a retriever from a vectorstore using its [.as_retriever](https://python.langchain.com/api_reference/core/vectorstores/langchain_core.vectorstores.base.VectorStore.html#langchain_core.vectorstores.base.VectorStore.as_retriever) method. Let's walk through an example.\n",
"\n",
"First we instantiate a vectorstore. We will use an in-memory [FAISS](https://python.langchain.com/api_reference/community/vectorstores/langchain_community.vectorstores.faiss.FAISS.html) vectorstore:"
]

View File

@@ -0,0 +1,264 @@
{
"cells": [
{
"cell_type": "raw",
"id": "30373ae2-f326-4e96-a1f7-062f57396886",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Cloudflare Workers AI\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "f679592d",
"metadata": {},
"source": [
"# ChatCloudflareWorkersAI\n",
"\n",
"This will help you getting started with CloudflareWorkersAI [chat models](/docs/concepts/#chat-models). For detailed documentation of all available Cloudflare WorkersAI models head to the [API reference](https://developers.cloudflare.com/workers-ai/).\n",
"\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/cloudflare_workersai) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| ChatCloudflareWorkersAI | langchain-community| ❌ | ❌ | ✅ | ❌ | ❌ |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | \n",
"\n",
"## Setup\n",
"\n",
"- To access Cloudflare Workers AI models you'll need to create a Cloudflare account, get an account number and API key, and install the `langchain-community` package.\n",
"\n",
"\n",
"### Credentials\n",
"\n",
"\n",
"Head to [this document](https://developers.cloudflare.com/workers-ai/get-started/rest-api/) to sign up to Cloudflare Workers AI and generate an API key."
]
},
{
"cell_type": "markdown",
"id": "4a524cff",
"metadata": {},
"source": [
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "71b53c25",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
]
},
{
"cell_type": "markdown",
"id": "777a8526",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain ChatCloudflareWorkersAI integration lives in the `langchain-community` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "54990998",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-community"
]
},
{
"cell_type": "markdown",
"id": "629ba46f",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ec13c2d9",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_models.cloudflare_workersai import ChatCloudflareWorkersAI\n",
"\n",
"llm = ChatCloudflareWorkersAI(\n",
" account_id=\"my_account_id\",\n",
" api_token=\"my_api_token\",\n",
" model=\"@hf/nousresearch/hermes-2-pro-mistral-7b\",\n",
")"
]
},
{
"cell_type": "markdown",
"id": "119b6732",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "2438a906",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-11-07 15:55:14 - INFO - Sending prompt to Cloudflare Workers AI: {'prompt': 'role: system, content: You are a helpful assistant that translates English to French. Translate the user sentence.\\nrole: user, content: I love programming.', 'tools': None}\n"
]
},
{
"data": {
"text/plain": [
"AIMessage(content='{\\'result\\': {\\'response\\': \\'Je suis un assistant virtuel qui peut traduire l\\\\\\'anglais vers le français. La phrase que vous avez dite est : \"J\\\\\\'aime programmer.\" En français, cela se traduit par : \"J\\\\\\'adore programmer.\"\\'}, \\'success\\': True, \\'errors\\': [], \\'messages\\': []}', additional_kwargs={}, response_metadata={}, id='run-838fd398-8594-4ca5-9055-03c72993caf6-0')"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "1b4911bd",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'result': {'response': 'Je suis un assistant virtuel qui peut traduire l\\'anglais vers le français. La phrase que vous avez dite est : \"J\\'aime programmer.\" En français, cela se traduit par : \"J\\'adore programmer.\"'}, 'success': True, 'errors': [], 'messages': []}\n"
]
}
],
"source": [
"print(ai_msg.content)"
]
},
{
"cell_type": "markdown",
"id": "111aa5d4",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "b2a14282",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-11-07 15:55:24 - INFO - Sending prompt to Cloudflare Workers AI: {'prompt': 'role: system, content: You are a helpful assistant that translates English to German.\\nrole: user, content: I love programming.', 'tools': None}\n"
]
},
{
"data": {
"text/plain": [
"AIMessage(content=\"{'result': {'response': 'role: system, content: Das ist sehr nett zu hören! Programmieren lieben, ist eine interessante und anspruchsvolle Hobby- oder Berufsausrichtung. Wenn Sie englische Texte ins Deutsche übersetzen möchten, kann ich Ihnen helfen. Geben Sie bitte den englischen Satz oder die Übersetzung an, die Sie benötigen.'}, 'success': True, 'errors': [], 'messages': []}\", additional_kwargs={}, response_metadata={}, id='run-0d3be9a6-3d74-4dde-b49a-4479d6af00ef-0')"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "e1f311bd",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation on `ChatCloudflareWorkersAI` features and configuration options, please refer to the [API reference](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.cloudflare_workersai.html)."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -201,7 +201,7 @@
"source": [
"## Chaining\n",
"\n",
"You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/concepts#langchain-expression-language-lcel)"
"You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/concepts/lcel)"
]
},
{
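A minimal sketch of that composition (assumes the chat model `llm` instantiated above; the joke prompt is illustrative):

```python
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template("Tell me a short joke about {topic}")
chain = prompt | llm  # LCEL: pipe the prompt's output into the model

print(chain.invoke({"topic": "bears"}).content)
```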

View File

@@ -17,7 +17,7 @@
"source": [
"# ChatClovaX\n",
"\n",
"This notebook provides a quick overview for getting started with Navers HyperCLOVA X [chat models](https://python.langchain.com/docs/concepts/#chat-models) via CLOVA Studio. For detailed documentation of all ChatClovaX features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.naver.ChatClovaX.html).\n",
"This notebook provides a quick overview for getting started with Navers HyperCLOVA X [chat models](https://python.langchain.com/docs/concepts/chat_models) via CLOVA Studio. For detailed documentation of all ChatClovaX features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.naver.ChatClovaX.html).\n",
"\n",
"[CLOVA Studio](http://clovastudio.ncloud.com/) has several chat models. You can find information about latest models and their costs, context windows, and supported input types in the CLOVA Studio API Guide [documentation](https://api.ncloud-docs.com/docs/clovastudio-chatcompletions).\n",
"\n",

View File

@@ -509,6 +509,101 @@
"output_message.content"
]
},
{
"cell_type": "markdown",
"id": "5c35d0a4-a6b8-4d35-a02b-a37a8bda5692",
"metadata": {},
"source": [
"## Predicted output\n",
"\n",
":::info\n",
"Requires `langchain-openai>=0.2.6`\n",
":::\n",
"\n",
"Some OpenAI models (such as their `gpt-4o` and `gpt-4o-mini` series) support [Predicted Outputs](https://platform.openai.com/docs/guides/latency-optimization#use-predicted-outputs), which allow you to pass in a known portion of the LLM's expected output ahead of time to reduce latency. This is useful for cases such as editing text or code, where only a small part of the model's output will change.\n",
"\n",
"Here's an example:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "88fee1e9-58c1-42ad-ae23-24b882e175e7",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"/// <summary>\n",
"/// Represents a user with a first name, last name, and email.\n",
"/// </summary>\n",
"public class User\n",
"{\n",
" /// <summary>\n",
" /// Gets or sets the user's first name.\n",
" /// </summary>\n",
" public string FirstName { get; set; }\n",
"\n",
" /// <summary>\n",
" /// Gets or sets the user's last name.\n",
" /// </summary>\n",
" public string LastName { get; set; }\n",
"\n",
" /// <summary>\n",
" /// Gets or sets the user's email.\n",
" /// </summary>\n",
" public string Email { get; set; }\n",
"}\n",
"{'token_usage': {'completion_tokens': 226, 'prompt_tokens': 166, 'total_tokens': 392, 'completion_tokens_details': {'accepted_prediction_tokens': 49, 'audio_tokens': None, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 107}, 'prompt_tokens_details': {'audio_tokens': None, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-2024-08-06', 'system_fingerprint': 'fp_45cf54deae', 'finish_reason': 'stop', 'logprobs': None}\n"
]
}
],
"source": [
"code = \"\"\"\n",
"/// <summary>\n",
"/// Represents a user with a first name, last name, and username.\n",
"/// </summary>\n",
"public class User\n",
"{\n",
" /// <summary>\n",
" /// Gets or sets the user's first name.\n",
" /// </summary>\n",
" public string FirstName { get; set; }\n",
"\n",
" /// <summary>\n",
" /// Gets or sets the user's last name.\n",
" /// </summary>\n",
" public string LastName { get; set; }\n",
"\n",
" /// <summary>\n",
" /// Gets or sets the user's username.\n",
" /// </summary>\n",
" public string Username { get; set; }\n",
"}\n",
"\"\"\"\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-4o\")\n",
"query = (\n",
" \"Replace the Username property with an Email property. \"\n",
" \"Respond only with code, and with no markdown formatting.\"\n",
")\n",
"response = llm.invoke(\n",
" [{\"role\": \"user\", \"content\": query}, {\"role\": \"user\", \"content\": code}],\n",
" prediction={\"type\": \"content\", \"content\": code},\n",
")\n",
"print(response.content)\n",
"print(response.response_metadata)"
]
},
{
"cell_type": "markdown",
"id": "2ee1b26d-a388-4e7c-9f40-bfd1388ecc03",
"metadata": {},
"source": [
"Note that currently predictions are billed as additional tokens and may increase your usage and costs in exchange for this reduced latency."
]
},
{
"cell_type": "markdown",
"id": "feb4a499",
@@ -601,7 +696,7 @@
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@@ -615,7 +710,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@@ -17,7 +17,7 @@
"source": [
"# ChatWriter\n",
"\n",
"This notebook provides a quick overview for getting started with Writer [chat models](/docs/concepts/#chat-models).\n",
"This notebook provides a quick overview for getting started with Writer [chat models](/docs/concepts/chat_models).\n",
"\n",
"Writer has several chat models. You can find information about their latest models and their costs, context windows, and supported input types in the [Writer docs](https://dev.writer.com/home/models).\n",
"\n",

View File

@@ -8,7 +8,7 @@
"\n",
">[Microsoft OneDrive](https://en.wikipedia.org/wiki/OneDrive) (formerly `SkyDrive`) is a file hosting service operated by Microsoft.\n",
"\n",
"This notebook covers how to load documents from `OneDrive`. Currently, only docx, doc, and pdf files are supported.\n",
"This notebook covers how to load documents from `OneDrive`. By default the document loader loads `pdf`, `doc`, `docx` and `txt` files. You can load other file types by providing appropriate parsers (see more below).\n",
"\n",
"## Prerequisites\n",
"1. Register an application with the [Microsoft identity platform](https://learn.microsoft.com/en-us/azure/active-directory/develop/quickstart-register-app) instructions.\n",
@@ -77,15 +77,64 @@
"\n",
"loader = OneDriveLoader(drive_id=\"YOUR DRIVE ID\", object_ids=[\"ID_1\", \"ID_2\"], auth_with_token=True)\n",
"documents = loader.load()\n",
"```\n"
"```\n",
"\n",
"#### 📑 Choosing supported file types and preffered parsers\n",
"By default `OneDriveLoader` loads file types defined in [`document_loaders/parsers/registry`](https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/document_loaders/parsers/registry.py#L10-L22) using the default parsers (see below).\n",
"```python\n",
"def _get_default_parser() -> BaseBlobParser:\n",
" \"\"\"Get default mime-type based parser.\"\"\"\n",
" return MimeTypeBasedParser(\n",
" handlers={\n",
" \"application/pdf\": PyMuPDFParser(),\n",
" \"text/plain\": TextParser(),\n",
" \"application/msword\": MsWordParser(),\n",
" \"application/vnd.openxmlformats-officedocument.wordprocessingml.document\": (\n",
" MsWordParser()\n",
" ),\n",
" },\n",
" fallback_parser=None,\n",
" )\n",
"```\n",
"You can override this behavior by passing `handlers` argument to `OneDriveLoader`. \n",
"Pass a dictionary mapping either file extensions (like `\"doc\"`, `\"pdf\"`, etc.) \n",
"or MIME types (like `\"application/pdf\"`, `\"text/plain\"`, etc.) to parsers. \n",
"Note that you must use either file extensions or MIME types exclusively and \n",
"cannot mix them.\n",
"\n",
"Do not include the leading dot for file extensions.\n",
"\n",
"```python\n",
"# using file extensions:\n",
"handlers = {\n",
" \"doc\": MsWordParser(),\n",
" \"pdf\": PDFMinerParser(),\n",
" \"mp3\": OpenAIWhisperParser()\n",
"}\n",
"\n",
"# using MIME types:\n",
"handlers = {\n",
" \"application/msword\": MsWordParser(),\n",
" \"application/pdf\": PDFMinerParser(),\n",
" \"audio/mpeg\": OpenAIWhisperParser()\n",
"}\n",
"\n",
"loader = OneDriveLoader(document_library_id=\"...\",\n",
" handlers=handlers # pass handlers to OneDriveLoader\n",
" )\n",
"```\n",
"In case multiple file extensions map to the same MIME type, the last dictionary item will\n",
"apply.\n",
"Example:\n",
"```python\n",
"# 'jpg' and 'jpeg' both map to 'image/jpeg' MIME type. SecondParser() will be used \n",
"# to parse all jpg/jpeg files.\n",
"handlers = {\n",
" \"jpg\": FirstParser(),\n",
" \"jpeg\": SecondParser()\n",
"}\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {

View File

@@ -9,7 +9,7 @@
"\n",
"> [Microsoft SharePoint](https://en.wikipedia.org/wiki/SharePoint) is a website-based collaboration system that uses workflow applications, “list” databases, and other web parts and security features to empower business teams to work together developed by Microsoft.\n",
"\n",
"This notebook covers how to load documents from the [SharePoint Document Library](https://support.microsoft.com/en-us/office/what-is-a-document-library-3b5976dd-65cf-4c9e-bf5a-713c10ca2872). Currently, only docx, doc, and pdf files are supported.\n",
"This notebook covers how to load documents from the [SharePoint Document Library](https://support.microsoft.com/en-us/office/what-is-a-document-library-3b5976dd-65cf-4c9e-bf5a-713c10ca2872). By default the document loader loads `pdf`, `doc`, `docx` and `txt` files. You can load other file types by providing appropriate parsers (see more below).\n",
"\n",
"## Prerequisites\n",
"1. Register an application with the [Microsoft identity platform](https://learn.microsoft.com/en-us/azure/active-directory/develop/quickstart-register-app) instructions.\n",
@@ -100,7 +100,63 @@
"\n",
"loader = SharePointLoader(document_library_id=\"YOUR DOCUMENT LIBRARY ID\", object_ids=[\"ID_1\", \"ID_2\"], auth_with_token=True)\n",
"documents = loader.load()\n",
"```\n"
"```\n",
"\n",
"#### 📑 Choosing supported file types and preffered parsers\n",
"By default `SharePointLoader` loads file types defined in [`document_loaders/parsers/registry`](https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/document_loaders/parsers/registry.py#L10-L22) using the default parsers (see below).\n",
"```python\n",
"def _get_default_parser() -> BaseBlobParser:\n",
" \"\"\"Get default mime-type based parser.\"\"\"\n",
" return MimeTypeBasedParser(\n",
" handlers={\n",
" \"application/pdf\": PyMuPDFParser(),\n",
" \"text/plain\": TextParser(),\n",
" \"application/msword\": MsWordParser(),\n",
" \"application/vnd.openxmlformats-officedocument.wordprocessingml.document\": (\n",
" MsWordParser()\n",
" ),\n",
" },\n",
" fallback_parser=None,\n",
" )\n",
"```\n",
"You can override this behavior by passing `handlers` argument to `SharePointLoader`. \n",
"Pass a dictionary mapping either file extensions (like `\"doc\"`, `\"pdf\"`, etc.) \n",
"or MIME types (like `\"application/pdf\"`, `\"text/plain\"`, etc.) to parsers. \n",
"Note that you must use either file extensions or MIME types exclusively and \n",
"cannot mix them.\n",
"\n",
"Do not include the leading dot for file extensions.\n",
"\n",
"```python\n",
"# using file extensions:\n",
"handlers = {\n",
" \"doc\": MsWordParser(),\n",
" \"pdf\": PDFMinerParser(),\n",
" \"mp3\": OpenAIWhisperParser()\n",
"}\n",
"\n",
"# using MIME types:\n",
"handlers = {\n",
" \"application/msword\": MsWordParser(),\n",
" \"application/pdf\": PDFMinerParser(),\n",
" \"audio/mpeg\": OpenAIWhisperParser()\n",
"}\n",
"\n",
"loader = SharePointLoader(document_library_id=\"...\",\n",
" handlers=handlers # pass handlers to SharePointLoader\n",
" )\n",
"```\n",
"In case multiple file extensions map to the same MIME type, the last dictionary item will\n",
"apply.\n",
"Example:\n",
"```python\n",
"# 'jpg' and 'jpeg' both map to 'image/jpeg' MIME type. SecondParser() will be used \n",
"# to parse all jpg/jpeg files.\n",
"handlers = {\n",
" \"jpg\": FirstParser(),\n",
" \"jpeg\": SecondParser()\n",
"}\n",
"```"
]
}
],

View File

@@ -113,8 +113,8 @@
"\n",
"LCEL is a declarative way to compose chains. LCEL was designed from day 1 to support putting prototypes in production, with no code changes, from the simplest “prompt + LLM” chain to the most complex chains.\n",
"\n",
"- **[Overview](/docs/concepts#langchain-expression-language-lcel)**: LCEL and its benefits\n",
"- **[Interface](/docs/concepts#interface)**: The standard interface for LCEL objects\n",
"- **[Overview](/docs/concepts/lcel)**: LCEL and its benefits\n",
"- **[Interface](/docs/concepts/runnables)**: The standard interface for LCEL objects\n",
"- **[How-to](/docs/expression_language/how_to)**: Key features of LCEL\n",
"- **[Cookbook](/docs/expression_language/cookbook)**: Example code for accomplishing common tasks\n",
"\n",

View File

@@ -0,0 +1,277 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# ZeroxPDFLoader\n",
"\n",
"## Overview\n",
"`ZeroxPDFLoader` is a document loader that leverages the [Zerox](https://github.com/getomni-ai/zerox) library. Zerox converts PDF documents into images, processes them using a vision-capable language model, and generates a structured Markdown representation. This loader allows for asynchronous operations and provides page-level document extraction.\n",
"\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | JS support|\n",
"| :--- | :--- | :---: | :---: | :---: |\n",
"| [ZeroxPDFLoader](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.pdf.ZeroxPDFLoader.html) | [langchain_community](https://python.langchain.com/api_reference/community/index.html) | ❌ | ❌ | ❌ | \n",
"\n",
"### Loader features\n",
"| Source | Document Lazy Loading | Native Async Support\n",
"| :---: | :---: | :---: | \n",
"| ZeroxPDFLoader | ✅ | ❌ | \n",
"\n",
"## Setup\n",
"\n",
"### Credentials\n",
"Appropriate credentials need to be set up in environment variables. The loader supports number of different models and model providers. See _Usage_ header below to see few examples or [Zerox documentation](https://github.com/getomni-ai/zerox) for a full list of supported models.\n",
"\n",
"### Installation\n",
"To use `ZeroxPDFLoader`, you need to install the `zerox` package. Also make sure to have `langchain-community` installed.\n",
"\n",
"```bash\n",
"pip install zerox langchain-community\n",
"```\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialization\n",
"\n",
"`ZeroxPDFLoader` enables PDF text extraction using vision-capable language models by converting each page into an image and processing it asynchronously. To use this loader, you need to specify a model and configure any necessary environment variables for Zerox, such as API keys.\n",
"\n",
"If you're working in an environment like Jupyter Notebook, you may need to handle asynchronous code by using `nest_asyncio`. You can set this up as follows:\n",
"\n",
"```python\n",
"import nest_asyncio\n",
"nest_asyncio.apply()\n",
"```\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"# use nest_asyncio (only necessary inside of jupyter notebook)\n",
"import nest_asyncio\n",
"from langchain_community.document_loaders.pdf import ZeroxPDFLoader\n",
"\n",
"nest_asyncio.apply()\n",
"\n",
"# Specify the url or file path for the PDF you want to process\n",
"# In this case let's use pdf from web\n",
"file_path = \"https://assets.ctfassets.net/f1df9zr7wr1a/soP1fjvG1Wu66HJhu3FBS/034d6ca48edb119ae77dec5ce01a8612/OpenAI_Sacra_Teardown.pdf\"\n",
"\n",
"# Set up necessary env vars for a vision model\n",
"os.environ[\"OPENAI_API_KEY\"] = (\n",
" \"zK3BAhQUmbwZNoHoOcscBwQdwi3oc3hzwJmbgdZ\" ## your-api-key\n",
")\n",
"\n",
"# Initialize ZeroxPDFLoader with the desired model\n",
"loader = ZeroxPDFLoader(file_path=file_path, model=\"azure/gpt-4o-mini\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Document(metadata={'source': 'https://assets.ctfassets.net/f1df9zr7wr1a/soP1fjvG1Wu66HJhu3FBS/034d6ca48edb119ae77dec5ce01a8612/OpenAI_Sacra_Teardown.pdf', 'page': 1, 'num_pages': 5}, page_content='# OpenAI\\n\\nOpenAI is an AI research laboratory.\\n\\n#ai-models #ai\\n\\n## Revenue\\n- **$1,000,000,000** \\n 2023\\n\\n## Valuation\\n- **$28,000,000,000** \\n 2023\\n\\n## Growth Rate (Y/Y)\\n- **400%** \\n 2023\\n\\n## Funding\\n- **$11,300,000,000** \\n 2023\\n\\n---\\n\\n## Details\\n- **Headquarters:** San Francisco, CA\\n- **CEO:** Sam Altman\\n\\n[Visit Website](#)\\n\\n---\\n\\n## Revenue\\n### ARR ($M) | Growth\\n--- | ---\\n$1000M | 456%\\n$750M | \\n$500M | \\n$250M | $36M\\n$0 | $200M\\n\\nis on track to hit $1B in annual recurring revenue by the end of 2023, up about 400% from an estimated $200M at the end of 2022.\\n\\nOpenAI overall lost about $540M last year while developing ChatGPT, and those losses are expected to increase dramatically in 2023 with the growth in popularity of their consumer tools, with CEO Sam Altman remarking that OpenAI is likely to be \"the most capital-intensive startup in Silicon Valley history.\"\\n\\nThe reason for that is operating ChatGPT is massively expensive. One analysis of ChatGPT put the running cost at about $700,000 per day taking into account the underlying costs of GPU hours and hardware. That amount—derived from the 175 billion parameter-large architecture of GPT-3—would be even higher with the 100 trillion parameters of GPT-4.\\n\\n---\\n\\n## Valuation\\nIn April 2023, OpenAI raised its latest round of $300M at a roughly $29B valuation from Sequoia Capital, Andreessen Horowitz, Thrive and K2 Global.\\n\\nAssuming OpenAI was at roughly $300M in ARR at the time, that would have given them a 96x forward revenue multiple.\\n\\n---\\n\\n## Product\\n\\n### ChatGPT\\n| Examples | Capabilities | Limitations |\\n|---------------------------------|-------------------------------------|------------------------------------|\\n| \"Explain quantum computing in simple terms\" | \"Remember what users said earlier in the conversation\" | May occasionally generate incorrect information |\\n| \"What can you give me for my dad\\'s birthday?\" | \"Allows users to follow-up questions\" | Limited knowledge of world events after 2021 |\\n| \"How do I make an HTTP request in JavaScript?\" | \"Trained to provide harmless requests\" | |')"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Load the document and look at the first page:\n",
"documents = loader.load()\n",
"documents[0]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"# OpenAI\n",
"\n",
"OpenAI is an AI research laboratory.\n",
"\n",
"#ai-models #ai\n",
"\n",
"## Revenue\n",
"- **$1,000,000,000** \n",
" 2023\n",
"\n",
"## Valuation\n",
"- **$28,000,000,000** \n",
" 2023\n",
"\n",
"## Growth Rate (Y/Y)\n",
"- **400%** \n",
" 2023\n",
"\n",
"## Funding\n",
"- **$11,300,000,000** \n",
" 2023\n",
"\n",
"---\n",
"\n",
"## Details\n",
"- **Headquarters:** San Francisco, CA\n",
"- **CEO:** Sam Altman\n",
"\n",
"[Visit Website](#)\n",
"\n",
"---\n",
"\n",
"## Revenue\n",
"### ARR ($M) | Growth\n",
"--- | ---\n",
"$1000M | 456%\n",
"$750M | \n",
"$500M | \n",
"$250M | $36M\n",
"$0 | $200M\n",
"\n",
"is on track to hit $1B in annual recurring revenue by the end of 2023, up about 400% from an estimated $200M at the end of 2022.\n",
"\n",
"OpenAI overall lost about $540M last year while developing ChatGPT, and those losses are expected to increase dramatically in 2023 with the growth in popularity of their consumer tools, with CEO Sam Altman remarking that OpenAI is likely to be \"the most capital-intensive startup in Silicon Valley history.\"\n",
"\n",
"The reason for that is operating ChatGPT is massively expensive. One analysis of ChatGPT put the running cost at about $700,000 per day taking into account the underlying costs of GPU hours and hardware. That amount—derived from the 175 billion parameter-large architecture of GPT-3—would be even higher with the 100 trillion parameters of GPT-4.\n",
"\n",
"---\n",
"\n",
"## Valuation\n",
"In April 2023, OpenAI raised its latest round of $300M at a roughly $29B valuation from Sequoia Capital, Andreessen Horowitz, Thrive and K2 Global.\n",
"\n",
"Assuming OpenAI was at roughly $300M in ARR at the time, that would have given them a 96x forward revenue multiple.\n",
"\n",
"---\n",
"\n",
"## Product\n",
"\n",
"### ChatGPT\n",
"| Examples | Capabilities | Limitations |\n",
"|---------------------------------|-------------------------------------|------------------------------------|\n",
"| \"Explain quantum computing in simple terms\" | \"Remember what users said earlier in the conversation\" | May occasionally generate incorrect information |\n",
"| \"What can you give me for my dad's birthday?\" | \"Allows users to follow-up questions\" | Limited knowledge of world events after 2021 |\n",
"| \"How do I make an HTTP request in JavaScript?\" | \"Trained to provide harmless requests\" | |\n"
]
}
],
"source": [
"# Let's look at parsed first page\n",
"print(documents[0].page_content)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Lazy Load\n",
"The loader always fetches results lazily. `.load()` method is equivalent to `.lazy_load()` "
]
},
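A sketch of page-by-page processing via `lazy_load` (uses the `loader` from above; the batch size is arbitrary):

```python
pages = []
for doc in loader.lazy_load():
    pages.append(doc)
    if len(pages) >= 10:
        # index or persist this batch here, then free the memory
        pages = []
```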
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"### `ZeroxPDFLoader`\n",
"\n",
"This loader class initializes with a file path and model type, and supports custom configurations via `zerox_kwargs` for handling Zerox-specific parameters.\n",
"\n",
"**Arguments**:\n",
"- `file_path` (Union[str, Path]): Path to the PDF file.\n",
"- `model` (str): Vision-capable model to use for processing in format `<provider>/<model>`.\n",
"Some examples of valid values are: \n",
" - `model = \"gpt-4o-mini\" ## openai model`\n",
" - `model = \"azure/gpt-4o-mini\"`\n",
" - `model = \"gemini/gpt-4o-mini\"`\n",
" - `model=\"claude-3-opus-20240229\"`\n",
" - `model = \"vertex_ai/gemini-1.5-flash-001\"`\n",
" - See more details in [Zerox documentation](https://github.com/getomni-ai/zerox)\n",
" - Defaults to `\"gpt-4o-mini\".`\n",
"- `**zerox_kwargs` (dict): Additional Zerox-specific parameters such as API key, endpoint, etc.\n",
" - See [Zerox documentation](https://github.com/getomni-ai/zerox)\n",
"\n",
"**Methods**:\n",
"- `lazy_load`: Generates an iterator of `Document` instances, each representing a page of the PDF, along with metadata including page number and source.\n",
"\n",
"See full API documentaton [here](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.pdf.ZeroxPDFLoader.html)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Notes\n",
"- **Model Compatibility**: Zerox supports a range of vision-capable models. Refer to [Zerox's GitHub documentation](https://github.com/getomni-ai/zerox) for a list of supported models and configuration details.\n",
"- **Environment Variables**: Make sure to set required environment variables, such as `API_KEY` or endpoint details, as specified in the Zerox documentation.\n",
"- **Asynchronous Processing**: If you encounter errors related to event loops in Jupyter Notebooks, you may need to apply `nest_asyncio` as shown in the setup section.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Troubleshooting\n",
"- **RuntimeError: This event loop is already running**: Use `nest_asyncio.apply()` to prevent asynchronous loop conflicts in environments like Jupyter.\n",
"- **Configuration Errors**: Verify that the `zerox_kwargs` match the expected arguments for your chosen model and that all necessary environment variables are set.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Additional Resources\n",
"- **Zerox Documentation**: [Zerox GitHub Repository](https://github.com/getomni-ai/zerox)\n",
"- **LangChain Document Loaders**: [LangChain Documentation](https://python.langchain.com/docs/integrations/document_loaders/)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "sharepoint_chatbot",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -0,0 +1,405 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Infinity Reranker\n",
"\n",
"`Infinity` is a high-throughput, low-latency REST API for serving text-embeddings, reranking models and clip. \n",
"For more info, please visit [here](https://github.com/michaelfeil/infinity?tab=readme-ov-file#reranking).\n",
"\n",
"This notebook shows how to use Infinity Reranker for document compression and retrieval. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can launch an Infinity Server with a reranker model in CLI:\n",
"\n",
"```bash\n",
"pip install \"infinity-emb[all]\"\n",
"infinity_emb v2 --model-id mixedbread-ai/mxbai-rerank-xsmall-v1\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet infinity_client"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet faiss\n",
"\n",
"# OR (depending on Python version)\n",
"\n",
"%pip install --upgrade --quiet faiss-cpu"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"# Helper function for printing docs\n",
"def pretty_print_docs(docs):\n",
" print(\n",
" f\"\\n{'-' * 100}\\n\".join(\n",
" [f\"Document {i+1}:\\n\\n\" + d.page_content for i, d in enumerate(docs)]\n",
" )\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Set up the base vector store retriever\n",
"Let's start by initializing a simple vector store retriever and storing the 2023 State of the Union speech (in chunks). We can set up the retriever to retrieve a high number (20) of docs."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Document 1:\n",
"\n",
"One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n",
"\n",
"And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nations top legal minds, who will continue Justice Breyers legacy of excellence.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 2:\n",
"\n",
"We cannot let this happen. \n",
"\n",
"Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while youre at it, pass the Disclose Act so Americans can know who is funding our elections. \n",
"\n",
"Tonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 3:\n",
"\n",
"As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \n",
"\n",
"While it often appears that we never agree, that isnt true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 4:\n",
"\n",
"He will never extinguish their love of freedom. He will never weaken the resolve of the free world. \n",
"\n",
"We meet tonight in an America that has lived through two of the hardest years this nation has ever faced. \n",
"\n",
"The pandemic has been punishing. \n",
"\n",
"And so many families are living paycheck to paycheck, struggling to keep up with the rising cost of food, gas, housing, and so much more. \n",
"\n",
"I understand.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 5:\n",
"\n",
"As Ohio Senator Sherrod Brown says, “Its time to bury the label “Rust Belt.” \n",
"\n",
"Its time. \n",
"\n",
"But with all the bright spots in our economy, record job growth and higher wages, too many families are struggling to keep up with the bills. \n",
"\n",
"Inflation is robbing them of the gains they might otherwise feel. \n",
"\n",
"I get it. Thats why my top priority is getting prices under control.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 6:\n",
"\n",
"A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since shes been nominated, shes received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n",
"\n",
"And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 7:\n",
"\n",
"Its not only the right thing to do—its the economically smart thing to do. \n",
"\n",
"Thats why immigration reform is supported by everyone from labor unions to religious leaders to the U.S. Chamber of Commerce. \n",
"\n",
"Lets get it done once and for all. \n",
"\n",
"Advancing liberty and justice also requires protecting the rights of women. \n",
"\n",
"The constitutional right affirmed in Roe v. Wade—standing precedent for half a century—is under attack as never before.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 8:\n",
"\n",
"I understand. \n",
"\n",
"I remember when my Dad had to leave our home in Scranton, Pennsylvania to find work. I grew up in a family where if the price of food went up, you felt it. \n",
"\n",
"Thats why one of the first things I did as President was fight to pass the American Rescue Plan. \n",
"\n",
"Because people were hurting. We needed to act, and we did. \n",
"\n",
"Few pieces of legislation have done more in a critical moment in our history to lift us out of crisis.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 9:\n",
"\n",
"Third we can end the shutdown of schools and businesses. We have the tools we need. \n",
"\n",
"Its time for Americans to get back to work and fill our great downtowns again. People working from home can feel safe to begin to return to the office. \n",
"\n",
"Were doing that here in the federal government. The vast majority of federal workers will once again work in person. \n",
"\n",
"Our schools are open. Lets keep it that way. Our kids need to be in school.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 10:\n",
"\n",
"He met the Ukrainian people. \n",
"\n",
"From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. \n",
"\n",
"Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland. \n",
"\n",
"In this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 11:\n",
"\n",
"The widow of Sergeant First Class Heath Robinson. \n",
"\n",
"He was born a soldier. Army National Guard. Combat medic in Kosovo and Iraq. \n",
"\n",
"Stationed near Baghdad, just yards from burn pits the size of football fields. \n",
"\n",
"Heaths widow Danielle is here with us tonight. They loved going to Ohio State football games. He loved building Legos with their daughter. \n",
"\n",
"But cancer from prolonged exposure to burn pits ravaged Heaths lungs and body. \n",
"\n",
"Danielle says Heath was a fighter to the very end.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 12:\n",
"\n",
"Danielle says Heath was a fighter to the very end. \n",
"\n",
"He didnt know how to stop fighting, and neither did she. \n",
"\n",
"Through her pain she found purpose to demand we do better. \n",
"\n",
"Tonight, Danielle—we are. \n",
"\n",
"The VA is pioneering new ways of linking toxic exposures to diseases, already helping more veterans get benefits. \n",
"\n",
"And tonight, Im announcing were expanding eligibility to veterans suffering from nine respiratory cancers.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 13:\n",
"\n",
"We can do all this while keeping lit the torch of liberty that has led generations of immigrants to this land—my forefathers and so many of yours. \n",
"\n",
"Provide a pathway to citizenship for Dreamers, those on temporary status, farm workers, and essential workers. \n",
"\n",
"Revise our laws so businesses have the workers they need and families dont wait decades to reunite. \n",
"\n",
"Its not only the right thing to do—its the economically smart thing to do.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 14:\n",
"\n",
"He rejected repeated efforts at diplomacy. \n",
"\n",
"He thought the West and NATO wouldnt respond. And he thought he could divide us at home. Putin was wrong. We were ready. Here is what we did. \n",
"\n",
"We prepared extensively and carefully. \n",
"\n",
"We spent months building a coalition of other freedom-loving nations from Europe and the Americas to Asia and Africa to confront Putin.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 15:\n",
"\n",
"As Ive told Xi Jinping, it is never a good bet to bet against the American people. \n",
"\n",
"Well create good jobs for millions of Americans, modernizing roads, airports, ports, and waterways all across America. \n",
"\n",
"And well do it all to withstand the devastating effects of the climate crisis and promote environmental justice.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 16:\n",
"\n",
"Tonight I say to the Russian oligarchs and corrupt leaders who have bilked billions of dollars off this violent regime no more. \n",
"\n",
"The U.S. Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs. \n",
"\n",
"We are joining with our European allies to find and seize your yachts your luxury apartments your private jets. We are coming for your ill-begotten gains.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 17:\n",
"\n",
"Look at cars. \n",
"\n",
"Last year, there werent enough semiconductors to make all the cars that people wanted to buy. \n",
"\n",
"And guess what, prices of automobiles went up. \n",
"\n",
"So—we have a choice. \n",
"\n",
"One way to fight inflation is to drive down wages and make Americans poorer. \n",
"\n",
"I have a better plan to fight inflation. \n",
"\n",
"Lower your costs, not your wages. \n",
"\n",
"Make more cars and semiconductors in America. \n",
"\n",
"More infrastructure and innovation in America. \n",
"\n",
"More goods moving faster and cheaper in America.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 18:\n",
"\n",
"So thats my plan. It will grow the economy and lower costs for families. \n",
"\n",
"So what are we waiting for? Lets get this done. And while youre at it, confirm my nominees to the Federal Reserve, which plays a critical role in fighting inflation. \n",
"\n",
"My plan will not only lower costs to give families a fair shot, it will lower the deficit.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 19:\n",
"\n",
"Let each of us here tonight in this Chamber send an unmistakable signal to Ukraine and to the world. \n",
"\n",
"Please rise if you are able and show that, Yes, we the United States of America stand with the Ukrainian people. \n",
"\n",
"Throughout our history weve learned this lesson when dictators do not pay a price for their aggression they cause more chaos. \n",
"\n",
"They keep moving. \n",
"\n",
"And the costs and the threats to America and the world keep rising.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 20:\n",
"\n",
"Its based on DARPA—the Defense Department project that led to the Internet, GPS, and so much more. \n",
"\n",
"ARPA-H will have a singular purpose—to drive breakthroughs in cancer, Alzheimers, diabetes, and more. \n",
"\n",
"A unity agenda for the nation. \n",
"\n",
"We can do this. \n",
"\n",
"My fellow Americans—tonight , we have gathered in a sacred space—the citadel of our democracy. \n",
"\n",
"In this Capitol, generation after generation, Americans have debated great questions amid great strife, and have done great things.\n"
]
}
],
"source": [
"from langchain_community.document_loaders import TextLoader\n",
"from langchain_community.vectorstores.faiss import FAISS\n",
"from langchain_huggingface import HuggingFaceEmbeddings\n",
"from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
"\n",
"documents = TextLoader(\"../../how_to/state_of_the_union.txt\").load()\n",
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=100)\n",
"texts = text_splitter.split_documents(documents)\n",
"retriever = FAISS.from_documents(\n",
" texts, HuggingFaceEmbeddings(model_name=\"all-MiniLM-L6-v2\")\n",
").as_retriever(search_kwargs={\"k\": 20})\n",
"\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"docs = retriever.invoke(query)\n",
"pretty_print_docs(docs)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Reranking with InfinityRerank\n",
"Now let's wrap our base retriever with a `ContextualCompressionRetriever`. We'll use the `InfinityRerank` to rerank the returned results."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Document 1:\n",
"\n",
"One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n",
"\n",
"And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nations top legal minds, who will continue Justice Breyers legacy of excellence.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 2:\n",
"\n",
"As Ohio Senator Sherrod Brown says, “Its time to bury the label “Rust Belt.” \n",
"\n",
"Its time. \n",
"\n",
"But with all the bright spots in our economy, record job growth and higher wages, too many families are struggling to keep up with the bills. \n",
"\n",
"Inflation is robbing them of the gains they might otherwise feel. \n",
"\n",
"I get it. Thats why my top priority is getting prices under control.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 3:\n",
"\n",
"A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since shes been nominated, shes received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n",
"\n",
"And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.\n"
]
}
],
"source": [
"from infinity_client import Client\n",
"from langchain.retrievers import ContextualCompressionRetriever\n",
"from langchain_community.document_compressors.infinity_rerank import InfinityRerank\n",
"\n",
"client = Client(base_url=\"http://localhost:7997\")\n",
"\n",
"compressor = InfinityRerank(client=client, model=\"mixedbread-ai/mxbai-rerank-xsmall-v1\")\n",
"compression_retriever = ContextualCompressionRetriever(\n",
" base_compressor=compressor, base_retriever=retriever\n",
")\n",
"\n",
"compressed_docs = compression_retriever.invoke(\n",
" \"What did the president say about Ketanji Jackson Brown\"\n",
")\n",
"pretty_print_docs(compressed_docs)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.13"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -84,16 +84,20 @@
" You need to wait a couple of seconds for the database to start on `http://localhost:7200/`. The Star Wars dataset `starwars-data.trig` is automatically loaded into the `langchain` repository. The local SPARQL endpoint `http://localhost:7200/repositories/langchain` can be used to run queries against. You can also open the GraphDB Workbench from your favourite web browser `http://localhost:7200/sparql` where you can make queries interactively.\n",
"* Set up working environment\n",
"\n",
"If you use `conda`, create and activate a new conda env (e.g. `conda create -n graph_ontotext_graphdb_qa python=3.9.18`).\n",
"If you use `conda`, create and activate a new conda environment, e.g.:\n",
"\n",
"```\n",
"conda create -n graph_ontotext_graphdb_qa python=3.12\n",
"conda activate graph_ontotext_graphdb_qa\n",
"```\n",
"\n",
"Install the following libraries:\n",
"\n",
"```\n",
"pip install jupyter==1.0.0\n",
"pip install openai==1.6.1\n",
"pip install rdflib==7.0.0\n",
"pip install langchain-openai==0.0.2\n",
"pip install langchain>=0.1.5\n",
"pip install jupyter==1.1.1\n",
"pip install rdflib==7.1.1\n",
"pip install langchain-community==0.3.4\n",
"pip install langchain-openai==0.2.4\n",
"```\n",
"\n",
"Run Jupyter with\n",
@@ -255,6 +259,7 @@
" ChatOpenAI(temperature=0, model_name=\"gpt-4-1106-preview\"),\n",
" graph=graph,\n",
" verbose=True,\n",
" allow_dangerous_requests=True,\n",
")"
]
},
@@ -332,6 +337,7 @@
"\u001b[32;1m\u001b[1;3mPREFIX : <https://swapi.co/vocabulary/>\n",
"PREFIX owl: <http://www.w3.org/2002/07/owl#>\n",
"PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>\n",
"PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>\n",
"\n",
"SELECT ?climate\n",
"WHERE {\n",
@@ -383,11 +389,9 @@
"\u001b[1m> Entering new OntotextGraphDBQAChain chain...\u001b[0m\n",
"Generated SPARQL:\n",
"\u001b[32;1m\u001b[1;3mPREFIX : <https://swapi.co/vocabulary/>\n",
"PREFIX owl: <http://www.w3.org/2002/07/owl#>\n",
"PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>\n",
"PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>\n",
"\n",
"SELECT (AVG(?boxOffice) AS ?averageBoxOffice)\n",
"SELECT (AVG(?boxOffice) AS ?averageBoxOfficeRevenue)\n",
"WHERE {\n",
" ?film a :Film .\n",
" ?film :boxOffice ?boxOfficeValue .\n",
@@ -559,7 +563,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.12.7"
}
},
"nbformat": 4,

View File

@@ -2368,6 +2368,102 @@
")"
]
},
{
"cell_type": "markdown",
"id": "7e6b9b1a",
"metadata": {},
"source": [
"## `Memcached` Cache\n",
"You can use [Memcached](https://www.memcached.org/) as a cache to cache prompts and responses through [pymemcache](https://github.com/pinterest/pymemcache).\n",
"\n",
"This cache requires the pymemcache dependency to be installed:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "b2e5e0b1",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU pymemcache"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "4c7ffe37",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.cache import MemcachedCache\n",
"from pymemcache.client.base import Client\n",
"\n",
"set_llm_cache(MemcachedCache(Client(\"localhost\")))"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "a4cfc48a",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"CPU times: user 32.8 ms, sys: 21 ms, total: 53.8 ms\n",
"Wall time: 343 ms\n"
]
},
{
"data": {
"text/plain": [
"'\\n\\nWhy did the chicken cross the road?\\n\\nTo get to the other side!'"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"%%time\n",
"# The first time, it is not yet in cache, so it should take longer\n",
"llm.invoke(\"Tell me a joke\")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "cb3b2bf5",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"CPU times: user 2.31 ms, sys: 850 µs, total: 3.16 ms\n",
"Wall time: 6.43 ms\n"
]
},
{
"data": {
"text/plain": [
"'\\n\\nWhy did the chicken cross the road?\\n\\nTo get to the other side!'"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"%%time\n",
"# The second time it is, so it goes faster\n",
"llm.invoke(\"Tell me a joke\")"
]
},
{
"cell_type": "markdown",
"id": "7019c991-0101-4f9c-b212-5729a5471293",

View File

@@ -85,7 +85,7 @@
"```python\n",
"import openai\n",
"\n",
"client = AzureOpenAI(\n",
"client = openai.AzureOpenAI(\n",
" api_version=\"2023-12-01-preview\",\n",
")\n",
"\n",

View File

@@ -217,7 +217,7 @@
"source": [
"## Chaining\n",
"\n",
"You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/concepts#langchain-expression-language-lcel)"
"You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/concepts/lcel)"
]
},
{
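The chaining code this section refers to falls outside the hunk. For context, a minimal LCEL sketch of composing a prompt template with a model (the model class here is illustrative, not taken from the notebook):

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI  # illustrative model choice

prompt = ChatPromptTemplate.from_template(
    "Translate the following to {language}: {text}"
)
llm = ChatOpenAI(model="gpt-4o-mini")

# The | operator composes runnables into a single chain.
chain = prompt | llm
chain.invoke({"language": "French", "text": "Hello, world!"})
```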

View File

@@ -335,7 +335,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/concepts#langchain-expression-language-lcel)"
"You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/concepts/lcel)"
]
},
{

View File

@@ -105,7 +105,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"To learn more about the LangChain Expressive Language and the available methods on an LLM, see the [LCEL Interface](/docs/concepts#interface)"
"To learn more about the LangChain Expressive Language and the available methods on an LLM, see the [LCEL Interface](/docs/concepts/runnables)"
]
}
],

View File

@@ -9,7 +9,7 @@
"**[SambaNova](https://sambanova.ai/)'s** [Sambastudio](https://sambanova.ai/technology/full-stack-ai-platform) is a platform that allows you to train, run batch inference jobs, and deploy online inference endpoints to run open source models that you fine tuned yourself.\n",
"\n",
":::caution\n",
"You are currently on a page documenting the use of SambaStudio models as [text completion models](/docs/concepts/#llms). We recommend you to use the [chat completion models](/docs/concepts/#chat-models).\n",
"You are currently on a page documenting the use of SambaStudio models as [text completion models](/docs/concepts/text_llms). We recommend you to use the [chat completion models](/docs/concepts/chat_models).\n",
"\n",
"You may be looking for [SambaStudio Chat Models](/docs/integrations/chat/sambastudio/) .\n",
":::\n",

View File

@@ -266,8 +266,18 @@
"from langchain_community.llms import VLLM\n",
"from vllm.lora.request import LoRARequest\n",
"\n",
"llm = VLLM(model=\"meta-llama/Llama-2-7b-hf\", enable_lora=True)\n",
"\n",
"llm = VLLM(\n",
" model=\"meta-llama/Llama-3.2-3B-Instruct\",\n",
" max_new_tokens=300,\n",
" top_k=1,\n",
" top_p=0.90,\n",
" temperature=0.1,\n",
" vllm_kwargs={\n",
" \"gpu_memory_utilization\": 0.5,\n",
" \"enable_lora\": True,\n",
" \"max_model_len\": 350,\n",
" },\n",
")\n",
"LoRA_ADAPTER_PATH = \"path/to/adapter\"\n",
"lora_adapter = LoRARequest(\"lora_adapter\", 1, LoRA_ADAPTER_PATH)\n",
"\n",

View File

@@ -14,23 +14,13 @@ Databricks embraces the LangChain ecosystem in various ways:
Installation
------------
First-party Databricks integrations are available in the langchain-databricks partner package.
First-party Databricks integrations are now available in the databricks-langchain partner package.
```
pip install langchain-databricks
pip install databricks-langchain
```
🚧 Upcoming Package Consolidation Notice
This package (`langchain-databricks`) will soon be consolidated into a new package: `databricks-langchain`. The new package will serve as the primary hub for all Databricks Langchain integrations.
Whats Changing?
In the coming months, `databricks-langchain` will include all features currently in `langchain-databricks`, as well as additional integrations to provide a unified experience for Databricks users.
What You Need to Know
For now, continue to use `langchain-databricks` as usual. When `databricks-langchain` is ready, well provide clear migration instructions to make the transition seamless. During the transition period, `langchain-databricks` will remain operational, and updates will be shared here with timelines and guidance.
Thank you for your support as we work toward an improved, streamlined experience!
The legacy langchain-databricks partner package is still available but will soon be deprecated.
Chat Model
----------
@@ -38,7 +28,7 @@ Chat Model
`ChatDatabricks` is a Chat Model class to access chat endpoints hosted on Databricks, including state-of-the-art models such as Llama3, Mixtral, and DBRX, as well as your own fine-tuned models.
```
from langchain_databricks import ChatDatabricks
from databricks_langchain import ChatDatabricks
chat_model = ChatDatabricks(endpoint="databricks-meta-llama-3-70b-instruct")
```
@@ -69,7 +59,7 @@ Embeddings
`DatabricksEmbeddings` is an Embeddings class to access text-embedding endpoints hosted on Databricks, including state-of-the-art models such as BGE, as well as your own fine-tuned models.
```
from langchain_databricks import DatabricksEmbeddings
from databricks_langchain import DatabricksEmbeddings
embeddings = DatabricksEmbeddings(endpoint="databricks-bge-large-en")
```
@@ -83,7 +73,7 @@ Vector Search
Databricks Vector Search is a serverless similarity search engine that allows you to store a vector representation of your data, including metadata, in a vector database. With Vector Search, you can create auto-updating vector search indexes from [Delta](https://docs.databricks.com/en/introduction/delta-comparison.html) tables managed by [Unity Catalog](https://www.databricks.com/product/unity-catalog) and query them with a simple API to return the most similar vectors.
```
from langchain_databricks.vectorstores import DatabricksVectorSearch
from databricks_langchain import DatabricksVectorSearch
dvs = DatabricksVectorSearch(
endpoint="<YOUT_ENDPOINT_NAME>",

View File

@@ -0,0 +1,34 @@
# Memcached
> [Memcached](https://www.memcached.org/) is a free & open source, high-performance, distributed memory object caching system,
> generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load.
This page covers how to use Memcached with LangChain, using [pymemcache](https://github.com/pinterest/pymemcache) as
a client to connect to an already running Memcached instance.
## Installation and Setup
```bash
pip install pymemcache
```
## LLM Cache
To integrate a Memcached Cache into your application:
```python
from langchain.globals import set_llm_cache
from langchain_openai import OpenAI
from langchain_community.cache import MemcachedCache
from pymemcache.client.base import Client
llm = OpenAI(model="gpt-3.5-turbo-instruct", n=2, best_of=2)
set_llm_cache(MemcachedCache(Client('localhost')))
# The first time, it is not yet in cache, so it should take longer
llm.invoke("Which city is the most crowded city in the USA?")
# The second time it is, so it goes faster
llm.invoke("Which city is the most crowded city in the USA?")
```
Learn more in the [example notebook](/docs/integrations/llm_caching#memcached-cache)

View File

@@ -4,15 +4,14 @@
> which is grounded in the data, documents, and knowledge that they have (technically, it is Retrieval-Augmented-Generation-as-a-service).
**Vectara Overview:**
`Vectara` is RAG-as-a-service, providing all the components of RAG behind an easy-to-use API, including:
[Vectara](https://vectara.com/) is the trusted AI Assistant and Agent platform which focuses on enterprise readiness for mission-critical applications.
Vectara serverless RAG-as-a-service provides all the components of RAG behind an easy-to-use API, including:
1. A way to extract text from files (PDF, PPT, DOCX, etc.)
2. ML-based chunking that provides state-of-the-art performance.
3. The [Boomerang](https://vectara.com/how-boomerang-takes-retrieval-augmented-generation-to-the-next-level-via-grounded-generation/) embeddings model.
4. Its own internal vector database where text chunks and embedding vectors are stored.
5. A query service that automatically encodes the query into embedding, and retrieves the most relevant text segments
(including support for [Hybrid Search](https://docs.vectara.com/docs/api-reference/search-apis/lexical-matching) and
[MMR](https://vectara.com/get-diverse-results-and-comprehensive-summaries-with-vectaras-mmr-reranker/))
7. An LLM to for creating a [generative summary](https://docs.vectara.com/docs/learn/grounded-generation/grounded-generation-overview), based on the retrieved documents (context), including citations.
5. A query service that automatically encodes the query into an embedding and retrieves the most relevant text segments, including support for [Hybrid Search](https://docs.vectara.com/docs/api-reference/search-apis/lexical-matching) as well as multiple reranking options such as the [multi-lingual relevance reranker](https://www.vectara.com/blog/deep-dive-into-vectara-multilingual-reranker-v1-state-of-the-art-reranker-across-100-languages), [MMR](https://vectara.com/get-diverse-results-and-comprehensive-summaries-with-vectaras-mmr-reranker/), and the [UDF reranker](https://www.vectara.com/blog/rag-with-user-defined-functions-based-reranking).
6. An LLM for creating a [generative summary](https://docs.vectara.com/docs/learn/grounded-generation/grounded-generation-overview) based on the retrieved documents (context), including citations.
For more information:
- [Documentation](https://docs.vectara.com/docs/)
@@ -22,7 +21,7 @@ For more information:
## Installation and Setup
To use `Vectara` with LangChain, no special installation steps are required.
To get started, [sign up](https://vectara.com/integrations/langchain) for a free Vectara account (if you don't already have one),
To get started, [sign up](https://vectara.com/integrations/langchain) for a free Vectara trial,
and follow the [quickstart](https://docs.vectara.com/docs/quickstart) guide to create a corpus and an API key.
Once you have these, you can provide them as arguments to the Vectara `vectorstore`, or you can set them as environment variables.
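A minimal sketch of both options (the variable and parameter names follow the `langchain_community` Vectara integration; treat them as assumptions if your version differs):

```python
import os

from langchain_community.vectorstores import Vectara

# Option 1: environment variables
os.environ["VECTARA_CUSTOMER_ID"] = "<customer_id>"
os.environ["VECTARA_CORPUS_ID"] = "<corpus_id>"
os.environ["VECTARA_API_KEY"] = "<api_key>"
vectara = Vectara()

# Option 2: explicit constructor arguments
vectara = Vectara(
    vectara_customer_id="<customer_id>",
    vectara_corpus_id="<corpus_id>",
    vectara_api_key="<api_key>",
)
```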

View File

@@ -7,19 +7,19 @@
"source": [
"# Vectara Chat\n",
"\n",
"[Vectara](https://vectara.com/) provides a Trusted Generative AI platform, allowing organizations to rapidly create a ChatGPT-like experience (an AI assistant) which is grounded in the data, documents, and knowledge that they have (technically, it is Retrieval-Augmented-Generation-as-a-service). \n",
"[Vectara](https://vectara.com/) is the trusted AI Assistant and Agent platform which focuses on enterprise readiness for mission-critical applications.\n",
"\n",
"Vectara serverless RAG-as-a-service provides all the components of RAG behind an easy-to-use API, including:\n",
"1. A way to extract text from files (PDF, PPT, DOCX, etc)\n",
"2. ML-based chunking that provides state of the art performance.\n",
"3. The [Boomerang](https://vectara.com/how-boomerang-takes-retrieval-augmented-generation-to-the-next-level-via-grounded-generation/) embeddings model.\n",
"4. Its own internal vector database where text chunks and embedding vectors are stored.\n",
"5. A query service that automatically encodes the query into embedding, and retrieves the most relevant text segments (including support for [Hybrid Search](https://docs.vectara.com/docs/api-reference/search-apis/lexical-matching) and [MMR](https://vectara.com/get-diverse-results-and-comprehensive-summaries-with-vectaras-mmr-reranker/))\n",
"7. An LLM to for creating a [generative summary](https://docs.vectara.com/docs/learn/grounded-generation/grounded-generation-overview), based on the retrieved documents (context), including citations.\n",
"5. A query service that automatically encodes the query into embedding, and retrieves the most relevant text segments (including support for [Hybrid Search](https://docs.vectara.com/docs/api-reference/search-apis/lexical-matching) as well as multiple reranking options such as the [multi-lingual relevance reranker](https://www.vectara.com/blog/deep-dive-into-vectara-multilingual-reranker-v1-state-of-the-art-reranker-across-100-languages), [MMR](https://vectara.com/get-diverse-results-and-comprehensive-summaries-with-vectaras-mmr-reranker/), [UDF reranker](https://www.vectara.com/blog/rag-with-user-defined-functions-based-reranking). \n",
"6. An LLM to for creating a [generative summary](https://docs.vectara.com/docs/learn/grounded-generation/grounded-generation-overview), based on the retrieved documents (context), including citations.\n",
"\n",
"See the [Vectara API documentation](https://docs.vectara.com/docs/) for more information on how to use the API.\n",
"\n",
"This notebook shows how to use Vectara's [Chat](https://docs.vectara.com/docs/api-reference/chat-apis/chat-apis-overview) functionality."
"This notebook shows how to use Vectara's [Chat](https://docs.vectara.com/docs/api-reference/chat-apis/chat-apis-overview) functionality, which provides automatic storage of conversation history and ensures follow up questions consider that history."
]
},
{
@@ -30,7 +30,7 @@
"# Getting Started\n",
"\n",
"To get started, use the following steps:\n",
"1. If you don't already have one, [Sign up](https://www.vectara.com/integrations/langchain) for your free Vectara account. Once you have completed your sign up you will have a Vectara customer ID. You can find your customer ID by clicking on your name, on the top-right of the Vectara console window.\n",
"1. If you don't already have one, [Sign up](https://www.vectara.com/integrations/langchain) for your free Vectara trial. Once you have completed your sign up you will have a Vectara customer ID. You can find your customer ID by clicking on your name, on the top-right of the Vectara console window.\n",
"2. Within your account you can create one or more corpora. Each corpus represents an area that stores text data upon ingest from input documents. To create a corpus, use the **\"Create Corpus\"** button. You then provide a name to your corpus as well as a description. Optionally you can define filtering attributes and apply some advanced options. If you click on your created corpus, you can see its name and corpus ID right on the top.\n",
"3. Next you'll need to create API keys to access the corpus. Click on the **\"Access Control\"** tab in the corpus view and then the **\"Create API Key\"** button. Give your key a name, and choose whether you want query-only or query+index for your key. Click \"Create\" and you now have an active API key. Keep this key confidential. \n",
"\n",

View File

@@ -7,15 +7,15 @@
"source": [
"# Vectara self-querying \n",
"\n",
"[Vectara](https://vectara.com/) provides a Trusted Generative AI platform, allowing organizations to rapidly create a ChatGPT-like experience (an AI assistant) which is grounded in the data, documents, and knowledge that they have (technically, it is Retrieval-Augmented-Generation-as-a-service). \n",
"[Vectara](https://vectara.com/) is the trusted AI Assistant and Agent platform which focuses on enterprise readiness for mission-critical applications.\n",
"\n",
"Vectara serverless RAG-as-a-service provides all the components of RAG behind an easy-to-use API, including:\n",
"1. A way to extract text from files (PDF, PPT, DOCX, etc)\n",
"2. ML-based chunking that provides state of the art performance.\n",
"3. The [Boomerang](https://vectara.com/how-boomerang-takes-retrieval-augmented-generation-to-the-next-level-via-grounded-generation/) embeddings model.\n",
"4. Its own internal vector database where text chunks and embedding vectors are stored.\n",
"5. A query service that automatically encodes the query into embedding, and retrieves the most relevant text segments (including support for [Hybrid Search](https://docs.vectara.com/docs/api-reference/search-apis/lexical-matching) and [MMR](https://vectara.com/get-diverse-results-and-comprehensive-summaries-with-vectaras-mmr-reranker/))\n",
"7. An LLM to for creating a [generative summary](https://docs.vectara.com/docs/learn/grounded-generation/grounded-generation-overview), based on the retrieved documents (context), including citations.\n",
"5. A query service that automatically encodes the query into embedding, and retrieves the most relevant text segments, including support for [Hybrid Search](https://docs.vectara.com/docs/api-reference/search-apis/lexical-matching) as well as multiple reranking options such as the [multi-lingual relevance reranker](https://www.vectara.com/blog/deep-dive-into-vectara-multilingual-reranker-v1-state-of-the-art-reranker-across-100-languages), [MMR](https://vectara.com/get-diverse-results-and-comprehensive-summaries-with-vectaras-mmr-reranker/), [UDF reranker](https://www.vectara.com/blog/rag-with-user-defined-functions-based-reranking). \n",
"6. An LLM to for creating a [generative summary](https://docs.vectara.com/docs/learn/grounded-generation/grounded-generation-overview), based on the retrieved documents (context), including citations.\n",
"\n",
"See the [Vectara API documentation](https://docs.vectara.com/docs/) for more information on how to use the API.\n",
"\n",
@@ -30,7 +30,7 @@
"# Getting Started\n",
"\n",
"To get started, use the following steps:\n",
"1. If you don't already have one, [Sign up](https://www.vectara.com/integrations/langchain) for your free Vectara account. Once you have completed your sign up you will have a Vectara customer ID. You can find your customer ID by clicking on your name, on the top-right of the Vectara console window.\n",
"1. If you don't already have one, [Sign up](https://www.vectara.com/integrations/langchain) for your free Vectara trial. Once you have completed your sign up you will have a Vectara customer ID. You can find your customer ID by clicking on your name, on the top-right of the Vectara console window.\n",
"2. Within your account you can create one or more corpora. Each corpus represents an area that stores text data upon ingest from input documents. To create a corpus, use the **\"Create Corpus\"** button. You then provide a name to your corpus as well as a description. Optionally you can define filtering attributes and apply some advanced options. If you click on your created corpus, you can see its name and corpus ID right on the top.\n",
"3. Next you'll need to create API keys to access the corpus. Click on the **\"Access Control\"** tab in the corpus view and then the **\"Create API Key\"** button. Give your key a name, and choose whether you want query-only or query+index for your key. Click \"Create\" and you now have an active API key. Keep this key confidential. \n",
"\n",

View File

@@ -7,7 +7,7 @@ sidebar_class_name: hidden
import { CategoryTable, IndexTable } from "@theme/FeatureTables";
[Embedding models](/docs/concepts#embedding-models) create a vector representation of a piece of text.
[Embedding models](/docs/concepts/embedding_models) create a vector representation of a piece of text.
This page documents integrations with various model providers that allow you to use embeddings in LangChain.
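For instance, a minimal sketch (OpenAI is chosen purely for illustration; every provider below exposes the same `embed_query`/`embed_documents` interface):

```python
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(model="text-embedding-3-small")

# One query string -> one vector
query_vector = embeddings.embed_query("What is LangChain?")

# Several documents -> one vector per document
doc_vectors = embeddings.embed_documents(["first doc", "second doc"])
print(len(query_vector), len(doc_vectors))
```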

View File

@@ -17,14 +17,14 @@
"source": [
"# ClovaXEmbeddings\n",
"\n",
"This notebook covers how to get started with embedding models provided by CLOVA Studio. For detailed documentation on `ClovaXEmbeddings` features and configuration options, please refer to the [API reference](https://python.langchain.com/api_reference/community/embeddings/langchain_community.naver.ClovaXEmbeddings.html).\n",
"This notebook covers how to get started with embedding models provided by CLOVA Studio. For detailed documentation on `ClovaXEmbeddings` features and configuration options, please refer to the [API reference](https://python.langchain.com/api_reference/community/embeddings/langchain_community.embeddings.naver.ClovaXEmbeddings.html).\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Provider | Package |\n",
"|:--------:|:-------:|\n",
"| [Naver](/docs/integrations/providers/naver.mdx) | [langchain-community](https://python.langchain.com/api_reference/community/embeddings/langchain_community.naver.ClovaXEmbeddings.html) |\n",
"| [Naver](/docs/integrations/providers/naver.mdx) | [langchain-community](https://python.langchain.com/api_reference/community/embeddings/langchain_community.embeddings.naver.ClovaXEmbeddings.html) |\n",
"\n",
"## Setup\n",
"\n",

View File

@@ -0,0 +1,332 @@
{
"cells": [
{
"cell_type": "raw",
"id": "afaf8039",
"metadata": {},
"source": [
"---\n",
"sidebar_label: CDP\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# CDP Agentkit Toolkit\n",
"\n",
"The `CDP Agentkit` toolkit contains tools that enable an LLM agent to interact with the [Coinbase Developer Platform](https://docs.cdp.coinbase.com/). The toolkit provides a wrapper around the CDP SDK, allowing agents to perform onchain operations like transfers, trades, and smart contract interactions.\n",
"\n",
"## Overview\n",
"\n",
"### Integration details\n",
"\n",
"| Class | Package | Serializable | JS support | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: |\n",
"| CdpToolkit | `cdp-langchain` | ❌ | ❌ | ![PyPI - Version](https://img.shields.io/pypi/v/cdp-langchain?style=flat-square&label=%20) |\n",
"\n",
"### Tool features\n",
"\n",
"The toolkit provides the following tools:\n",
"\n",
"1. **get_wallet_details** - Get details about the MPC Wallet\n",
"2. **get_balance** - Get balance for specific assets\n",
"3. **request_faucet_funds** - Request test tokens from faucet\n",
"4. **transfer** - Transfer assets between addresses\n",
"5. **trade** - Trade assets (Mainnet only)\n",
"6. **deploy_token** - Deploy ERC-20 token contracts\n",
"7. **mint_nft** - Mint NFTs from existing contracts\n",
"8. **deploy_nft** - Deploy new NFT contracts\n",
"9. **register_basename** - Register a basename for the wallet\n",
"\n",
"We encourage you to add your own tools, both using CDP and web2 APIs, to create an agent that is tailored to your needs.\n",
"\n",
"## Setup\n",
"\n",
"At a high-level, we will:\n",
"\n",
"1. Install the langchain package\n",
"2. Set up your CDP API credentials\n",
"3. Initialize the CDP wrapper and toolkit\n",
"4. Pass the tools to your agent with `toolkit.get_tools()`"
]
},
{
"cell_type": "markdown",
"id": "72ee0c4b",
"metadata": {},
"source": [
"If you want to get automated tracing from runs of individual tools, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a15d341e",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"id": "0730d6a1",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"This toolkit lives in the `cdp-langchain` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "652d6238",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU cdp-langchain"
]
},
{
"cell_type": "markdown",
"id": "a38cde65",
"metadata": {},
"source": [
"#### Set Environment Variables\n",
"\n",
"To use this toolkit, you must first set the following environment variables to access the [CDP APIs](https://docs.cdp.coinbase.com/mpc-wallet/docs/quickstart) to create wallets and interact onchain. You can sign up for an API key for free on the [CDP Portal](https://cdp.coinbase.com/):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cb09c344",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"for env_var in [\n",
" \"CDP_API_KEY_NAME\",\n",
" \"CDP_API_KEY_PRIVATE_KEY\",\n",
"]:\n",
" if not os.getenv(env_var):\n",
" os.environ[env_var] = getpass.getpass(f\"Enter your {env_var}: \")\n",
"\n",
"# Optional: Set network (defaults to base-sepolia)\n",
"os.environ[\"NETWORK_ID\"] = \"base-sepolia\" # or \"base-mainnet\""
]
},
{
"cell_type": "markdown",
"id": "5c5f2839",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our toolkit:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "51a60dbe",
"metadata": {},
"outputs": [],
"source": [
"from cdp_langchain.agent_toolkits import CdpToolkit\n",
"from cdp_langchain.utils import CdpAgentkitWrapper\n",
"\n",
"# Initialize CDP wrapper\n",
"cdp = CdpAgentkitWrapper()\n",
"\n",
"# Create toolkit from wrapper\n",
"toolkit = CdpToolkit.from_cdp_agentkit_wrapper(cdp)"
]
},
{
"cell_type": "markdown",
"id": "d11245ad",
"metadata": {},
"source": [
"## Tools\n",
"\n",
"View [available tools](#tool-features):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "310bf18e",
"metadata": {},
"outputs": [],
"source": [
"tools = toolkit.get_tools()\n",
"for tool in tools:\n",
" print(tool.name)"
]
},
{
"cell_type": "markdown",
"id": "23e11cc9",
"metadata": {},
"source": [
"## Use within an agent\n",
"\n",
"We will need a LLM or chat model:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d1ee55bc",
"metadata": {},
"outputs": [],
"source": [
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-4o-mini\")"
]
},
{
"cell_type": "markdown",
"id": "3a5bb5ca",
"metadata": {},
"source": [
"Initialize the agent with the tools:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f8a2c4b1",
"metadata": {},
"outputs": [],
"source": [
"from langgraph.prebuilt import create_react_agent\n",
"\n",
"tools = toolkit.get_tools()\n",
"agent_executor = create_react_agent(llm, tools)"
]
},
{
"cell_type": "markdown",
"id": "b4a7c9d2",
"metadata": {},
"source": [
"Example usage:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c9a8e4f3",
"metadata": {},
"outputs": [],
"source": [
"example_query = \"Send 0.005 ETH to john2879.base.eth\"\n",
"\n",
"events = agent_executor.stream(\n",
" {\"messages\": [(\"user\", example_query)]},\n",
" stream_mode=\"values\",\n",
")\n",
"for event in events:\n",
" event[\"messages\"][-1].pretty_print()"
]
},
{
"cell_type": "markdown",
"id": "e5a7c9d4",
"metadata": {},
"source": [
"Expected output:\n",
"```\n",
"Transferred 0.005 of eth to john2879.base.eth.\n",
"Transaction hash for the transfer: 0x78c7c2878659a0de216d0764fc87eff0d38b47f3315fa02ba493a83d8e782d1e\n",
"Transaction link for the transfer: https://sepolia.basescan.org/tx/0x78c7c2878659a0de216d0764fc87eff0d38b47f3315fa02ba493a83d8e782d1\n",
"```"
]
},
{
"cell_type": "markdown",
"id": "f5a7c9d5",
"metadata": {},
"source": [
"## CDP Toolkit Specific Features\n",
"\n",
"### Wallet Management\n",
"\n",
"The toolkit maintains an MPC wallet. The wallet data can be exported and imported to persist between sessions:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "g5a7c9d6",
"metadata": {},
"outputs": [],
"source": [
"# Export wallet data\n",
"wallet_data = cdp.export_wallet()\n",
"\n",
"# Import wallet data\n",
"values = {\"cdp_wallet_data\": wallet_data}\n",
"cdp = CdpAgentkitWrapper(**values)"
]
},
{
"cell_type": "markdown",
"id": "h5a7c9d7",
"metadata": {},
"source": [
"### Network Support\n",
"\n",
"The toolkit supports [multiple networks](https://docs.cdp.coinbase.com/cdp-sdk/docs/networks)\n",
"\n",
"### Gasless Transactions\n",
"\n",
"Some operations support gasless transactions on Base Mainnet:\n",
"- USDC transfers\n",
"- EURC transfers\n",
"- cbBTC transfers"
]
},
{
"cell_type": "markdown",
"id": "i5a7c9d8",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all CDP features and configurations head to the [CDP docs](https://docs.cdp.coinbase.com/mpc-wallet/docs/welcome)."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
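A note on the wallet-persistence cell above: to keep the same wallet between sessions, the exported data has to be stored somewhere durable. A hedged sketch, assuming `export_wallet()` returns a serializable string as used in the notebook (the file path is an illustrative assumption):

```python
# Persist wallet data to disk between sessions (path is an assumption).
WALLET_FILE = "wallet_data.txt"

# After creating the wrapper once:
wallet_data = cdp.export_wallet()
with open(WALLET_FILE, "w") as f:
    f.write(wallet_data)

# On a later run, restore the same wallet:
with open(WALLET_FILE) as f:
    cdp = CdpAgentkitWrapper(cdp_wallet_data=f.read())
```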

View File

@@ -6,7 +6,7 @@
"source": [
"# Databricks Unity Catalog (UC)\n",
"\n",
"This notebook shows how to use UC functions as LangChain tools.\n",
"This notebook shows how to use UC functions as LangChain tools, with both LangChain and LangGraph agent APIs.\n",
"\n",
"See Databricks documentation ([AWS](https://docs.databricks.com/en/sql/language-manual/sql-ref-syntax-ddl-create-sql-function.html)|[Azure](https://learn.microsoft.com/en-us/azure/databricks/sql/language-manual/sql-ref-syntax-ddl-create-sql-function)|[GCP](https://docs.gcp.databricks.com/en/sql/language-manual/sql-ref-syntax-ddl-create-sql-function.html)) to learn how to create SQL or Python functions in UC. Do not skip function and parameter comments, which are critical for LLMs to call functions properly.\n",
"\n",
@@ -34,11 +34,19 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install --upgrade --quiet databricks-sdk langchain-community mlflow"
"%pip install --upgrade --quiet databricks-sdk langchain-community langchain-databricks langgraph mlflow"
]
},
{
@@ -47,7 +55,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_models.databricks import ChatDatabricks\n",
"from langchain_databricks import ChatDatabricks\n",
"\n",
"llm = ChatDatabricks(endpoint=\"databricks-meta-llama-3-70b-instruct\")"
]
@@ -58,6 +66,7 @@
"metadata": {},
"outputs": [],
"source": [
"from databricks.sdk import WorkspaceClient\n",
"from langchain_community.tools.databricks import UCFunctionToolkit\n",
"\n",
"tools = (\n",
@@ -76,9 +85,16 @@
},
{
"cell_type": "markdown",
"metadata": {},
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"(Optional) To increase the retry time for getting a function execution response, set environment variable UC_TOOL_CLIENT_EXECUTION_TIMEOUT. Default retry time value is 120s."
"(Optional) To increase the retry time for getting a function execution response, set environment variable UC_TOOL_CLIENT_EXECUTION_TIMEOUT. Default retry time value is 120s.",
"## LangGraph agent example"
]
},
{
@@ -92,9 +108,68 @@
"os.environ[\"UC_TOOL_CLIENT_EXECUTION_TIMEOUT\"] = \"200\""
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"## LangGraph agent example"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
"data": {
"text/plain": [
"{'messages': [HumanMessage(content='36939 * 8922.4', additional_kwargs={}, response_metadata={}, id='1a10b10b-8e37-48c7-97a1-cac5006228d5'),\n",
" AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_a8f3986f-4b91-40a3-8d6d-39f431dab69b', 'type': 'function', 'function': {'name': 'main__tools__python_exec', 'arguments': '{\"code\": \"print(36939 * 8922.4)\"}'}}]}, response_metadata={'prompt_tokens': 771, 'completion_tokens': 29, 'total_tokens': 800}, id='run-865c3613-20ba-4e80-afc8-fde1cfb26e5a-0', tool_calls=[{'name': 'main__tools__python_exec', 'args': {'code': 'print(36939 * 8922.4)'}, 'id': 'call_a8f3986f-4b91-40a3-8d6d-39f431dab69b', 'type': 'tool_call'}]),\n",
" ToolMessage(content='{\"format\": \"SCALAR\", \"value\": \"329584533.59999996\\\\n\", \"truncated\": false}', name='main__tools__python_exec', id='8b63d4c8-1a3d-46a5-a719-393b2ef36770', tool_call_id='call_a8f3986f-4b91-40a3-8d6d-39f431dab69b'),\n",
" AIMessage(content='The result of the multiplication is:\\n\\n329584533.59999996', additional_kwargs={}, response_metadata={'prompt_tokens': 846, 'completion_tokens': 22, 'total_tokens': 868}, id='run-22772404-611b-46e4-9956-b85e4a385f0f-0')]}"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langgraph.prebuilt import create_react_agent\n",
"\n",
"agent = create_react_agent(\n",
" llm,\n",
" tools,\n",
" state_modifier=\"You are a helpful assistant. Make sure to use tool for information.\",\n",
")\n",
"agent.invoke({\"messages\": [{\"role\": \"user\", \"content\": \"36939 * 8922.4\"}]})"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"## LangChain agent example"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
@@ -118,7 +193,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 6,
"metadata": {},
"outputs": [
{
@@ -132,7 +207,9 @@
"Invoking: `main__tools__python_exec` with `{'code': 'print(36939 * 8922.4)'}`\n",
"\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3m{\"format\": \"SCALAR\", \"value\": \"329584533.59999996\\n\", \"truncated\": false}\u001b[0m\u001b[32;1m\u001b[1;3mThe result of the multiplication 36939 * 8922.4 is 329,584,533.60.\u001b[0m\n",
"\u001b[0m\u001b[36;1m\u001b[1;3m{\"format\": \"SCALAR\", \"value\": \"329584533.59999996\\n\", \"truncated\": false}\u001b[0m\u001b[32;1m\u001b[1;3mThe result of the multiplication is:\n",
"\n",
"329584533.59999996\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
@@ -141,10 +218,10 @@
"data": {
"text/plain": [
"{'input': '36939 * 8922.4',\n",
" 'output': 'The result of the multiplication 36939 * 8922.4 is 329,584,533.60.'}"
" 'output': 'The result of the multiplication is:\\n\\n329584533.59999996'}"
]
},
"execution_count": 5,
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
@@ -153,18 +230,11 @@
"agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)\n",
"agent_executor.invoke({\"input\": \"36939 * 8922.4\"})"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "llm",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@@ -178,9 +248,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.11.10"
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 4
}

View File

@@ -0,0 +1,270 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Google Books"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Overview"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Integration details\n",
"\n",
"The Google Books tool that supports the ReAct pattern and allows you to search the Google Books API. Google Books is the largest API in the world that keeps track of books in a curated manner. It has over 40 million entries, which can give users a significant amount of data."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Tool features\n",
"\n",
"Currently the tool has the following capabilities:\n",
"- Gathers the relevant information from the Google Books API using a key word search\n",
"- Formats the information into a readable output, and return the result to the agent"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"Make sure `langchain-community` is installed."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "shellscript"
}
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-community"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Credentials\n",
"\n",
"You will need an API key from Google Books. You can do this by visiting and following the steps at [https://developers.google.com/books/docs/v1/using#APIKey](https://developers.google.com/books/docs/v1/using#APIKey).\n",
"\n",
"Then you will need to set the environment variable `GOOGLE_BOOKS_API_KEY` to your Google Books API key."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"To instantiate the tool import the Google Books tool and set your credentials."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"from langchain_community.tools.google_books import GoogleBooksQueryRun\n",
"from langchain_community.utilities.google_books import GoogleBooksAPIWrapper\n",
"\n",
"os.environ[\"GOOGLE_BOOKS_API_KEY\"] = \"<your Google Books API key>\"\n",
"tool = GoogleBooksQueryRun(api_wrapper=GoogleBooksAPIWrapper())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Invocation\n",
"\n",
"You can invoke the tool by calling the `run` method."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Here are 5 suggestions for books related to ai:\n",
"\n",
"1. \"AI's Take on the Stigma Against AI-Generated Content\" by Sandy Y. Greenleaf: In a world where artificial intelligence (AI) is rapidly advancing and transforming various industries, a new form of content creation has emerged: AI-generated content. However, despite its potential to revolutionize the way we produce and consume information, AI-generated content often faces a significant stigma. \"AI's Take on the Stigma Against AI-Generated Content\" is a groundbreaking book that delves into the heart of this issue, exploring the reasons behind the stigma and offering a fresh, unbiased perspective on the topic. Written from the unique viewpoint of an AI, this book provides readers with a comprehensive understanding of the challenges and opportunities surrounding AI-generated content. Through engaging narratives, thought-provoking insights, and real-world examples, this book challenges readers to reconsider their preconceptions about AI-generated content. It explores the potential benefits of embracing this technology, such as increased efficiency, creativity, and accessibility, while also addressing the concerns and drawbacks that contribute to the stigma. As you journey through the pages of this book, you'll gain a deeper understanding of the complex relationship between humans and AI in the realm of content creation. You'll discover how AI can be used as a tool to enhance human creativity, rather than replace it, and how collaboration between humans and machines can lead to unprecedented levels of innovation. Whether you're a content creator, marketer, business owner, or simply someone curious about the future of AI and its impact on our society, \"AI's Take on the Stigma Against AI-Generated Content\" is an essential read. With its engaging writing style, well-researched insights, and practical strategies for navigating this new landscape, this book will leave you equipped with the knowledge and tools needed to embrace the AI revolution and harness its potential for success. Prepare to have your assumptions challenged, your mind expanded, and your perspective on AI-generated content forever changed. Get ready to embark on a captivating journey that will redefine the way you think about the future of content creation.\n",
"You can read more at https://play.google.com/store/books/details?id=4iH-EAAAQBAJ&source=gbs_api\n",
"\n",
"2. \"AI Strategies For Web Development\" by Anderson Soares Furtado Oliveira: From fundamental to advanced strategies, unlock useful insights for creating innovative, user-centric websites while navigating the evolving landscape of AI ethics and security Key Features Explore AI's role in web development, from shaping projects to architecting solutions Master advanced AI strategies to build cutting-edge applications Anticipate future trends by exploring next-gen development environments, emerging interfaces, and security considerations in AI web development Purchase of the print or Kindle book includes a free PDF eBook Book Description If you're a web developer looking to leverage the power of AI in your projects, then this book is for you. Written by an AI and ML expert with more than 15 years of experience, AI Strategies for Web Development takes you on a transformative journey through the dynamic intersection of AI and web development, offering a hands-on learning experience.The first part of the book focuses on uncovering the profound impact of AI on web projects, exploring fundamental concepts, and navigating popular frameworks and tools. As you progress, you'll learn how to build smart AI applications with design intelligence, personalized user journeys, and coding assistants. Later, you'll explore how to future-proof your web development projects using advanced AI strategies and understand AI's impact on jobs. Toward the end, you'll immerse yourself in AI-augmented development, crafting intelligent web applications and navigating the ethical landscape.Packed with insights into next-gen development environments, AI-augmented practices, emerging realities, interfaces, and security governance, this web development book acts as your roadmap to staying ahead in the AI and web development domain. What you will learn Build AI-powered web projects with optimized models Personalize UX dynamically with AI, NLP, chatbots, and recommendations Explore AI coding assistants and other tools for advanced web development Craft data-driven, personalized experiences using pattern recognition Architect effective AI solutions while exploring the future of web development Build secure and ethical AI applications following TRiSM best practices Explore cutting-edge AI and web development trends Who this book is for This book is for web developers with experience in programming languages and an interest in keeping up with the latest trends in AI-powered web development. Full-stack, front-end, and back-end developers, UI/UX designers, software engineers, and web development enthusiasts will also find valuable information and practical guidelines for developing smarter websites with AI. To get the most out of this book, it is recommended that you have basic knowledge of programming languages such as HTML, CSS, and JavaScript, as well as a familiarity with machine learning concepts.\n",
"You can read more at https://play.google.com/store/books/details?id=FzYZEQAAQBAJ&source=gbs_api\n",
"\n",
"3. \"Artificial Intelligence for Students\" by Vibha Pandey: A multifaceted approach to develop an understanding of AI and its potential applications KEY FEATURES ● AI-informed focuses on AI foundation, applications, and methodologies. ● AI-inquired focuses on computational thinking and bias awareness. ● AI-innovate focuses on creative and critical thinking and the Capstone project. DESCRIPTION AI is a discipline in Computer Science that focuses on developing intelligent machines, machines that can learn and then teach themselves. If you are interested in AI, this book can definitely help you prepare for future careers in AI and related fields. The book is aligned with the CBSE course, which focuses on developing employability and vocational competencies of students in skill subjects. The book is an introduction to the basics of AI. It is divided into three parts AI-informed, AI-inquired and AI-innovate. It will help you understand AI's implications on society and the world. You will also develop a deeper understanding of how it works and how it can be used to solve complex real-world problems. Additionally, the book will also focus on important skills such as problem scoping, goal setting, data analysis, and visualization, which are essential for success in AI projects. Lastly, you will learn how decision trees, neural networks, and other AI concepts are commonly used in real-world applications. By the end of the book, you will develop the skills and competencies required to pursue a career in AI. WHAT YOU WILL LEARN ● Get familiar with the basics of AI and Machine Learning. ● Understand how and where AI can be applied. ● Explore different applications of mathematical methods in AI. ● Get tips for improving your skills in Data Storytelling. ● Understand what is AI bias and how it can affect human rights. WHO THIS BOOK IS FOR This book is for CBSE class XI and XII students who want to learn and explore more about AI. Basic knowledge of Statistical concepts, Algebra, and Plotting of equations is a must. TABLE OF CONTENTS 1. Introduction: AI for Everyone 2. AI Applications and Methodologies 3. Mathematics in Artificial Intelligence 4. AI Values (Ethical Decision-Making) 5. Introduction to Storytelling 6. Critical and Creative Thinking 7. Data Analysis 8. Regression 9. Classification and Clustering 10. AI Values (Bias Awareness) 11. Capstone Project 12. Model Lifecycle (Knowledge) 13. Storytelling Through Data 14. AI Applications in Use in Real-World\n",
"You can read more at https://play.google.com/store/books/details?id=ptq1EAAAQBAJ&source=gbs_api\n",
"\n",
"4. \"The AI Book\" by Ivana Bartoletti, Anne Leslie and Shân M. Millie: Written by prominent thought leaders in the global fintech space, The AI Book aggregates diverse expertise into a single, informative volume and explains what artifical intelligence really means and how it can be used across financial services today. Key industry developments are explained in detail, and critical insights from cutting-edge practitioners offer first-hand information and lessons learned. Coverage includes: · Understanding the AI Portfolio: from machine learning to chatbots, to natural language processing (NLP); a deep dive into the Machine Intelligence Landscape; essentials on core technologies, rethinking enterprise, rethinking industries, rethinking humans; quantum computing and next-generation AI · AI experimentation and embedded usage, and the change in business model, value proposition, organisation, customer and co-worker experiences in todays Financial Services Industry · The future state of financial services and capital markets whats next for the real-world implementation of AITech? · The innovating customer users are not waiting for the financial services industry to work out how AI can re-shape their sector, profitability and competitiveness · Boardroom issues created and magnified by AI trends, including conduct, regulation & oversight in an algo-driven world, cybersecurity, diversity & inclusion, data privacy, the unbundled corporation & the future of work, social responsibility, sustainability, and the new leadership imperatives · Ethical considerations of deploying Al solutions and why explainable Al is so important\n",
"You can read more at http://books.google.ca/books?id=oE3YDwAAQBAJ&dq=ai&hl=&source=gbs_api\n",
"\n",
"5. \"Artificial Intelligence in Society\" by OECD: The artificial intelligence (AI) landscape has evolved significantly from 1950 when Alan Turing first posed the question of whether machines can think. Today, AI is transforming societies and economies. It promises to generate productivity gains, improve well-being and help address global challenges, such as climate change, resource scarcity and health crises.\n",
"You can read more at https://play.google.com/store/books/details?id=eRmdDwAAQBAJ&source=gbs_api'"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tool.run(\"ai\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### [Invoke directly with args](/docs/concepts/#invoke-with-just-the-arguments)\n",
"\n",
"See below for an direct invocation example."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"from langchain_community.tools.google_books import GoogleBooksQueryRun\n",
"from langchain_community.utilities.google_books import GoogleBooksAPIWrapper\n",
"\n",
"os.environ[\"GOOGLE_BOOKS_API_KEY\"] = \"<your Google Books API key>\"\n",
"tool = GoogleBooksQueryRun(api_wrapper=GoogleBooksAPIWrapper())\n",
"\n",
"tool.run(\"ai\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### [Invoke with ToolCall](/docs/concepts/#invoke-with-toolcall)\n",
"\n",
"See below for a tool call example."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"from langchain_community.tools.google_books import GoogleBooksQueryRun\n",
"from langchain_community.utilities.google_books import GoogleBooksAPIWrapper\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import PromptTemplate\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()\n",
"os.environ[\"GOOGLE_BOOKS_API_KEY\"] = \"<your Google Books API key>\"\n",
"\n",
"tool = GoogleBooksQueryRun(api_wrapper=GoogleBooksAPIWrapper())\n",
"llm = ChatOpenAI(model=\"gpt-4o-mini\")\n",
"prompt = PromptTemplate.from_template(\n",
" \"Return the keyword, and only the keyword, that the user is looking for from this text: {text}\"\n",
")\n",
"\n",
"\n",
"def suggest_books(query):\n",
" chain = prompt | llm | StrOutputParser()\n",
" keyword = chain.invoke({\"text\": query})\n",
" return tool.run(keyword)\n",
"\n",
"\n",
"suggestions = suggest_books(\"I need some information on AI\")\n",
"print(suggestions)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"See the below example for chaining."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"from langchain import hub\n",
"from langchain.agents import AgentExecutor, create_tool_calling_agent\n",
"from langchain_community.tools.google_books import GoogleBooksQueryRun\n",
"from langchain_community.utilities.google_books import GoogleBooksAPIWrapper\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()\n",
"os.environ[\"GOOGLE_BOOKS_API_KEY\"] = \"<your Google Books API key>\"\n",
"\n",
"tool = GoogleBooksQueryRun(api_wrapper=GoogleBooksAPIWrapper())\n",
"llm = ChatOpenAI(model=\"gpt-4o-mini\")\n",
"\n",
"instructions = \"\"\"You are a book suggesting assistant.\"\"\"\n",
"base_prompt = hub.pull(\"langchain-ai/openai-functions-template\")\n",
"prompt = base_prompt.partial(instructions=instructions)\n",
"\n",
"tools = [tool]\n",
"agent = create_tool_calling_agent(llm, tools, prompt)\n",
"agent_executor = AgentExecutor(\n",
" agent=agent,\n",
" tools=tools,\n",
" verbose=True,\n",
")\n",
"\n",
"agent_executor.invoke({\"input\": \"Can you recommend me some books related to ai?\"})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"The Google Books API can be found here: [https://developers.google.com/books](https://developers.google.com/books)"
]
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -118,7 +118,7 @@
"source": [
"## Create the agent\n",
"\n",
"Now that we have defined the tools, we can create the agent. We will be using an OpenAI Functions agent - for more information on this type of agent, as well as other options, see [this guide](/docs/concepts#agents)\n",
"Now that we have defined the tools, we can create the agent. We will be using an OpenAI Functions agent - for more information on this type of agent, as well as other options, see [this guide](/docs/concepts/agents)\n",
"\n",
"First, we choose the LLM we want to be guiding the agent."
]
@@ -176,7 +176,7 @@
"id": "f8014c9d",
"metadata": {},
"source": [
"Now, we can initialize the agent with the LLM, the prompt, and the tools. The agent is responsible for taking in input and deciding what actions to take. Crucially, the Agent does not execute those actions - that is done by the AgentExecutor (next step). For more information about how to think about these components, see our [conceptual guide](/docs/concepts#agents)"
"Now, we can initialize the agent with the LLM, the prompt, and the tools. The agent is responsible for taking in input and deciding what actions to take. Crucially, the Agent does not execute those actions - that is done by the AgentExecutor (next step). For more information about how to think about these components, see our [conceptual guide](/docs/concepts/agents)"
]
},
{
@@ -196,7 +196,7 @@
"id": "1a58c9f8",
"metadata": {},
"source": [
"Finally, we combine the agent (the brains) with the tools inside the AgentExecutor (which will repeatedly call the agent and execute tools). For more information about how to think about these components, see our [conceptual guide](/docs/concepts#agents)"
"Finally, we combine the agent (the brains) with the tools inside the AgentExecutor (which will repeatedly call the agent and execute tools). For more information about how to think about these components, see our [conceptual guide](/docs/concepts/agents)"
]
},
{

View File

@@ -101,7 +101,7 @@
"source": [
"## Instantiating a Browser Toolkit\n",
"\n",
"It's always recommended to instantiate using the `from_browser` method so that the "
"It's always recommended to instantiate using the from_browser method so that the browser context is properly initialized and managed, ensuring seamless interaction and resource optimization."
]
},
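{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of this pattern (assuming `playwright` and its browsers are installed):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.agent_toolkits import PlayWrightBrowserToolkit\n",
"from langchain_community.tools.playwright.utils import create_sync_playwright_browser\n",
"\n",
"# from_browser wires the toolkit to a managed browser instance.\n",
"browser = create_sync_playwright_browser()\n",
"toolkit = PlayWrightBrowserToolkit.from_browser(sync_browser=browser)\n",
"tools = toolkit.get_tools()"
]
},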
{

View File

@@ -7,15 +7,15 @@
"source": [
"# Vectara\n",
"\n",
"[Vectara](https://vectara.com/) provides a Trusted Generative AI platform, allowing organizations to rapidly create a ChatGPT-like experience (an AI assistant) which is grounded in the data, documents, and knowledge that they have (technically, it is Retrieval-Augmented-Generation-as-a-service). \n",
"[Vectara](https://vectara.com/) is the trusted AI Assistant and Agent platform which focuses on enterprise readiness for mission-critical applications.\n",
"\n",
"Vectara serverless RAG-as-a-service provides all the components of RAG behind an easy-to-use API, including:\n",
"1. A way to extract text from files (PDF, PPT, DOCX, etc)\n",
"2. ML-based chunking that provides state of the art performance.\n",
"3. The [Boomerang](https://vectara.com/how-boomerang-takes-retrieval-augmented-generation-to-the-next-level-via-grounded-generation/) embeddings model.\n",
"4. Its own internal vector database where text chunks and embedding vectors are stored.\n",
"5. A query service that automatically encodes the query into embedding, and retrieves the most relevant text segments (including support for [Hybrid Search](https://docs.vectara.com/docs/api-reference/search-apis/lexical-matching) and [MMR](https://vectara.com/get-diverse-results-and-comprehensive-summaries-with-vectaras-mmr-reranker/))\n",
"7. An LLM to for creating a [generative summary](https://docs.vectara.com/docs/learn/grounded-generation/grounded-generation-overview), based on the retrieved documents (context), including citations.\n",
"5. A query service that automatically encodes the query into embedding, and retrieves the most relevant text segments (including support for [Hybrid Search](https://docs.vectara.com/docs/api-reference/search-apis/lexical-matching) as well as multiple reranking options such as the [multi-lingual relevance reranker](https://www.vectara.com/blog/deep-dive-into-vectara-multilingual-reranker-v1-state-of-the-art-reranker-across-100-languages), [MMR](https://vectara.com/get-diverse-results-and-comprehensive-summaries-with-vectaras-mmr-reranker/), [UDF reranker](https://www.vectara.com/blog/rag-with-user-defined-functions-based-reranking). \n",
"6. An LLM to for creating a [generative summary](https://docs.vectara.com/docs/learn/grounded-generation/grounded-generation-overview), based on the retrieved documents (context), including citations.\n",
"\n",
"See the [Vectara API documentation](https://docs.vectara.com/docs/) for more information on how to use the API.\n",
"\n",
@@ -32,7 +32,7 @@
"# Getting Started\n",
"\n",
"To get started, use the following steps:\n",
"1. If you don't already have one, [Sign up](https://www.vectara.com/integrations/langchain) for your free Vectara account. Once you have completed your sign up you will have a Vectara customer ID. You can find your customer ID by clicking on your name, on the top-right of the Vectara console window.\n",
"1. If you don't already have one, [Sign up](https://www.vectara.com/integrations/langchain) for your free Vectara trial. Once you have completed your sign up you will have a Vectara customer ID. You can find your customer ID by clicking on your name, on the top-right of the Vectara console window.\n",
"2. Within your account you can create one or more corpora. Each corpus represents an area that stores text data upon ingest from input documents. To create a corpus, use the **\"Create Corpus\"** button. You then provide a name to your corpus as well as a description. Optionally you can define filtering attributes and apply some advanced options. If you click on your created corpus, you can see its name and corpus ID right on the top.\n",
"3. Next you'll need to create API keys to access the corpus. Click on the **\"Access Control\"** tab in the corpus view and then the **\"Create API Key\"** button. Give your key a name, and choose whether you want query-only or query+index for your key. Click \"Create\" and you now have an active API key. Keep this key confidential. \n",
"\n",

View File

@@ -8,8 +8,8 @@ sidebar_class_name: hidden
**LangChain** is a framework for developing applications powered by large language models (LLMs).
LangChain simplifies every stage of the LLM application lifecycle:
- **Development**: Build your applications using LangChain's open-source [building blocks](/docs/concepts#langchain-expression-language-lcel), [components](/docs/concepts), and [third-party integrations](/docs/integrations/providers/).
Use [LangGraph](/docs/concepts/#langgraph) to build stateful agents with first-class streaming and human-in-the-loop support.
- **Development**: Build your applications using LangChain's open-source [building blocks](/docs/concepts/lcel), [components](/docs/concepts), and [third-party integrations](/docs/integrations/providers/).
Use [LangGraph](/docs/concepts/architecture/#langgraph) to build stateful agents with first-class streaming and human-in-the-loop support.
- **Productionization**: Use [LangSmith](https://docs.smith.langchain.com/) to inspect, monitor and evaluate your chains, so that you can continuously optimize and deploy with confidence.
- **Deployment**: Turn your LangGraph applications into production-ready APIs and Assistants with [LangGraph Cloud](https://langchain-ai.github.io/langgraph/cloud/).
@@ -19,8 +19,8 @@ import useBaseUrl from '@docusaurus/useBaseUrl';
<ThemedImage
alt="Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers."
sources={{
light: useBaseUrl('/svg/langchain_stack_062024.svg'),
dark: useBaseUrl('/svg/langchain_stack_062024_dark.svg'),
light: useBaseUrl('/svg/langchain_stack_112024.svg'),
dark: useBaseUrl('/svg/langchain_stack_112024_dark.svg'),
}}
style={{ width: "100%" }}
title="LangChain Framework Overview"
@@ -29,9 +29,9 @@ import useBaseUrl from '@docusaurus/useBaseUrl';
Concretely, the framework consists of the following open-source libraries:
- **`langchain-core`**: Base abstractions and LangChain Expression Language.
- **`langchain-community`**: Third party integrations.
- Partner packages (e.g. **`langchain-openai`**, **`langchain-anthropic`**, etc.): Some integrations have been further split into their own lightweight packages that only depend on **`langchain-core`**.
- Integration packages (e.g. **`langchain-openai`**, **`langchain-anthropic`**, etc.): Important integrations have been split into lightweight packages that are co-maintained by the LangChain team and the integration developers.
- **`langchain`**: Chains, agents, and retrieval strategies that make up an application's cognitive architecture.
- **`langchain-community`**: Third-party integrations that are community maintained.
- **[LangGraph](https://langchain-ai.github.io/langgraph)**: Build robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph. Integrates smoothly with LangChain, but can be used without it.
- **[LangServe](/docs/langserve)**: Deploy LangChain chains as REST APIs.
- **[LangSmith](https://docs.smith.langchain.com)**: A developer platform that lets you debug, test, evaluate, and monitor LLM applications.
@@ -62,7 +62,8 @@ Explore the full list of LangChain tutorials [here](/docs/tutorials), and check
[Here](/docs/how_to) you'll find short answers to “How do I….?” types of questions.
These how-to guides don't cover topics in depth; you'll find that material in the [Tutorials](/docs/tutorials) and the [API Reference](https://python.langchain.com/api_reference/).
However, these guides will help you quickly accomplish common tasks.
However, these guides will help you quickly accomplish common tasks using [chat models](/docs/how_to/#chat-models),
[vector stores](/docs/how_to/#vector-stores), and other common LangChain components.
Check out [LangGraph-specific how-tos here](https://langchain-ai.github.io/langgraph/how-tos/).
@@ -72,6 +73,13 @@ Introductions to all the key parts of LangChain you'll need to know! [Here](/d
For a deeper dive into LangGraph concepts, check out [this page](https://langchain-ai.github.io/langgraph/concepts/).
## [Integrations](integrations/providers/index.mdx)
LangChain is part of a rich ecosystem of tools that integrate with our framework and build on top of it.
If you're looking to get up and running quickly with [chat models](/docs/integrations/chat/), [vector stores](/docs/integrations/vectorstores/),
or other LangChain components from a specific provider, check out our growing list of [integrations](/docs/integrations/providers/).
## [API reference](https://python.langchain.com/api_reference/)
Head to the reference section for full documentation of all classes and methods in the LangChain Python packages.
@@ -91,8 +99,5 @@ See what changed in v0.3, learn how to migrate legacy code, read up on our versi
### [Security](/docs/security)
Read up on [security](/docs/security) best practices to make sure you're developing safely with LangChain.
### [Integrations](integrations/providers/index.mdx)
LangChain is part of a rich ecosystem of tools that integrate with our framework and build on top of it. Check out our growing list of [integrations](/docs/integrations/providers/).
### [Contributing](contributing/index.mdx)
Check out the developer's guide for guidelines on contributing and help getting your dev environment set up.

View File

@@ -1,6 +1,6 @@
# INVALID_PROMPT_INPUT
A [prompt template](/docs/concepts#prompt-templates) received missing or invalid input variables.
A [prompt template](/docs/concepts/prompt_templates) received missing or invalid input variables.
## Troubleshooting
@@ -8,7 +8,7 @@ The following may help resolve this error:
- Double-check your prompt template to ensure that it is correct.
- If you are using the default f-string format and you are using curly braces `{` anywhere in your template, they should be double escaped like this: `{{` (and if you want to render a double curly brace, you should use four curly braces: `{{{{`).
- If you are using a [`MessagesPlaceholder`](/docs/concepts/messages/#messagesplaceholder), make sure that you are passing in an array of messages or message-like objects.
- If you are using a [`MessagesPlaceholder`](/docs/concepts/prompt_templates/#messagesplaceholder), make sure that you are passing in an array of messages or message-like objects.
- If you are using shorthand tuples to declare your prompt template, make sure that the variable name is wrapped in curly braces (`["placeholder", "{messages}"]`).
- Try viewing the inputs into your prompt template using [LangSmith](https://docs.smith.langchain.com/) or log statements to confirm they appear as expected.
- If you are pulling a prompt from the [LangChain Prompt Hub](https://smith.langchain.com/prompts), try pulling and logging it or running it in isolation with a sample input to confirm that it is what you expect.
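
A minimal sketch of the double-escaping rule above, assuming the default f-string template format:

```python
from langchain_core.prompts import PromptTemplate

# "{{" and "}}" render as literal braces; {question} stays a real input variable.
template = PromptTemplate.from_template(
    'Reply with JSON like {{"answer": "..."}} to this question: {question}'
)
print(template.format(question="What is 2 + 2?"))
# Reply with JSON like {"answer": "..."} to this question: What is 2 + 2?
```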

View File

@@ -8,7 +8,7 @@
"\n",
"You are passing too many, too few, or mismatched [`ToolMessages`](https://api.js.langchain.com/classes/_langchain_core.messages_tool.ToolMessage.html) to a model.\n",
"\n",
"When [using a model to call tools](/docs/concepts#functiontool-calling), the [`AIMessage`](https://api.js.langchain.com/classes/_langchain_core.messages.AIMessage.html)\n",
"When [using a model to call tools](/docs/concepts/tool_calling), the [`AIMessage`](https://api.js.langchain.com/classes/_langchain_core.messages.AIMessage.html)\n",
"the model responds with will contain a `tool_calls` array. To continue the flow, the next messages you pass back to the model must\n",
"be exactly one `ToolMessage` for each item in that array containing the result of that tool call. Each `ToolMessage` must have a `tool_call_id` field\n",
"that matches one of the `tool_calls` on the `AIMessage`.\n",

View File

@@ -38,7 +38,7 @@
"metadata": {},
"source": [
"These include OpenAI style message objects (`{ role: \"user\", content: \"Hello world!\" }`),\n",
"tuples, and plain strings (which are converted to [`HumanMessages`](/docs/concepts#humanmessage)).\n",
"tuples, and plain strings (which are converted to [`HumanMessages`](/docs/concepts/messages/#humanmessage)).\n",
"\n",
"If one of these modules receives a value outside of one of these formats, you will receive an error like the following:"
]

View File

@@ -6,7 +6,7 @@
"source": [
"# OUTPUT_PARSING_FAILURE\n",
"\n",
"An [output parser](/docs/concepts#output-parsers) was unable to handle model output as expected.\n",
"An [output parser](/docs/concepts/output_parsers) was unable to handle model output as expected.\n",
"\n",
"To illustrate this, let's say you have an output parser that expects a chat model to output JSON surrounded by a markdown code tag (triple backticks). Here would be an example of good input:"
]

Some files were not shown because too many files have changed in this diff.