Compare commits

...

125 Commits

Author SHA1 Message Date
Erick Friis
d3252b7417 core: release 0.3.19 (#28137) 2024-11-15 18:15:28 +00:00
ccurme
585479e1ff docs: add legacy LLM page to concepts index (#28135)
This page was previously not discoverable.
2024-11-15 13:06:48 -05:00
Jorge Piedrahita Ortiz
39956a3ef0 community: sambanovacloud llm integration (#27526)
- **Description:** SambaNovaCloud LLM integration added; previously only the
chat model integration existed

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-11-15 16:58:11 +00:00
Elham Badri
d696728278 partners/ollama: Enabled Token Level Streaming when Using Bind Tools for ChatOllama (#27689)
**Description:** The issue concerns the unexpected behavior observed when
using the `bind_tools` method in LangChain's ChatOllama. When tools are
not bound, the llm.stream() method works as expected, returning
incremental chunks of content, which is crucial for real-time
applications such as conversational agents and live feedback systems.
However, when bind_tools([]) is used, the streaming behavior changes,
causing the output to be delivered in full chunks rather than
incrementally. This change negatively impacts the user experience by
breaking the real-time nature of the streaming mechanism.
**Issue:** #26971
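A minimal sketch of the behavior described above (assuming a locally running Ollama server and the `langchain-ollama` package; the model name is illustrative):

```python
from langchain_ollama import ChatOllama

llm = ChatOllama(model="llama3.1")  # assumes this model is pulled locally

# Without bound tools, tokens arrive incrementally:
for chunk in llm.stream("Tell me a short joke"):
    print(chunk.content, end="", flush=True)

# Before this fix, binding tools (even an empty list) collapsed the stream
# into one full chunk; with it, chunks remain token-level:
llm_with_tools = llm.bind_tools([])
for chunk in llm_with_tools.stream("Tell me a short joke"):
    print(chunk.content, end="", flush=True)
```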

---------

Co-authored-by: 4meyDam1e <amey.damle@mail.utoronto.ca>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-11-15 11:36:27 -05:00
ccurme
776e3271e3 standard-tests[patch]: add test for async tool calling (#28133) 2024-11-15 16:09:50 +00:00
Vadym Barda
ed4952e475 core[patch]: add caching to get_function_nonlocals (#28131) 2024-11-15 07:53:53 -08:00
ccurme
74438f3ae8 docs: add links to concept guides in how-tos (#28118) 2024-11-15 09:44:11 -05:00
ccurme
ef2dc9eae5 docs: update "quickstart" tutorial (#28096)
- Update language / add links in places
- De-emphasize output parsers
- remove deployment section
2024-11-14 14:38:45 -05:00
ccurme
f1222739f8 core[patch]: support numpy 2 (#27991) 2024-11-14 13:08:57 -05:00
Zapiron
cff70c2d67 docs: Add hyperlink to immediately show the table at the bottom of th… (#28102)
Added a hyperlink which can be clicked so users can immediately see the
table and find out the various example selector methods
2024-11-14 09:52:18 -05:00
Zapiron
4b641f87ae English Update and fixed a duplicate "the" (#27981)
Fixed a duplicate "the" in the documentation and made the documentation
generally easier to understand
2024-11-13 14:36:56 -05:00
Erick Friis
f6d34585f0 docs: throw on broken anchors (#27773)
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-11-13 14:29:27 -05:00
Zapiron
7bd9c8cba3 docs: Updated link to ensure reference to the correct header for ToolNode (#28088)
When the `ToolNode` hyperlink was clicked, it did not automatically scroll
to the section due to an incorrect reference to the heading id in the
LangGraph documentation.
2024-11-13 14:19:55 -05:00
ccurme
940e93e891 docs: add docs on StrOutputParser (#28089)
Think it's worth adding a quick guide and including in the table in the
concepts page. `StrOutputParser` can make it easier to deal with the
union type for message content. For example, ChatAnthropic with bound
tools will generate string content if there are no tool calls and
`list[dict]` content otherwise.

I'm also considering removing the output parser section from the
["quickstart"
tutorial](https://python.langchain.com/docs/tutorials/llm_chain/); we
can link to this guide instead.
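A short sketch of the pattern described above (model choice is illustrative):

```python
from langchain_anthropic import ChatAnthropic
from langchain_core.output_parsers import StrOutputParser

llm = ChatAnthropic(model="claude-3-5-sonnet-20240620")

# message.content may be str or list[dict] depending on whether tools fired;
# StrOutputParser normalizes the chain output to a plain string.
chain = llm | StrOutputParser()
text = chain.invoke("Write a one-line haiku about autumn.")
assert isinstance(text, str)
```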
2024-11-13 14:16:50 -05:00
Vadym Barda
6ec688cf2b xai[patch]: update core (#28092) 2024-11-13 17:51:51 +00:00
Artur Barseghyan
2ab5673eb1 docs: Add example using TypedDict in structured outputs how-to guide (#27415)
For me, the [Pydantic
example](https://python.langchain.com/docs/how_to/structured_output/#choosing-between-multiple-schemas)
does not work (tested on various Python versions from 3.10 to 3.12, and
`Pydantic` versions from 2.7 to 2.9).

The `TypedDict` example (added in this PR) does.

----

Additionally, fixed an error in [Using PydanticOutputParser
example](https://python.langchain.com/docs/how_to/structured_output/#using-pydanticoutputparser).

Was:

```python
query = "Anna is 23 years old and she is 6 feet tall"

print(prompt.invoke(query).to_string())
```

Corrected to:

```python
query = "Anna is 23 years old and she is 6 feet tall"

print(prompt.invoke({"query": query}).to_string())
```

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-11-13 16:53:37 +00:00
Bharat Ramanathan
3e972faf81 community: chore warn deprecate the tracer (#27159)
- **Description:** This PR deprecates the wandb tracer in favor of the new
[WeaveTracer](https://weave-docs.wandb.ai/guides/integrations/langchain#using-weavetracer)
in W&B
- **Dependencies:** No dependencies, just a deprecation warning.
- **Twitter handle:** @parambharat


@baskaryan
2024-11-13 11:33:34 -05:00
Erick Friis
76e0127539 core: release 0.3.18 (#28070) 2024-11-13 16:19:13 +00:00
Eric Pinzur
eadc2f6a90 core: added DeleteResponse to the module (#28069)
Description:
* added `DeleteResponse` to the `langchain_core.indexing` module, for
implementing DocumentIndex classes.
2024-11-13 11:08:08 -05:00
ZhangShenao
c89e7ce8b5 core[patch]: Update doc-strings in callbacks (#28073)
- Fix api docs
2024-11-13 11:07:15 -05:00
Tom Pham
965286db3e docs: fix spelling error (#28075)
Fix spelling error in docs
2024-11-13 11:06:13 -05:00
Zapiron
892694d735 docs: Fixed broken link for AI models introduction (#28079)
Fixed broken redirect to the introduction to AI models in the Forefront
platform
2024-11-13 11:03:40 -05:00
Vruddhi Shah
beef4c4d62 Proofreading and Editing Report for Migration Guide (#28084)
Corrections and Suggestions for Migrating LangChain Code Documentation

2024-11-13 11:03:09 -05:00
Zapiron
2cec957274 docs: Fix missing space between the words API Reference (#28087)
Added the missing space between the words "API Reference"
2024-11-13 11:02:46 -05:00
Zapiron
da7c79b794 DOCS: Concept Section Improvements & Updates (#27733)
Edited mainly the `Concepts` section in the LangChain documentation.

Overview:
* Updated some explanations to make the points clearer and added missing
words where needed.
* Rephrased some sentences to make them shorter and more concise.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2024-11-13 11:01:27 -05:00
Zapiron
02de346f6d docs: Fixed additional 'the' and remove 'turns' to make explanation clearer (#28082)
Removed a duplicate 'the' and the word 'turns' to make the explanation
clearer

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-11-13 15:15:39 +00:00
Zapiron
298ebeee4e docs: Fixed broken link for Cloudfare docs for the models available (#28080)
Fixed the broken redirect to see all the Cloudflare models
2024-11-13 10:07:33 -05:00
Zapiron
8241c0df23 docs: Fixed wrong link redirect from JS ToolMessage to Python ToolMes… (#28083)
Fixed the link to ToolMessage from the JS documentation to Python
documentation
2024-11-13 10:05:19 -05:00
Zapiron
77c8a5c70c docs: Fixed broken link to the Luminous model family introduction (#28078)
The Luminous model hyperlink at the start of the page was broken.
Updated it to the latest link used by the integration
2024-11-13 10:04:50 -05:00
Vadym Barda
09e85c7c4b xai[patch]: update dependencies (#28067) 2024-11-12 16:15:17 -05:00
am-kinetica
a646f1c383 Handled empty search result handling and updated the notebook (#27914)
- [ ] **PR title**: "community: updated Kinetica vectorstore"

  - **Description:** Handled empty search results
  - **Issue:** previously threw an error if the search results were empty

@efriis
2024-11-12 13:03:49 -08:00
ccurme
00e7b2dada anthropic[patch]: add examples to API ref (#28065) 2024-11-12 20:17:02 +00:00
Vadym Barda
48ee322a78 partners: add xAI chat integration (#28032) 2024-11-12 15:11:29 -05:00
ccurme
2898b95ca7 anthropic[major]: release 0.3.0 (#28063) 2024-11-12 14:58:00 -05:00
ccurme
5eaa0e8c45 openai[patch]: release 0.2.8 (#28062) 2024-11-12 14:57:11 -05:00
ccurme
15b7dd3ad7 community[patch]: release 0.3.7 (#28061) 2024-11-12 19:54:58 +00:00
ccurme
5460096086 core[patch]: release 0.3.17 (#28060) 2024-11-12 19:38:56 +00:00
ccurme
1538ee17f9 anthropic[major]: support python 3.13 (#27916)
Last week Anthropic released version 0.39.0 of its python sdk, which
enabled support for Python 3.13. This release deleted a legacy
`client.count_tokens` method, which we currently access during init of
the `Anthropic` LLM. Anthropic has replaced this functionality with the
[client.beta.messages.count_tokens()
API](https://github.com/anthropics/anthropic-sdk-python/pull/726).

To enable support for `anthropic >= 0.39.0` and Python 3.13, here we
drop support for the legacy token counting method, and add support for
the new method via `ChatAnthropic.get_num_tokens_from_messages`.

To fully support the token counting API, we update the signature of
`get_num_tokens_from_messages` to accept tools everywhere.
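A hedged sketch of the resulting usage (the tool and model name are illustrative):

```python
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import HumanMessage

llm = ChatAnthropic(model="claude-3-5-sonnet-20240620")

def get_weather(location: str) -> str:
    """Get the current weather for a location."""
    return "sunny"

# Token counting now goes through Anthropic's count_tokens API and can
# account for bound tools:
n_tokens = llm.get_num_tokens_from_messages(
    [HumanMessage("What's the weather in Paris?")],
    tools=[get_weather],
)
print(n_tokens)
```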

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-11-12 14:31:07 -05:00
Syed Hyder Zaidi
759b6ed17a docs: Fix typo in Tavily Search example (#28034)
Changed "demon" to "demo" in the code comment for clarity.

PR Title
docs: Fix typo in Tavily Search example

PR Message
Description:
This PR fixes a typo in the code comment of the Tavily Search
documentation. Changed "demon" to "demo" for clarity and to avoid
confusion.

Issue:
No specific issue was mentioned, but this is a minor improvement in
documentation.

Dependencies:
No additional dependencies required.
2024-11-12 13:58:13 -05:00
ZhangShenao
ca7375ac20 Improvement[Community]Improve Embeddings API (#28038)
- Fix `BaichuanTextEmbeddings` api url
- Remove unused params in api doc
- Fix word spelling
2024-11-12 13:57:35 -05:00
Aditya Anand
e290736696 Update streaming.mdx (#28055)
fix: correct grammar in documentation for streaming modes

Updated sentence to clarify usage of "choose" in "When using the stream
and astream methods with LangGraph, you can choose one or more streaming
modes..." for better readability.

2024-11-12 16:43:12 +00:00
Aditya Anand
f9212c77e7 DOC: Fix typo in documentation for streaming modes, correcting 'witte… (#28052)
…n' to 'written' in 'Emit custom output written using LangGraph’s
StreamWriter.'

### Changes:
- Corrected the typo in the phrase 'Emit custom output witten using
LangGraph’s StreamWriter.' to 'Emit custom output written using
LangGraph’s StreamWriter.'
- Enhanced the clarity of the documentation surrounding LangGraph’s
streaming modes, specifically around the StreamWriter functionality.
- Provided additional context and emphasis on the role of the
StreamWriter class in handling custom output.

### Issue Reference:
- GitHub issue: https://github.com/langchain-ai/langchain/issues/28051

This update addresses the issue raised regarding the incorrect spelling
and aims to improve the clarity of the streaming mode documentation for
better user understanding.

2024-11-12 11:42:30 -05:00
Bagatur
139881b108 openai[patch]: fix azure oai stream check (#28048) 2024-11-12 15:42:06 +00:00
Bagatur
9611f0b55d openai[patch]: Release 0.2.7 (#28047) 2024-11-12 15:16:15 +00:00
Bagatur
5c14e1f935 community[patch]: Release 0.3.6 (#28046) 2024-11-12 15:15:07 +00:00
Bagatur
9ebd7ebed8 core[patch]: Release 0.3.16 (#28045) 2024-11-12 14:57:15 +00:00
Changyong Um
9484cc0962 community[docs]: modify parameter for the LoRA adapter on the vllm page (#27930)
**Description:**
This PR modifies the documentation for configuring vLLM with the LoRA
adapter. The updates provide clear instructions on how to set up the
LoRA adapter when using vLLM.

- before
```python
VLLM(..., enable_lora=True)
```
- after
```python
VLLM(
    ...,
    vllm_kwargs={"enable_lora": True},
)
```
This change clarifies that users should use `vllm_kwargs` to enable the
LoRA adapter.

Co-authored-by: Um Changyong <changyong.um@sfa.co.kr>
2024-11-11 15:41:56 -05:00
Zapiron
0b85f9035b docs: Makes the phrasing more smooth and reasoning more clear (#28020)
Updated the phrasing and reasoning on the "abstraction not receiving
much development" part of the documentation

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-11-11 17:17:29 +00:00
Zapiron
f695b96484 docs:Fixed missing hyperlink and changed AI to LLMs for clarity (#28006)
Changed "AI" to "LLM" in a paragraph
Fixed missing hyperlink for the structured output point
2024-11-11 12:14:29 -05:00
Choy Fuguan
c0f3777657 docs: removed bolding from header (#28001)
Removed an extra ** after a level-two heading
2024-11-11 12:13:02 -05:00
Salman Faroz
44df79cf52 Correcting AzureOpenAI initialization (#28014) 2024-11-11 12:10:59 -05:00
Hammad Randhawa
57fc62323a docs : Update sql_qa.ipynb (#28026)
Documentation text fix:

Changed "DSL query" to "SQL query".
2024-11-11 12:04:09 -05:00
ccurme
922b6b0e46 docs: update some cassettes (#28010) 2024-11-09 21:04:18 +00:00
ccurme
8e91c7ceec docs: add cross-links (#28000)
Mainly to improve visibility of integration pages.
2024-11-09 08:57:58 -05:00
Bagatur
33dbfba08b openai[patch]: default to invoke on o1 stream() (#27983) 2024-11-08 19:12:59 -08:00
Bagatur
503f2487a5 docs: intro nit (#27998) 2024-11-08 11:51:17 -08:00
ccurme
ff2152b115 docs: update tutorials index and add get started guides (#27996) 2024-11-08 14:47:32 -05:00
Eric Pinzur
c421997caa community[patch]: Added type hinting to OpenSearch clients (#27946)
Description:
* When working with OpenSearchVectorSearch to make
OpenSearchGraphVectorStore (coming soon), I noticed that there wasn't
type hinting for the underlying OpenSearch clients. This fixes that
issue.
* Confirmed tests are still passing with code changes.

Note that there is some additional code duplication now, but I think
this approach is cleaner overall.
2024-11-08 11:04:57 -08:00
Zapiron
4c2392e55c docs: fix link in custom tools guide (#27975)
Fixed broken link in tools documentation for `BaseTool`
2024-11-08 09:40:15 -05:00
Zapiron
85925e3164 docs: fix link in tool-calling guide (#27976)
Fix broken BaseTool link in documentation
2024-11-08 09:39:27 -05:00
Zapiron
138f360b25 docs: fix typo in PDF loader guide (#27977)
Fixed duplicate "py" in hyperlink to `pypdf` docs
2024-11-08 09:38:32 -05:00
Saad Makrod
b509747c7f Community: Google Books API Tool (#27307)
## Description

As proposed in our earlier discussion #26977 we have introduced a Google
Books API Tool that leverages the Google Books API found at
[https://developers.google.com/books/docs/v1/using](https://developers.google.com/books/docs/v1/using)
to generate book recommendations.

### Sample Usage

```python
from langchain_community.tools import GoogleBooksQueryRun
from langchain_community.utilities import GoogleBooksAPIWrapper

api_wrapper = GoogleBooksAPIWrapper()
tool = GoogleBooksQueryRun(api_wrapper=api_wrapper)

tool.run('ai')
```

### Sample Output

```txt
Here are 5 suggestions based off your search for books related to ai:

1. "AI's Take on the Stigma Against AI-Generated Content" by Sandy Y. Greenleaf: In a world where artificial intelligence (AI) is rapidly advancing and transforming various industries, a new form of content creation has emerged: AI-generated content. However, despite its potential to revolutionize the way we produce and consume information, AI-generated content often faces a significant stigma. "AI's Take on the Stigma Against AI-Generated Content" is a groundbreaking book that delves into the heart of this issue, exploring the reasons behind the stigma and offering a fresh, unbiased perspective on the topic. Written from the unique viewpoint of an AI, this book provides readers with a comprehensive understanding of the challenges and opportunities surrounding AI-generated content. Through engaging narratives, thought-provoking insights, and real-world examples, this book challenges readers to reconsider their preconceptions about AI-generated content. It explores the potential benefits of embracing this technology, such as increased efficiency, creativity, and accessibility, while also addressing the concerns and drawbacks that contribute to the stigma. As you journey through the pages of this book, you'll gain a deeper understanding of the complex relationship between humans and AI in the realm of content creation. You'll discover how AI can be used as a tool to enhance human creativity, rather than replace it, and how collaboration between humans and machines can lead to unprecedented levels of innovation. Whether you're a content creator, marketer, business owner, or simply someone curious about the future of AI and its impact on our society, "AI's Take on the Stigma Against AI-Generated Content" is an essential read. With its engaging writing style, well-researched insights, and practical strategies for navigating this new landscape, this book will leave you equipped with the knowledge and tools needed to embrace the AI revolution and harness its potential for success. Prepare to have your assumptions challenged, your mind expanded, and your perspective on AI-generated content forever changed. Get ready to embark on a captivating journey that will redefine the way you think about the future of content creation.
Read more at https://play.google.com/store/books/details?id=4iH-EAAAQBAJ&source=gbs_api

2. "AI Strategies For Web Development" by Anderson Soares Furtado Oliveira: From fundamental to advanced strategies, unlock useful insights for creating innovative, user-centric websites while navigating the evolving landscape of AI ethics and security Key Features Explore AI's role in web development, from shaping projects to architecting solutions Master advanced AI strategies to build cutting-edge applications Anticipate future trends by exploring next-gen development environments, emerging interfaces, and security considerations in AI web development Purchase of the print or Kindle book includes a free PDF eBook Book Description If you're a web developer looking to leverage the power of AI in your projects, then this book is for you. Written by an AI and ML expert with more than 15 years of experience, AI Strategies for Web Development takes you on a transformative journey through the dynamic intersection of AI and web development, offering a hands-on learning experience.The first part of the book focuses on uncovering the profound impact of AI on web projects, exploring fundamental concepts, and navigating popular frameworks and tools. As you progress, you'll learn how to build smart AI applications with design intelligence, personalized user journeys, and coding assistants. Later, you'll explore how to future-proof your web development projects using advanced AI strategies and understand AI's impact on jobs. Toward the end, you'll immerse yourself in AI-augmented development, crafting intelligent web applications and navigating the ethical landscape.Packed with insights into next-gen development environments, AI-augmented practices, emerging realities, interfaces, and security governance, this web development book acts as your roadmap to staying ahead in the AI and web development domain. What you will learn Build AI-powered web projects with optimized models Personalize UX dynamically with AI, NLP, chatbots, and recommendations Explore AI coding assistants and other tools for advanced web development Craft data-driven, personalized experiences using pattern recognition Architect effective AI solutions while exploring the future of web development Build secure and ethical AI applications following TRiSM best practices Explore cutting-edge AI and web development trends Who this book is for This book is for web developers with experience in programming languages and an interest in keeping up with the latest trends in AI-powered web development. Full-stack, front-end, and back-end developers, UI/UX designers, software engineers, and web development enthusiasts will also find valuable information and practical guidelines for developing smarter websites with AI. To get the most out of this book, it is recommended that you have basic knowledge of programming languages such as HTML, CSS, and JavaScript, as well as a familiarity with machine learning concepts.
Read more at https://play.google.com/store/books/details?id=FzYZEQAAQBAJ&source=gbs_api

3. "Artificial Intelligence for Students" by Vibha Pandey: A multifaceted approach to develop an understanding of AI and its potential applications KEY FEATURES ● AI-informed focuses on AI foundation, applications, and methodologies. ● AI-inquired focuses on computational thinking and bias awareness. ● AI-innovate focuses on creative and critical thinking and the Capstone project. DESCRIPTION AI is a discipline in Computer Science that focuses on developing intelligent machines, machines that can learn and then teach themselves. If you are interested in AI, this book can definitely help you prepare for future careers in AI and related fields. The book is aligned with the CBSE course, which focuses on developing employability and vocational competencies of students in skill subjects. The book is an introduction to the basics of AI. It is divided into three parts – AI-informed, AI-inquired and AI-innovate. It will help you understand AI's implications on society and the world. You will also develop a deeper understanding of how it works and how it can be used to solve complex real-world problems. Additionally, the book will also focus on important skills such as problem scoping, goal setting, data analysis, and visualization, which are essential for success in AI projects. Lastly, you will learn how decision trees, neural networks, and other AI concepts are commonly used in real-world applications. By the end of the book, you will develop the skills and competencies required to pursue a career in AI. WHAT YOU WILL LEARN ● Get familiar with the basics of AI and Machine Learning. ● Understand how and where AI can be applied. ● Explore different applications of mathematical methods in AI. ● Get tips for improving your skills in Data Storytelling. ● Understand what is AI bias and how it can affect human rights. WHO THIS BOOK IS FOR This book is for CBSE class XI and XII students who want to learn and explore more about AI. Basic knowledge of Statistical concepts, Algebra, and Plotting of equations is a must. TABLE OF CONTENTS 1. Introduction: AI for Everyone 2. AI Applications and Methodologies 3. Mathematics in Artificial Intelligence 4. AI Values (Ethical Decision-Making) 5. Introduction to Storytelling 6. Critical and Creative Thinking 7. Data Analysis 8. Regression 9. Classification and Clustering 10. AI Values (Bias Awareness) 11. Capstone Project 12. Model Lifecycle (Knowledge) 13. Storytelling Through Data 14. AI Applications in Use in Real-World
Read more at https://play.google.com/store/books/details?id=ptq1EAAAQBAJ&source=gbs_api

4. "The AI Book" by Ivana Bartoletti, Anne Leslie and Shân M. Millie: Written by prominent thought leaders in the global fintech space, The AI Book aggregates diverse expertise into a single, informative volume and explains what artifical intelligence really means and how it can be used across financial services today. Key industry developments are explained in detail, and critical insights from cutting-edge practitioners offer first-hand information and lessons learned. Coverage includes: · Understanding the AI Portfolio: from machine learning to chatbots, to natural language processing (NLP); a deep dive into the Machine Intelligence Landscape; essentials on core technologies, rethinking enterprise, rethinking industries, rethinking humans; quantum computing and next-generation AI · AI experimentation and embedded usage, and the change in business model, value proposition, organisation, customer and co-worker experiences in today’s Financial Services Industry · The future state of financial services and capital markets – what’s next for the real-world implementation of AITech? · The innovating customer – users are not waiting for the financial services industry to work out how AI can re-shape their sector, profitability and competitiveness · Boardroom issues created and magnified by AI trends, including conduct, regulation & oversight in an algo-driven world, cybersecurity, diversity & inclusion, data privacy, the ‘unbundled corporation’ & the future of work, social responsibility, sustainability, and the new leadership imperatives · Ethical considerations of deploying Al solutions and why explainable Al is so important
Read more at http://books.google.ca/books?id=oE3YDwAAQBAJ&dq=ai&hl=&source=gbs_api

5. "Artificial Intelligence in Society" by OECD: The artificial intelligence (AI) landscape has evolved significantly from 1950 when Alan Turing first posed the question of whether machines can think. Today, AI is transforming societies and economies. It promises to generate productivity gains, improve well-being and help address global challenges, such as climate change, resource scarcity and health crises.
Read more at https://play.google.com/store/books/details?id=eRmdDwAAQBAJ&source=gbs_api
```

## Issue 

This closes #27276 

## Dependencies

No additional dependencies were added

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-07 15:29:35 -08:00
Massimiliano Pronesti
be3b7f9bae cookbook: add Anthropic's contextual retrieval (#27898)
Hi there, this PR adds a notebook implementing Anthropic's proposed
[Contextual
retrieval](https://www.anthropic.com/news/contextual-retrieval) to
langchain's cookbook.
2024-11-07 14:48:01 -08:00
Erick Friis
733e43eed0 docs: new stack diagram (#27972) 2024-11-07 22:46:56 +00:00
Erick Friis
a073c4c498 templates,docs: leave templates in v0.2 (#27952)
all template installs will now have to declare `--branch v0.2` to make
clear they aren't compatible with langchain 0.3 (most have a pydantic v1
setup). e.g.

```
langchain-cli app add pirate-speak --branch v0.2
```
2024-11-07 22:23:48 +00:00
Erick Friis
8807e6986c docs: ignore case production fork master (#27971) 2024-11-07 13:55:21 -08:00
Shawn Lee
6f368e9eab community: handle chatdeepinfra jsondecode error (#27603)
Fixes #27602 

Added error handling to return empty dict if args is empty string or
None.
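A minimal sketch of the guard described above (the helper name is illustrative, not the actual function in the patch):

```python
import json
from typing import Optional

def _parse_tool_call_args(args: Optional[str]) -> dict:
    # Return an empty dict if args is an empty string or None,
    # instead of letting json.loads raise a JSONDecodeError.
    if not args:
        return {}
    return json.loads(args)
```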

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-07 13:47:19 -08:00
CLOVA Studio 개발
0588bab33e community: fix ClovaXEmbeddings document API link address (#27957)
- **Description:** a 404 error occurs because the `API reference` link path
is incorrect in
`langchain/docs/docs/integrations/text_embedding/naver.ipynb`
- **Issue:** corrected the `API reference` link path.

@vbarda @efriis
2024-11-07 13:46:01 -08:00
Akshata
05fd6a16a9 Add ChatModels wrapper for Cloudflare Workers AI (#27645)
- [x] **PR title**: "community: chat models wrapper for Cloudflare
Workers AI"

- [x] **PR message**:
- **Description:** Add a chat models wrapper for Cloudflare Workers AI.
Enables LangGraph integration via ChatModel for tool usage and agentic
usage.


- [x] **Add tests and docs**

- [x] **Lint and test**

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-11-07 15:34:24 -05:00
Erick Friis
8a5b9bf2ad box: migrate to repo (#27969) 2024-11-07 10:19:22 -08:00
ccurme
1ad49957f5 docs[patch]: update cassettes for sql/csv notebook (#27966) 2024-11-07 11:48:45 -05:00
ccurme
a747dbd24b anthropic[patch]: remove retired model from tests (#27965)
`claude-instant` was [retired
yesterday](https://docs.anthropic.com/en/docs/resources/model-deprecations).
2024-11-07 16:16:29 +00:00
Aksel Joonas Reedi
2cb39270ec community: bytes as a source to AzureAIDocumentIntelligenceLoader (#26618)
- **Description:** This PR adds functionality to pass in in-memory bytes
as a source to `AzureAIDocumentIntelligenceLoader`.
- **Issue:** I needed the functionality, so I added it.
- **Dependencies:** NA
- **Twitter handle:** @akseljoonas if this is a big enough change :)
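A hedged usage sketch (the `bytes_source` parameter name and the other arguments are assumptions for illustration, not confirmed from the diff):

```python
from langchain_community.document_loaders import AzureAIDocumentIntelligenceLoader

with open("report.pdf", "rb") as f:
    pdf_bytes = f.read()  # in-memory bytes instead of a file path or URL

loader = AzureAIDocumentIntelligenceLoader(
    api_endpoint="https://<resource>.cognitiveservices.azure.com/",
    api_key="<key>",
    bytes_source=pdf_bytes,  # parameter name assumed for this sketch
)
docs = loader.load()
```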

---------

Co-authored-by: Aksel Joonas Reedi <aksel@klippa.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-07 03:40:21 +00:00
Martin Triska
7a9149f5dd community: ZeroxPDFLoader (#27800)
# OCR-based PDF loader

This implements the [Zerox](https://github.com/getomni-ai/zerox) PDF
document loader.
Zerox takes a simple but very powerful (though slower and more costly)
approach to parsing PDF documents: it converts the PDF to a series of
images and passes them to a vision model, requesting the contents in
markdown.

It is especially suitable for complex PDFs that are not parsed well by
other alternatives.

## Example use:
```python
import os

from langchain_community.document_loaders.pdf import ZeroxPDFLoader

os.environ["OPENAI_API_KEY"] = ""  # your-api-key

model = "gpt-4o-mini"  # openai model
pdf_url = "https://assets.ctfassets.net/f1df9zr7wr1a/soP1fjvG1Wu66HJhu3FBS/034d6ca48edb119ae77dec5ce01a8612/OpenAI_Sacra_Teardown.pdf"

loader = ZeroxPDFLoader(file_path=pdf_url, model=model)
docs = loader.load()
```

The Zerox library supports a wide range of providers/models. See the Zerox
documentation for details.

- **Dependencies:** `zerox`
- **Twitter handle:** @martintriska1

---------

Co-authored-by: Erick Friis <erickfriis@gmail.com>
2024-11-07 03:14:57 +00:00
Dmitriy Prokopchuk
53b0a99f37 community: Memcached LLM Cache Integration (#27323)
## Description
This PR adds support for Memcached as a usable LLM model cache by adding
the ```MemcachedCache``` implementation relying on the
[pymemcache](https://github.com/pinterest/pymemcache) client.

Unit test-wise, the new integration is generally covered under existing
import testing. All new functionality depends on pymemcache if
instantiated and used, so to comply with the other cache implementations
the PR also adds optional integration tests for ```MemcachedCache```.

Since this is a new integration, documentation is added for Memcached as
an integration and as an LLM Cache.

## Issue
This PR closes #27275 which was originally raised as a discussion in
#27035

## Dependencies
There are no new required dependencies for langchain, but
[pymemcache](https://github.com/pinterest/pymemcache) is required to
instantiate the new ```MemcachedCache```.

## Example Usage
```python3
from langchain.globals import set_llm_cache
from langchain_openai import OpenAI

from langchain_community.cache import MemcachedCache
from pymemcache.client.base import Client

llm = OpenAI(model="gpt-3.5-turbo-instruct", n=2, best_of=2)
set_llm_cache(MemcachedCache(Client('localhost')))

# The first time, it is not yet in cache, so it should take longer
llm.invoke("Which city is the most crowded city in the USA?")

# The second time it is, so it goes faster
llm.invoke("Which city is the most crowded city in the USA?")
```

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-07 03:07:59 +00:00
Siddharth Murching
cfff2a057e community: Update UC toolkit documentation to use LangGraph APIs (#26778)
- **Description:** Update UC toolkit documentation to show an example of
using recommended LangGraph agent APIs before the existing LangChain
AgentExecutor example. Tested by manually running the updated example
notebook
- **Dependencies:** No new dependencies

---------

Signed-off-by: Sid Murching <sid.murching@databricks.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-07 02:47:41 +00:00
ZhangShenao
c2072d909a Improvement[Partner] Improve qdrant vector store (#27251)
- Add static method decorator
- Add args for api doc
- Fix word spelling

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-07 02:42:41 +00:00
Baptiste Pasquier
81f7daa458 community: add InfinityRerank (#27043)
**Description:** 

- Add a Reranker for Infinity server.

**Dependencies:** 

This wrapper uses
[infinity_client](https://github.com/michaelfeil/infinity/tree/main/libs/client_infinity/infinity_client)
to connect to an Infinity server.

**Tests and docs**

- integration test: test_infinity_rerank.py
- example notebook: infinity_rerank.ipynb
[here](https://github.com/baptiste-pasquier/langchain/blob/feat/infinity-rerank/docs/docs/integrations/document_transformers/infinity_rerank.ipynb)
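A hedged sketch based on the example notebook linked above (the client construction and model name are assumptions taken from that notebook, not verified here):

```python
from infinity_client import Client

from langchain_community.document_compressors.infinity_rerank import InfinityRerank
from langchain_core.documents import Document

client = Client(base_url="http://localhost:7997")  # assumes a running Infinity server
reranker = InfinityRerank(client=client, model="BAAI/bge-reranker-base")

docs = [
    Document(page_content="LangChain integrates rerankers as document compressors."),
    Document(page_content="A recipe for sourdough bread."),
]
ranked = reranker.compress_documents(docs, query="How do rerankers work in LangChain?")
```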

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-06 17:26:30 -08:00
Erick Friis
2494deb2a4 infra: remove google creds from release and integration test workflows (#27950) 2024-11-07 00:31:10 +00:00
Martin Triska
90189f5639 community: Allow other than default parsers in SharePointLoader and OneDriveLoader (#27716)
## What this PR does?

### Currently `O365BaseLoader` (and consequently both derived loaders)
are limited to `pdf`, `doc`, `docx` files.
- **Solution: here we introduce _handlers_ attribute that allows for
custom handlers to be passed in. This is done in _dict_ form:**

**Example:**
```python
from langchain_community.document_loaders.parsers.documentloader_adapter import DocumentLoaderAsParser
# PR for DocumentLoaderAsParser here: https://github.com/langchain-ai/langchain/pull/27749
from langchain_community.document_loaders.excel import UnstructuredExcelLoader
# imports for the example parsers below
from langchain_community.document_loaders.parsers.msword import MsWordParser
from langchain_community.document_loaders.parsers.pdf import PDFMinerParser
from langchain_community.document_loaders.parsers.txt import TextParser

xlsx_parser = DocumentLoaderAsParser(UnstructuredExcelLoader, mode="paged")

# create dictionary mapping file types to handlers (parsers)
handlers = {
    "doc": MsWordParser(),
    "pdf": PDFMinerParser(),
    "txt": TextParser(),
    "xlsx": xlsx_parser,
}
loader = SharePointLoader(document_library_id="...",
                            handlers=handlers # pass handlers to SharePointLoader
                            )
documents = loader.load()

# works the same in OneDriveLoader
loader = OneDriveLoader(document_library_id="...",
                            handlers=handlers
                            )
```
This dictionary is then passed to `MimeTypeBasedParser`, the same as in the
[current
implementation](5a2cfb49e0/libs/community/langchain_community/document_loaders/parsers/registry.py (L13)).


### Currently `SharePointLoader` and `OneDriveLoader` are separate
loaders that both inherit from `O365BaseLoader`
However both of these implement the same functionality. The only
differences are:
- `SharePointLoader` requires argument `document_library_id` whereas
`OneDriveLoader` requires `drive_id`. These are just different names for
the same thing.
  - `SharePointLoader` implements significantly more features.
- **Solution: `OneDriveLoader` is replaced with an empty shell just
renaming `drive_id` to `document_library_id` and inheriting from
`SharePointLoader`**

**Dependencies:** None
**Twitter handle:** @martintriska1

2024-11-06 17:44:34 -05:00
takahashi
482c168b3e langchain_core: add file_type option to make file type default as png (#27855)
- **Description:** `langchain_core.runnables.graph_mermaid.draw_mermaid_png`
calls this function, but the Mermaid API returns JPEG by default. To be
consistent, add the option `file_type` with the default `png` type.

- **Add tests and docs:** with this small change, I didn't add tests and docs.

- **Lint and test:** one long sentence was divided into two.
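A small usage sketch of the function in question (the diagram and file name are illustrative):

```python
from langchain_core.runnables.graph_mermaid import draw_mermaid_png

mermaid_syntax = "graph TD;\n    A[Start] --> B[End];"

# With this change, the request to the Mermaid API should default to PNG
# output instead of the API's JPEG default.
png_bytes = draw_mermaid_png(mermaid_syntax)
with open("graph.png", "wb") as f:
    f.write(png_bytes)
```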
2024-11-06 22:37:07 +00:00
Roman Solomatin
0f85dea8c8 langchain-huggingface: use separate kwargs for queries and docs (#27857)
Currently `encode_kwargs` is used both for documents and queries, and this
leads to wrong embeddings. E.g.:
```python
    model_kwargs = {"device": "cuda", "trust_remote_code": True}
    encode_kwargs = {"normalize_embeddings": False, "prompt_name": "s2p_query"}

    model = HuggingFaceEmbeddings(
        model_name="dunzhang/stella_en_400M_v5",
        model_kwargs=model_kwargs,
        encode_kwargs=encode_kwargs,
    )

    query_embedding = np.array(
        model.embed_query("What are some ways to reduce stress?",)
    )
    document_embedding = np.array(
        model.embed_documents(
            [
                "There are many effective ways to reduce stress. Some common techniques include deep breathing, meditation, and physical activity. Engaging in hobbies, spending time in nature, and connecting with loved ones can also help alleviate stress. Additionally, setting boundaries, practicing self-care, and learning to say no can prevent stress from building up.",
                "Green tea has been consumed for centuries and is known for its potential health benefits. It contains antioxidants that may help protect the body against damage caused by free radicals. Regular consumption of green tea has been associated with improved heart health, enhanced cognitive function, and a reduced risk of certain types of cancer. The polyphenols in green tea may also have anti-inflammatory and weight loss properties.",
            ]
        )
    )
    print(model._client.similarity(query_embedding, document_embedding)) # output: tensor([[0.8421, 0.3317]], dtype=torch.float64)
```
But based on the [model
card](https://huggingface.co/dunzhang/stella_en_400M_v5#sentence-transformers),
the expected usage is like this:
```python
    model_kwargs = {"device": "cuda", "trust_remote_code": True}
    encode_kwargs = {"normalize_embeddings": False}
    query_encode_kwargs = {"normalize_embeddings": False, "prompt_name": "s2p_query"}

    model = HuggingFaceEmbeddings(
        model_name="dunzhang/stella_en_400M_v5",
        model_kwargs=model_kwargs,
        encode_kwargs=encode_kwargs,
        query_encode_kwargs=query_encode_kwargs,
    )

    query_embedding = np.array(
        model.embed_query("What are some ways to reduce stress?", )
    )
    document_embedding = np.array(
        model.embed_documents(
            [
                "There are many effective ways to reduce stress. Some common techniques include deep breathing, meditation, and physical activity. Engaging in hobbies, spending time in nature, and connecting with loved ones can also help alleviate stress. Additionally, setting boundaries, practicing self-care, and learning to say no can prevent stress from building up.",
                "Green tea has been consumed for centuries and is known for its potential health benefits. It contains antioxidants that may help protect the body against damage caused by free radicals. Regular consumption of green tea has been associated with improved heart health, enhanced cognitive function, and a reduced risk of certain types of cancer. The polyphenols in green tea may also have anti-inflammatory and weight loss properties.",
            ]
        )
    )
    print(model._client.similarity(query_embedding, document_embedding)) # tensor([[0.8398, 0.2990]], dtype=torch.float64)
```
2024-11-06 17:35:39 -05:00
Bagatur
60123bef67 docs: fix trim_messages docstring (#27948) 2024-11-06 22:25:13 +00:00
murrlincoln
14f1827953 docs: Adding notebook for cdp agentkit toolkit (#27910)
- **Description:** Adding in the first pass of documentation for the CDP
Agentkit Toolkit
    - **Issue:** N/a
    - **Dependencies:** cdp-langchain
    - **Twitter handle:** @CoinbaseDev

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: John Peterson <john.peterson@coinbase.com>
2024-11-06 13:28:27 -08:00
Eric Pinzur
ea0ad917b0 community: added Document.id support to opensearch vectorstore (#27945)
Description:
* Added support for Document.id in the OpenSearch vector store
* Added test cases to match
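A hedged sketch of the new behavior (assumes a running OpenSearch instance; the URL and embedding model are illustrative):

```python
from langchain_community.vectorstores import OpenSearchVectorSearch
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings

store = OpenSearchVectorSearch(
    opensearch_url="http://localhost:9200",
    index_name="demo",
    embedding_function=OpenAIEmbeddings(),
)

# The provided Document.id is now used at indexing time and populated on
# retrieved documents:
store.add_documents([Document(id="doc-1", page_content="hello world")])
hits = store.similarity_search("hello", k=1)
assert hits[0].id == "doc-1"
```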
2024-11-06 15:04:09 -05:00
Hammad Randhawa
75aa82fedc docs: Completed sentence under the heading "Instantiating a Browser … (#27944)
…Toolkit" in "playwright.ipynb" integration.

- Completed the incomplete sentence in the Langchain Playwright
documentation.

- Enhanced documentation clarity to guide users on best practices for
instantiating browser instances with Langchain Playwright.

Example before:
> "It's always recommended to instantiate using the from_browser method
so that the

Example after:
> "It's always recommended to instantiate using the `from_browser`
method so that the browser context is properly initialized and managed,
ensuring seamless interaction and resource optimization."

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-06 19:55:00 +00:00
Bagatur
67ce05a0a7 core[patch]: make oai tool description optional (#27756) 2024-11-06 18:06:47 +00:00
Bagatur
b2da3115ed docs: document init_chat_model standard params (#27812) 2024-11-06 09:50:07 -08:00
Dobiichi-Origami
395674d503 community: re-arrange function call message parse logic for Qianfan (#27935)
The [PR](https://github.com/langchain-ai/langchain/pull/26208) from two
months ago introduced a potential bug that breaks `tool_call` handling for
`QianfanChatEndpoint`; this PR fixes it.
2024-11-06 09:58:16 -05:00
Erick Friis
41b7a5169d infra: starter codeowners file (#27929) 2024-11-05 16:43:11 -08:00
ccurme
66966a6e72 openai[patch]: release 0.2.6 (#27924)
Some additions in support of [predicted
outputs](https://platform.openai.com/docs/guides/latency-optimization#use-predicted-outputs)
feature:
- Bump openai sdk version
- Add integration test
- Add example to integration docs

The `prediction` kwarg is already plumbed through model invocation.
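A short sketch of the feature (model name is illustrative; the kwarg shape follows OpenAI's predicted-outputs API):

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

code = "def hello():\n    print('Hello, world!')\n"

# `prediction` is forwarded to OpenAI's predicted-outputs feature: supplying
# the expected, mostly-unchanged text reduces latency on small edits.
response = llm.invoke(
    "Rename the function to `greet`. Respond only with code.",
    prediction={"type": "content", "content": code},
)
print(response.content)
```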
2024-11-05 23:02:24 +00:00
Erick Friis
a8c473e114 standard-tests: ci pipeline (#27923) 2024-11-05 20:55:38 +00:00
Erick Friis
c3b75560dc infra: release note grep order of operations (#27922) 2024-11-05 12:44:36 -08:00
Erick Friis
b3c81356ca infra: release note compute 2 (#27921) 2024-11-05 12:04:41 -08:00
Erick Friis
bff2a8b772 standard-tests: add tools standard tests (#27899) 2024-11-05 11:44:34 -08:00
SHJUN
f6b2f82099 community: chroma error patch(attribute changed on chroma) (#27827)
Chroma renamed the `max_batch_size` attribute: it is now the
`get_max_batch_size()` method. This patch accounts for the rename so that
`create_batches`, defined right below it, can be used.

Reference: https://github.com/chroma-core/chroma/pull/2305
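A minimal sketch of the compatibility handling this implies (the helper name is illustrative):

```python
def _max_batch_size(client) -> int:
    # Newer chromadb releases expose a method; older ones had an attribute.
    if hasattr(client, "get_max_batch_size"):
        return client.get_max_batch_size()
    return client.max_batch_size
```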

---------

Signed-off-by: Prithvi Kannan <prithvi.kannan@databricks.com>
Co-authored-by: Prithvi Kannan <46332835+prithvikannan@users.noreply.github.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Jun Yamog <jkyamog@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: ono-hiroki <86904208+ono-hiroki@users.noreply.github.com>
Co-authored-by: Dobiichi-Origami <56953648+Dobiichi-Origami@users.noreply.github.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
Co-authored-by: Duy Huynh <vndee.huynh@gmail.com>
Co-authored-by: Rashmi Pawar <168514198+raspawar@users.noreply.github.com>
Co-authored-by: sifatj <26035630+sifatj@users.noreply.github.com>
Co-authored-by: Eric Pinzur <2641606+epinzur@users.noreply.github.com>
Co-authored-by: Daniel Vu Dao <danielvdao@users.noreply.github.com>
Co-authored-by: Ofer Mendelevitch <ofermend@gmail.com>
Co-authored-by: Stéphane Philippart <wildagsx@gmail.com>
2024-11-05 19:43:11 +00:00
Tomaz Bratanic
a3bbbe6a86 update llm graph transformer documentation (#27905) 2024-11-05 11:54:26 -05:00
Erick Friis
31f4fb790d standard-tests: release 0.3.0 (#27900) 2024-11-04 17:29:15 -08:00
Erick Friis
ba5cba04ff infra: get min versions (#27896) 2024-11-04 23:46:13 +00:00
Bagatur
6973f7214f docs: sidebar capitalization (#27894) 2024-11-04 22:09:32 +00:00
Stéphane Philippart
4b8cd7a09a community: Use new OVHcloud batch embedding (#26209)
- **Description:** perform the batch embedding server-side instead of
client-side
- **Twitter handle:** @wildagsx

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-11-04 16:40:30 -05:00
Erick Friis
a54f390090 infra: fix prev tag output (#27892) 2024-11-04 12:46:23 -08:00
Erick Friis
75f80c2910 infra: fix prev tag condition (#27891) 2024-11-04 12:42:22 -08:00
Ofer Mendelevitch
d7c39e6dbb community: update Vectara integration (#27869)
Thank you for contributing to LangChain!

- **Description:** Updated Vectara integration
- **Issue:** refreshed descriptions across all demos and added the UDF
reranker
- **Dependencies:** None
- **Twitter handle:** @ofermend

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-04 20:40:39 +00:00
Erick Friis
14a71a6e77 infra: fix prev tag calculation (#27890) 2024-11-04 12:38:39 -08:00
Daniel Vu Dao
5745f3bf78 docs: Update messages.mdx (#27856)
### Description
Updates phrasing for the header of the `Messages` section.

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-04 20:36:31 +00:00
sifatj
e02a5ee03e docs: Update VectorStore as_retriever method url in qa_chat_history_how_to.ipynb (#27844)
**Description**: Update VectorStore `as_retriever` method api reference
url in `qa_chat_history_how_to.ipynb`

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-04 20:34:50 +00:00
sifatj
dd1711f3c2 docs: Update max_marginal_relevance_search api reference url in multi_vector.ipynb (#27843)
**Description**: Update VectorStore `max_marginal_relevance_search` api
reference url in `multi_vector.ipynb`

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-04 20:31:36 +00:00
sifatj
aa1f46a03a docs: Update VectorStore .as_retriever method url in vectorstore_retriever.ipynb (#27842)
**Description**: Update VectorStore `.as_retriever` method url in
`vectorstore_retriever.ipynb`

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-04 20:28:11 +00:00
Eric Pinzur
8eb38622a6 community: fixed bug in GraphVectorStoreRetriever (#27846)
Description:

This fixes an issue that was mistakenly introduced in
https://github.com/langchain-ai/langchain/pull/27253. The issue
currently exists only in `langchain-community==0.3.4`.

Test cases were added to prevent this issue in the future.

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-04 20:27:17 +00:00
sifatj
eecf95df9b docs: Update VectorStore api reference url in rag.ipynb (#27841)
**Description**: Update VectorStore api reference url in `rag.ipynb`

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-04 20:27:03 +00:00
sifatj
50563400fb docs: Update broken vectorstore urls in retrievers.ipynb (#27838)
**Description**: Update outdated `VectorStore` api reference urls in
`retrievers.ipynb`

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-04 20:26:03 +00:00
Bagatur
dfa83531ad qdrant,nomic[minor]: bump core deps (#27849) 2024-11-04 20:19:50 +00:00
Erick Friis
4e5cc84d40 infra: release tag compute (#27836) 2024-11-04 12:16:51 -08:00
Rashmi Pawar
f86a09f82c Add nvidia as provider for embedding, llm (#27810)
Documentation: Add NVIDIA as integration provider

cc: @mattf @dglogo

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-11-04 19:45:51 +00:00
Erick Friis
0c62684ce1 Revert "infra: add neo4j to package list" (#27887)
Reverts langchain-ai/langchain#27833

Wait for release
2024-11-04 18:18:38 +00:00
Erick Friis
bcf499df16 infra: add neo4j to package list (#27833) 2024-11-04 09:24:04 -08:00
Duy Huynh
a487ec47f4 community: set default output_token_limit value for PowerBIToolkit to fix validation error (#26308)
### Description:
This PR sets a default value of `output_token_limit = 4000` for the
`PowerBIToolkit` to fix an unintended validation error.

### Problem:
When attempting to run a code snippet from [Langchain's PowerBI toolkit
documentation](https://python.langchain.com/v0.1/docs/integrations/toolkits/powerbi/)
to interact with a `PowerBIDataset`, the following error occurs:

```
pydantic.v1.error_wrappers.ValidationError: 1 validation error for QueryPowerBITool
output_token_limit
  none is not an allowed value (type=type_error.none.not_allowed)
```

### Root Cause:
The issue arises because when creating a `QueryPowerBITool`, the
`output_token_limit` parameter is unintentionally set to `None`, which
is the current default for `PowerBIToolkit`. However, `QueryPowerBITool`
expects a default value of `4000` for `output_token_limit`. This
unintended override causes the error.


17659ca2cd/libs/community/langchain_community/agent_toolkits/powerbi/toolkit.py (L63)

17659ca2cd/libs/community/langchain_community/agent_toolkits/powerbi/toolkit.py (L72-L79)

17659ca2cd/libs/community/langchain_community/tools/powerbi/tool.py (L39)

### Solution:
To resolve this, the default value of `output_token_limit` is now
explicitly set to `4000` in `PowerBIToolkit` to prevent the accidental
assignment of `None`.
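A hedged sketch of the fix (the class below is an illustrative stand-in, not the actual toolkit code; the toolkit uses pydantic v1 models):

```python
from pydantic.v1 import BaseModel

class ExamplePowerBIToolkit(BaseModel):
    # Previously Optional[int] = None, which was forwarded to QueryPowerBITool
    # and failed its validation; the default now matches the tool's expected 4000.
    output_token_limit: int = 4000
```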

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-11-04 14:34:27 +00:00
Dobiichi-Origami
f7ced5b211 community: read function call from tool_calls for Qianfan (#26208)
I added one more `elif` to read the tool call message from `tool_calls`.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-11-04 14:33:32 +00:00
ono-hiroki
b7d549ae88 docs: fix undefined 'data' variable in document_loader_csv.ipynb (#27872)
**Description:**
This PR addresses an issue in the CSVLoader example where `data` is not
defined, causing a NameError. The line `data = loader.load()` is added to
correctly assign the output of `loader.load()` to the `data` variable.
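For reference, a minimal version of the corrected example (the file path is illustrative):

```python
from langchain_community.document_loaders.csv_loader import CSVLoader

loader = CSVLoader(file_path="example.csv")
data = loader.load()  # the line this PR adds, so `data` is defined
print(data[0].page_content)
```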
2024-11-04 14:10:56 +00:00
Bagatur
3b0b7cfb74 chroma[minor]: release 0.2.0 (#27840) 2024-11-01 18:12:00 -07:00
Jun Yamog
830cad7bc0 core: fix CommaSeparatedListOutputParser to handle columns that may contain commas in it (#26365)
- **Description:**
Currently `CommaSeparatedListOutputParser` can't handle strings that contain
commas within a quoted column; it parses every comma as a delimiter.
Ex.
"foo, foo2", "bar", "baz"

It will create 4 columns: "foo", "foo2", "bar", "baz"

This should be 3 columns:

"foo, foo2", "bar", "baz"

- **Dependencies:**
Added 2 additional imports, but they are built-in Python packages:
`import csv` and `from io import StringIO`.

- **Twitter handle:** @jkyamog

- **Add tests and docs:**
1. added a simple unit test, `test_multiple_items_with_comma`

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-11-01 22:42:24 +00:00
Erick Friis
9fedb04dd3 docs: INVALID_CHAT_HISTORY redirect (#27845) 2024-11-01 21:35:11 +00:00
Erick Friis
03a3670a5e infra: remove some special cases (#27839) 2024-11-01 21:13:43 +00:00
Bagatur
002e1c9055 airbyte: remove from master (#27837) 2024-11-01 13:59:34 -07:00
1155 changed files with 13879 additions and 45061 deletions

.github/CODEOWNERS (new file)
View File

@@ -0,0 +1,2 @@
/.github/ @efriis @baskaryan @ccurme
/libs/packages.yml @efriis

View File

@@ -1,7 +1,7 @@
Thank you for contributing to LangChain!
- [ ] **PR title**: "package: description"
- Where "package" is whichever of langchain, community, core, etc. is being modified. Use "docs: ..." for purely docs changes, "templates: ..." for template changes, "infra: ..." for CI changes.
- Where "package" is whichever of langchain, community, core, etc. is being modified. Use "docs: ..." for purely docs changes, "infra: ..." for CI changes.
- Example: "community: add foobar LLM"

View File

@@ -37,7 +37,6 @@ IGNORED_PARTNERS = [
PY_312_MAX_PACKAGES = [
    f"libs/partners/{integration}"
    for integration in [
        "anthropic",
        "chroma",
        "couchbase",
        "huggingface",
@@ -307,7 +306,7 @@ if __name__ == "__main__":
f"Unknown lib: {file}. check_diff.py likely needs "
"an update for this new library!"
)
elif any(file.startswith(p) for p in ["docs/", "templates/", "cookbook/"]):
elif any(file.startswith(p) for p in ["docs/", "cookbook/"]):
if file.startswith("docs/"):
docs_edited = True
dirs_to_run["lint"].add(".")

View File

@@ -7,17 +7,23 @@ else:
    # for python 3.10 and below, which doesnt have stdlib tomllib
    import tomli as tomllib
from packaging.version import parse as parse_version
from packaging.specifiers import SpecifierSet
from packaging.version import Version
import requests
from packaging.version import parse
from typing import List
import re
MIN_VERSION_LIBS = [
    "langchain-core",
    "langchain-community",
    "langchain",
    "langchain-text-splitters",
    "numpy",
    "SQLAlchemy",
]
@@ -31,29 +37,61 @@ SKIP_IF_PULL_REQUEST = [
]
def get_min_version(version: str) -> str:
    # base regex for x.x.x with cases for rc/post/etc
    # valid strings: https://peps.python.org/pep-0440/#public-version-identifiers
    vstring = r"\d+(?:\.\d+){0,2}(?:(?:a|b|rc|\.post|\.dev)\d+)?"
    # case ^x.x.x
    _match = re.match(f"^\\^({vstring})$", version)
    if _match:
        return _match.group(1)
    # case >=x.x.x,<y.y.y
    _match = re.match(f"^>=({vstring}),<({vstring})$", version)
    if _match:
        _min = _match.group(1)
        _max = _match.group(2)
        assert parse_version(_min) < parse_version(_max)
        return _min
    # case x.x.x
    _match = re.match(f"^({vstring})$", version)
    if _match:
        return _match.group(1)
    raise ValueError(f"Unrecognized version format: {version}")
def get_pypi_versions(package_name: str) -> List[str]:
    """
    Fetch all available versions for a package from PyPI.
    Args:
        package_name (str): Name of the package
    Returns:
        List[str]: List of all available versions
    Raises:
        requests.exceptions.RequestException: If PyPI API request fails
        KeyError: If package not found or response format unexpected
    """
    pypi_url = f"https://pypi.org/pypi/{package_name}/json"
    response = requests.get(pypi_url)
    response.raise_for_status()
    return list(response.json()["releases"].keys())
def get_minimum_version(package_name: str, spec_string: str) -> Optional[str]:
"""
Find the minimum published version that satisfies the given constraints.
Args:
package_name (str): Name of the package
spec_string (str): Version specification string (e.g., ">=0.2.43,<0.4.0,!=0.3.0")
Returns:
Optional[str]: Minimum compatible version or None if no compatible version found
"""
# rewrite occurrences of ^0.0.z to 0.0.z (can be anywhere in constraint string)
spec_string = re.sub(r"\^0\.0\.(\d+)", r"0.0.\1", spec_string)
# rewrite occurrences of ^0.y.z to >=0.y.z,<0.y+1 (can be anywhere in constraint string)
for y in range(1, 10):
spec_string = re.sub(rf"\^0\.{y}\.(\d+)", rf">=0.{y}.\1,<0.{y+1}", spec_string)
# rewrite occurrences of ^x.y.z to >=x.y.z,<x+1.0.0 (can be anywhere in constraint string)
for x in range(1, 10):
spec_string = re.sub(
rf"\^{x}\.(\d+)\.(\d+)", rf">={x}.\1.\2,<{x+1}", spec_string
)
spec_set = SpecifierSet(spec_string)
all_versions = get_pypi_versions(package_name)
valid_versions = []
for version_str in all_versions:
try:
version = parse(version_str)
if spec_set.contains(version):
valid_versions.append(version)
except ValueError:
continue
return str(min(valid_versions)) if valid_versions else None
def get_min_version_from_toml(
@@ -96,7 +134,7 @@ def get_min_version_from_toml(
][0]["version"]
# Use parse_version to get the minimum supported version from version_string
min_version = get_min_version(version_string)
min_version = get_minimum_version(lib, version_string)
# Store the minimum version in the min_versions dictionary
min_versions[lib] = min_version
@@ -112,6 +150,20 @@ def check_python_version(version_string, constraint_string):
    :param constraint_string: A string representing the package's Python version constraints (e.g. ">=3.6, <4.0").
    :return: True if the version matches the constraints, False otherwise.
    """
    # rewrite occurrences of ^0.0.z to 0.0.z (can be anywhere in constraint string)
    constraint_string = re.sub(r"\^0\.0\.(\d+)", r"0.0.\1", constraint_string)
    # rewrite occurrences of ^0.y.z to >=0.y.z,<0.y+1.0 (can be anywhere in constraint string)
    for y in range(1, 10):
        constraint_string = re.sub(
            rf"\^0\.{y}\.(\d+)", rf">=0.{y}.\1,<0.{y+1}.0", constraint_string
        )
    # rewrite occurrences of ^x.y.z to >=x.y.z,<x+1.0.0 (can be anywhere in constraint string)
    for x in range(1, 10):
        constraint_string = re.sub(
            rf"\^{x}\.0\.(\d+)", rf">={x}.0.\1,<{x+1}.0.0", constraint_string
        )
    try:
        version = Version(version_string)
        constraints = SpecifierSet(constraint_string)
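
To make the caret rewriting above concrete, a standalone demo of the same
transforms (this is an illustration, not the script itself; it assumes the
`packaging` library is installed):

```python
import re
from packaging.specifiers import SpecifierSet

def caret_to_specifier(spec: str) -> str:
    spec = re.sub(r"\^0\.0\.(\d+)", r"0.0.\1", spec)  # ^0.0.z -> 0.0.z
    for y in range(1, 10):  # ^0.y.z -> >=0.y.z,<0.y+1
        spec = re.sub(rf"\^0\.{y}\.(\d+)", rf">=0.{y}.\1,<0.{y + 1}", spec)
    for x in range(1, 10):  # ^x.y.z -> >=x.y.z,<x+1
        spec = re.sub(rf"\^{x}\.(\d+)\.(\d+)", rf">={x}.\1.\2,<{x + 1}", spec)
    return spec

print(caret_to_specifier("^0.3.15"))  # >=0.3.15,<0.4
print(SpecifierSet(caret_to_specifier("^1.2.3")).contains("1.9.0"))  # True
```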

View File

@@ -41,12 +41,6 @@ jobs:
shell: bash
run: poetry run pip install "boto3<2" "google-cloud-aiplatform<2"
- name: 'Authenticate to Google Cloud'
id: 'auth'
uses: google-github-actions/auth@v2
with:
credentials_json: '${{ secrets.GOOGLE_CREDENTIALS }}'
- name: Run integration tests
shell: bash
env:
@@ -81,11 +75,11 @@ jobs:
ES_URL: ${{ secrets.ES_URL }}
ES_CLOUD_ID: ${{ secrets.ES_CLOUD_ID }}
ES_API_KEY: ${{ secrets.ES_API_KEY }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # for airbyte
MONGODB_ATLAS_URI: ${{ secrets.MONGODB_ATLAS_URI }}
VOYAGE_API_KEY: ${{ secrets.VOYAGE_API_KEY }}
COHERE_API_KEY: ${{ secrets.COHERE_API_KEY }}
UPSTAGE_API_KEY: ${{ secrets.UPSTAGE_API_KEY }}
XAI_API_KEY: ${{ secrets.XAI_API_KEY }}
run: |
make integration_tests

View File

@@ -95,9 +95,30 @@ jobs:
PKG_NAME: ${{ needs.build.outputs.pkg-name }}
VERSION: ${{ needs.build.outputs.version }}
run: |
REGEX="^$PKG_NAME==\\d+\\.\\d+\\.\\d+\$"
echo $REGEX
PREV_TAG=$(git tag --sort=-creatordate | grep -P $REGEX || true | head -1)
PREV_TAG="$PKG_NAME==${VERSION%.*}.$(( ${VERSION##*.} - 1 ))"; [[ "${VERSION##*.}" -eq 0 ]] && PREV_TAG=""
# backup case if releasing e.g. 0.3.0, looks up last release
# note if last release (chronologically) was e.g. 0.1.47 it will get
# that instead of the last 0.2 release
if [ -z "$PREV_TAG" ]; then
REGEX="^$PKG_NAME==\\d+\\.\\d+\\.\\d+\$"
echo $REGEX
PREV_TAG=$(git tag --sort=-creatordate | (grep -P $REGEX || true) | head -1)
fi
# if PREV_TAG is empty, let it be empty
if [ -z "$PREV_TAG" ]; then
echo "No previous tag found - first release"
else
# confirm prev-tag actually exists in git repo with git tag
GIT_TAG_RESULT=$(git tag -l "$PREV_TAG")
if [ -z "$GIT_TAG_RESULT" ]; then
echo "Previous tag $PREV_TAG not found in git repo"
exit 1
fi
fi
TAG="${PKG_NAME}==${VERSION}"
if [ "$TAG" == "$PREV_TAG" ]; then
echo "No new version to release"
@@ -231,7 +252,7 @@ jobs:
working-directory: ${{ inputs.working-directory }}
id: min-version
run: |
poetry run pip install packaging
poetry run pip install packaging requests
python_version="$(poetry run python --version | awk '{print $2}')"
min_versions="$(poetry run python $GITHUB_WORKSPACE/.github/scripts/get_min_versions.py pyproject.toml release $python_version)"
echo "min-versions=$min_versions" >> "$GITHUB_OUTPUT"
@@ -246,12 +267,6 @@ jobs:
make tests
working-directory: ${{ inputs.working-directory }}
- name: 'Authenticate to Google Cloud'
id: 'auth'
uses: google-github-actions/auth@v2
with:
credentials_json: '${{ secrets.GOOGLE_CREDENTIALS }}'
- name: Import integration test dependencies
run: poetry install --with test,test_integration
working-directory: ${{ inputs.working-directory }}
@@ -289,11 +304,11 @@ jobs:
ES_URL: ${{ secrets.ES_URL }}
ES_CLOUD_ID: ${{ secrets.ES_CLOUD_ID }}
ES_API_KEY: ${{ secrets.ES_API_KEY }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # for airbyte
MONGODB_ATLAS_URI: ${{ secrets.MONGODB_ATLAS_URI }}
VOYAGE_API_KEY: ${{ secrets.VOYAGE_API_KEY }}
UPSTAGE_API_KEY: ${{ secrets.UPSTAGE_API_KEY }}
FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
XAI_API_KEY: ${{ secrets.XAI_API_KEY }}
run: make integration_tests
working-directory: ${{ inputs.working-directory }}

View File

@@ -47,7 +47,7 @@ jobs:
id: min-version
shell: bash
run: |
poetry run pip install packaging tomli
poetry run pip install packaging tomli requests
python_version="$(poetry run python --version | awk '{print $2}')"
min_versions="$(poetry run python $GITHUB_WORKSPACE/.github/scripts/get_min_versions.py pyproject.toml pull_request $python_version)"
echo "min-versions=$min_versions" >> "$GITHUB_OUTPUT"

View File

@@ -72,9 +72,7 @@ jobs:
- name: Install dependencies
working-directory: langchain
run: |
# skip airbyte due to pandas dependency issue
python -m uv pip install $(ls ./libs/partners | grep -vE "airbyte" | xargs -I {} echo "./libs/partners/{}")
python -m uv pip install $(ls ./libs/partners | xargs -I {} echo "./libs/partners/{}")
python -m uv pip install libs/core libs/langchain libs/text-splitters libs/community libs/experimental
python -m uv pip install -r docs/api_reference/requirements.txt

View File

@@ -31,7 +31,7 @@ jobs:
uses: Ana06/get-changed-files@v2.2.0
- id: set-matrix
run: |
python -m pip install packaging
python -m pip install packaging requests
python .github/scripts/check_diff.py ${{ steps.files.outputs.all }} >> $GITHUB_OUTPUT
outputs:
lint: ${{ steps.set-matrix.outputs.lint }}

View File

@@ -1,11 +1,11 @@
# Migrating
Please see the following guides for migratin LangChain code:
Please see the following guides for migrating LangChain code:
* Migrate to [LangChain v0.3](https://python.langchain.com/docs/versions/v0_3/)
* Migrate to [LangChain v0.2](https://python.langchain.com/docs/versions/v0_2/)
* Migrating from [LangChain 0.0.x Chains](https://python.langchain.com/docs/versions/migrating_chains/)
* Upgrate to [LangGraph Memory](https://python.langchain.com/docs/versions/migrating_memory/)
* Upgrade to [LangGraph Memory](https://python.langchain.com/docs/versions/migrating_memory/)
The [LangChain CLI](https://python.langchain.com/docs/versions/v0_3/#migrate-using-langchain-cli) can help automatically upgrade your code to use non deprecated imports.
The [LangChain CLI](https://python.langchain.com/docs/versions/v0_3/#migrate-using-langchain-cli) can help you automatically upgrade your code to use non-deprecated imports.
This will be especially helpful if you're still on either version 0.0.x or 0.1.x of LangChain.

View File

@@ -66,12 +66,12 @@ spell_fix:
## lint: Run linting on the project.
lint lint_package lint_tests:
poetry run ruff check docs templates cookbook
poetry run ruff format docs templates cookbook --diff
poetry run ruff check --select I docs templates cookbook
git grep 'from langchain import' docs/docs templates cookbook | grep -vE 'from langchain import (hub)' && exit 1 || exit 0
poetry run ruff check docs cookbook
poetry run ruff format docs cookbook cookbook --diff
poetry run ruff check --select I docs cookbook
git grep 'from langchain import' docs/docs cookbook | grep -vE 'from langchain import (hub)' && exit 1 || exit 0
## format: Format the project files.
format format_diff:
poetry run ruff format docs templates cookbook
poetry run ruff check --select I --fix docs templates cookbook
poetry run ruff format docs cookbook
poetry run ruff check --select I --fix docs cookbook

View File

@@ -59,7 +59,8 @@ For these applications, LangChain simplifies the entire application lifecycle:
- **[LangGraph Cloud](https://langchain-ai.github.io/langgraph/cloud/)**: Turn your LangGraph applications into production-ready APIs and Assistants.
![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](docs/static/svg/langchain_stack_062024.svg "LangChain Architecture Overview")
![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](docs/static/svg/langchain_stack_112024.svg#gh-light-mode-only "LangChain Architecture Overview")
![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](docs/static/svg/langchain_stack_112024_dark.svg#gh-dark-mode-only "LangChain Architecture Overview")
## 🧱 What can you build with LangChain?

View File

@@ -62,4 +62,5 @@ Notebook | Description
[wikibase_agent.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/wikibase_agent.ipynb) | Create a simple wikibase agent that utilizes sparql generation, with testing done on http://wikidata.org.
[oracleai_demo.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/oracleai_demo.ipynb) | This guide outlines how to utilize Oracle AI Vector Search alongside Langchain for an end-to-end RAG pipeline, providing step-by-step examples. The process includes loading documents from various sources using OracleDocLoader, summarizing them either within or outside the database with OracleSummary, and generating embeddings similarly through OracleEmbeddings. It also covers chunking documents according to specific requirements using Advanced Oracle Capabilities from OracleTextSplitter, and finally, storing and indexing these documents in a Vector Store for querying with OracleVS.
[rag-locally-on-intel-cpu.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/rag-locally-on-intel-cpu.ipynb) | Perform Retrieval-Augmented-Generation (RAG) on locally downloaded open-source models using langchain and open source tools and execute it on Intel Xeon CPU. We showed an example of how to apply RAG on Llama 2 model and enable it to answer the queries related to Intel Q1 2024 earnings release.
[visual_RAG_vdms.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/visual_RAG_vdms.ipynb) | Performs Visual Retrieval-Augmented-Generation (RAG) using videos and scene descriptions generated by open source models.
[visual_RAG_vdms.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/visual_RAG_vdms.ipynb) | Performs Visual Retrieval-Augmented-Generation (RAG) using videos and scene descriptions generated by open source models.
[contextual_rag.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/contextual_rag.ipynb) | Performs contextual retrieval-augmented generation (RAG) prepending chunk-specific explanatory context to each chunk before embedding.

File diff suppressed because it is too large

View File

@@ -530,7 +530,6 @@ def _out_file_path(package_name: str) -> Path:
def _build_index(dirs: List[str]) -> None:
    custom_names = {
        "airbyte": "Airbyte",
        "aws": "AWS",
        "ai21": "AI21",
        "ibm": "IBM",

(The remaining file diffs contain only base64-encoded binary blobs and have been omitted.)

File diff suppressed because one or more lines are too long

View File

@@ -1 +1 @@
eNqlV21wFdUZDiKWqf6g7RTtVOT0Dh2+spvd+52ktOaDhISESz5ICGKTc3fPzZ5kv7Jn9yaXJHakdlpRyqwOY4cWWyAklUlBBRWFoJ0WWoodWywqqaDTH8WvmYrMiLW19N2995IboaL1/Lh3z573vOc57/uc5z27cSxNLEYNfcY41W1iYcmGDnM3jlmkzyHMvmdUI7ZiyCOrEy2tuxyLnl6i2LbJykpKsEl5wyQ6prxkaCVpsURSsF0Cz6ZKfDcjSUPOTM5+azCgEcZwN2GBMnTHYEAyYC3dhk6gw3CQgtMEYUkCG2QbCCMT6zJmSMY2TllYI6hLTnXxKN9WEIsgCrYKQYZjm46NjJRnwysEy4sW87bRqWGrVzb69UWLu8rW6/mZU09dXV1TnaH8T4tjpWmayH5ntaRiQDSEVnkQPlsbQi1kIO+1optkvdOkSvVuVtJiGg4jDFUkDWzlFsMWRISVVClUleGxcAzVwCAaWq8PcRxX5v3kW0GnbKiM++zt8qShKxxe0T7dWJmHEyEhizzbCjoheGy0eJTohz2uwJYFeay0sKPL14inhlWST1UwWOj9Y038hDEfSIwPRjxPPs5p1gUd0cfJeFRvKLqHUFZJBi2qUQ1IjkTgDe3uZqhVwRliLUZVjuZlFialiI/UdxSKfw6cMZEPxkOhHM7g/8Dpx5MyANqAqY7hbNDeXqoT/Vr8LMQZjF4rZp8cz1IvoFmcoWvGE0uesABQbCtoUQNVM6gRZ9BqQtTFqMaxLaKq5Oo4Q5HPEc9IiBfR5byHr8HPdqqqFGsAU7cyqAIg6Z+en5+I85rxjPNCjp9XFaupp1qQKh3kErTEQr5ag+gWo36L2sQXx9UZkG4dSYZMfGHVWT9YUntKSpuJ7Vg6Sqxq6PBnpLFK5WnzQIqRbtgK0BsRlZGpudWGvtBGIJEOyGPmKjquZ5BKkxa2KOQbXMDatoJ1lHSoanNUz69j6DDurZPVfT5QjAKWoRKvPrAMs4kWGC5G08pGO1SahdkKIBkWUAZ7e0dJYvcTLybdWeApEM5Cb16kAsN3whsN9qZ6r7pNmwvxEQ7ikDQ8Wx3eivDPgIlYg04Kw7bhBeAwoUKCoedL4GPeO8NQOyXFoJL3bjBgZ0x/oZSj+5XUc3j52TPQoZZ4Bqa/9U6LmGonZnZgeDjnLVci/29HYCcTJlnUzJkGKvJxZgocLR6tYR47qJ8jMkAkB9hi5jOuaRA3OKZ1uldWmWI4qgxhBZZlqTHdEM6JAuF2mMeOnE9DLUbMgFsD1by0Z+szjOBk0iJpim0orxwcl16CmOMVchvJBlgCyZBqGL3TDJMkBYKbWwAsgTTANCtHZZ8pJvauCHB3YX5cTAvuJJZNSbYLx8LK+E8fi4pPbqZT0yS2FwnL8WOcjztkH5bMpsW7ClGLyF5mcg7vLDA1kj1E8jI4fOfwmHcDAShni+aMKAaz3b3Tb0n7vOMBjIMaYsiwgPur7g3ULEYySQGFyR6guE78JLt7egkxOQh6moxmZ7mPYtNUqeRzvaSHGfp47khwHpYrh/d4fOfgMOi2eyABICrqSnJcEPlIkBceHeCYDXVDhSPLqRjwjGYTfKhwwMRSLzjhcldFdzQ7eW+hjcHc3Y1YSrRMc4ktSXF3Y0uLhvcXvodge/Rwx6pWX7lcbnBqORBugQ8/Ns0xy+iSu9s/mk9Nm0xsK8NJBvhwdwijEhCKEvf0e52dUqozqS3r4/sigoNXOj1rjY54ooJvaGteEw3H19WapWk70daeXhcKV9O2BrmXE2PB0mhQiMZCnMgLvMiL3IDSlrI6K/poe1roXL5OSsj1lWwgqEb5FX316+zKsFAXCTVWVTcm61g9WdPRUy/3rSTrkqFqIjcFG2mTUpkyI6lQ28qoXNvhVDu1LevEpnIE6Jw0lZetijXwTdqKGiWoKFUUr6kN19Utb0ltiHS0x2tWVcbWhOpKIw1taaeuqQBeUAxzQg5hVAjHBa/tzXMDSle3rbgjwWA8+EuLMBNu5uT7oxAz22EbR7yD8Pzvx3I39J2JlVMcnjtSDaR0J1od0D+4dCUkGwWFYBiJ8TJRLBNLUW1j63hVbp3Wq3LwsVYLDmsKeLg8z/kxSXH0XiLvqboq2yeyJYTz8INwc2TANBjhcqjc8bVcc/bbhKur3p89WpxhdWOdbvCXdSd82vdvGOiXJUeWlXS/JpRuCIdokjhS6kBuCgiFtwwA4jTm7orExb25kTzx9sBeBU4UOEF8ZoAD5Scq1SgE1P/NfSAxdyQC0T54pYFt9BL4lBoL++kQjhRaWHCrobq39pSbcGlp6eGrG+VdhcAkEg8/M92KkUI0YlBjB680yLnYFdXY+EDenKOye3oBdDrDwKKIHA6SmBBPihiHJSxIYlwIiim5NCnHnvYEUQI3XjZNw7I5BpUDbhoZ93Sxhgc8lVkWEiOhKGy1HHRaUh2ZtDjJasPbBCtHJlRpA8v7qmq4KiwphGvxCeiOVXesqmisq3pyLVfIJC5hZj9Fx3QDZDqVGm0hFmTG3SOphiODXFpkFHw1V3S4B+JyNBkLpiSMcTKULA1xlSBEeW+XeTfiae0YhrsHS0vufiW0LFAWDocC5VCNlsWjkCf/g/Xu0az4H53x6vz7Zhf5baba3LhlUpgzca59LduxtWjWJFmu/3XfgZOb0k+MzV0t/6zuby+dP/hky5e+8Z/BrddXdr38hRPbzl08OzTvumO3j84R58y6uDPc8967Hyz+48PP/u611uWXkpt+ftuFMy/27PveO8c5/KqgXPz6D76zf3Rj/a4/rTw5r7jY2rL5rXfuOrL++d8c/ejxWfX1TZEDr35w/NCbZ37bnq590Iw+Wy7e+MB186rVecKxV3ZM/PRWZ9Pgj9bGv11bb//43OymS/eKj45sm22eG7mwfvCWztsrJnfpJ1/a2jzzua88Um3u5o+dKb3p7EM1fbsfGPjmzef3f/S1h+aff7v76TtOnfph55KlN9y13dYm73m8/r4jFTMPb6mNpe6/++/kROSJkhUL526uP3gh9uX7ZzyVWbPz/cYPzVeee+2L2sTdykz65He33Gbduvn0Gzc+FZ/UxzaOD19IL/nqvU78YS1m3rP0qPTu5GC58pOj49b67WWZTfP/9edFf9i8e/uhN8Ny88r2BXOf7l16w7nj9tnNibeLTp1q/fXaJZsT3/qo4szsI/9oe2Teuczb/Bv/PjD/9eMs0sW9ML50YsHyF0j5+3e9eP3Ns0Zv7rnlpk2D9MNt1e2/+Ms/b9z6+sj1e9O3nVjwYFlt7/g7LwfjN1yaUVR06dLMItZz+NDWmUVF/wULibGL
eNqlV310FNUVh6P2cKpW2x5RCtV3tiqgmcnszGY/gjm6+SRfJGYDJAiGtzNvdyY7M28yH7tZklhJeyytFDu1H3q0VDQkGiMKqIgardhS21prtWKxPdSjPbRHOVKl1h612juzu7gRClrfH7vz5t133+/d+3u/+2Z0IktMS6H67ClFt4mJRRs6ljs6YZIBh1j2N8c1YstUGuvsSHTf4ZjK/ktk2zas6spKbCgsNYiOFVakWmU2WCnK2K6EZ0MlvpuxJJXyL83ZOxTQiGXhNLEC1eiqoYBIYS3dhk6glzpIxlmCsCiCDbIpwsjAuoQtJGEbp0ysEbRWSq1lkdeWEpMgBexkgqhjG46NaMobZ2WCpUWLWZv2adjMSDSnL1q8tnq17s0q/K5du7bwMFz6SThmVskSye90iioGBMNombfkp2vDKEEGS17jaVLwriRVRU9blQmDOhaxUDxJsVlcDJsQAauyTlZUCR7Lx1AjDKLh1fowwzDV3k+plXWqh6uZT9+OTho+xuEx7ZONVXs4EeIKyAutrCPAY7vJoo4c7HEpNk3IXa2JHV06STw1rJJSqni+3PvHWvAEYz6QCMtXeZ58nDOsyzpBH6fFohYq6x5CSSV5tKhRpZAckcAbJZ22ULeM88RcjOoczcssTEoRH6nvSIh+BpyRIMtHBaGIk/8fOP14KhYAbcOKjuE8KJmMohP9ZPwsx8mHTxazE8cz5gW0gFM4aTyx6AkJAMW2jBa1KWoeteM86iREXYwaHdskqkqOj1Oo+gzxrBLYIDqa99BJ+LlSUVUFawBTN/MoDpD0T87PE+I8aTyjLFfk5wyRKvw2gTzpIImgHybyFRmEtQLlTMUmvgh25kGedSRSifjiqVs5sFTsglx2EdsxddSxrK3Xt85iVZFmzAGpRTq1ZaAzIqpFCvPqqb7QRiCHDkhh/jgareeRqiRNbCqQW5gOa9oy1lHSUVSbUfTSGlSHcW+NgqazgQoUMKlKPO238pZNtMBIBZpRElZCFVlYUHiRmkAP7O0ZJYmdI14s0gXQKRDJcm9ehAIja+CNBvtSvVdpw2YEtoqBGCSpZ6vD2yD8W8A6rEEnhWHL8AJwGFD9wNDzxbER7x2lap8oU0X03g0F7LzhL5RydL9Keg6PPnsGOtQNz8Dwt95nEkPtw5YdGBkpeiuWv//bEdhJxBJNxSiaBuKlOFsyHCMWLbc8Vih+jsggER1giVHKtqZB3OBINute2bRk6qgShBXYVaDFTEM4EzKE27E8ZhR9UrUCWRRuBIrmpb1Qf2EEJ5MmySrYhlLKwNHIEGQ5XqG2kUTBEgiGVEozMwyTJAXiWlwALIE0wDSzSGGfKQb2yj/cSyw/LoYJ9w3TVkihC8fBzPtPH4uKT2xLVwyD2F4kTMePcSnukH1YspAW75qjmETyMlN0uKbMlCb7iehlcGTNyIR3wwAoB2adPSZTy3a3zbwB3esdD2Ac1AsqwQLuPel1ilGBJJICCpNJoLhO/CS7kxlCDAaCniXjhVnufdgwVEX0uV7Zb1F9qngkGA/LscOTHt8ZOAy67d7fASDizZVFLgTZKp7l7xtkLBtqhApHllEx4BkvJPiR8gEDixlwwhSvge54YfK2chtquVvbsdiRmOESm6LsbsWmFg7tLH8Pwfbo4U7UdR67XHHwo+VApINsZPsMx1ZeF92t/tHcNWMysc08I1Lw4W7hxkUglELc/W/19YmpvqRW04SlRH3Xst7a5U6ylaZXSuHBfFeOmo3hlvBAq9rW2iRYWM5mFK2BCUYELhaLBCNRJshybJANMrE819telWoLW0Y6UxvvbheaM3GdV9MKqya62CuXLY9Gu4QVgh61VzirrFaps76hU5RbOzKx/owWCqV7M0aLkWrl+1esaGrpieRX9TfUxpcgQOdkFammebCnpc0wO6406htj+Raq1vNasr0hsVxP9Rh1bEMk3dTW7Th8w2CmDB4fDTNcEWGYC0U5r20rcQPKVNqW3TGeD4XvNIllwK2bfGMcYmY71uiYdxCefmqiePu+vaP1Iw7PHasHUrrT3bJTgbgIWkaziOf4EAqGqwWhmuNRU3v3VF1xne7jcnB7twmHNQU8bChxfkKUHT1DpMm647J9ulBCGA8/CDdDBg1qEaaIyp3qYboK3x1Mc/3OwtFiqJnGurLOX9ad9mmfWzeYk0RHkuRsTuNi60KCkiSOmLq/OAWEwlsGADGa5d4REqLbiiMl4k3CXjkmyDFc8OFBBpSfqIqmQED93+LHj+WOVUG0HzrWwKYZAp9JEyE/Hdxj5RYm3GAU3Vv7IzehWCz26PGNSq4EMKmKhR6eaWWRcjRBXrMeOtag6OKOsGZNDZbMGUVy918InT4pIgYFjiN8FRfFISESJimuKpTiQgSHQ1KM3+0JoghuvGwa1LQZCyoH3DDy7v4KDQ96KlMjBKuEMGx1Cei0qDoSSTjJeuptwlqCDKjSFEv31jUydViUCZPwCehO1Pcui7c31z3Yw5QziekwCp+ZEzoFmU6lxhPEhMy4k6JKHQnk0iTj4Ksr3uveH5UIScY4IZLkuGhKjDC1IEQlb0d5N+Zp7QSGu4eVFd2dslATqA6FhMASqEY10TDkyf8YXT9eEP9fzP73BdfPmeW3UzYm/rTpJe7skd+t7DlSe+nsV3Zcheg/x++ZvEd+4LUzO6Vbm/+xb/uaoQCDPphu2XPx+q678humYtYzN8+74azZIhLnzNvALHR7X7jl6wcuv2b1Te9NX/PAzeyB4f/w73LTQ398ddcA/8VnmfNSRzaMbu4dwfzd3xv7/TkVC8w9tU503lmpnb8yqi9Z//TP7/z8qv67Im/85V+LcvOfn39G066FB/ctXX9+7YNnnCce6vzg+q84GzvO3hF/J/b9Le9c1jDn9Qv3/mze8vprhyprBtqf1a7AoS3sr/ft7mKu3Xil+3JL1d4XcvJPbokPbL39p7nDBx7bfOcF7x0+Kxlcevlvswev1g8++aO+i6Z3knjztoaa2+Ze/csnTxf2XnFk6q8vfVX77qYFq84Z6Dj3Sxtn55TNW8ZvH45OvZzbpO8aNU4b2LE6e3Bx9uGpI5uXHJp6cQTfOvSmeVpT9PnY51585vV9Ny24ePffXgs9nni/p28Df5eTerXy4Onf/sNz9Gvf+vOPR0YXipce3jNhX/v+IfaJ6ht7v/PU+7Vr5hxoS/3w/NAj+2578pYvL73uof7tqVffOGd6UeMrOx49tO7xU8+9auP8/p3py27YqsyO3/3643fvHl/jsM67l791Jjp11Z7oji+Ii59Zoj+3YG7Hm33K2z/4zegThy/Dcy/qmvj71I1vi3M2PXEBpPjDD0+ZRbbNn3f6qbNm/Rf+NLbx

View File

@@ -1 +1 @@
eNqdVnt0FNUZD9IC0p7qQUpRsU73aCk0M5nZZzaceMyLuEDIk7w0jXdm7maGzMydzGM3G0yPpfZYi6d0pNYXtIWELOwJIQSo4X30IOk5emgUqCdIfdRjY9pTBKtWKYV+s7uRRPir+8fu3Du/+/u++/t+97u7PhnDhikTbUa/rFnYQIIFA9NZnzRwh41N67E+FVsSEXurKmvremxDHlsqWZZuFuTlIV1miI41JDMCUfNiXJ4gISsPnnUFp2l6eSImzt70wTqPik0TtWHTU0A9uM4jEIilWTDwRCgJxTCFKBFZKGogFVOLxehiCmkiFUeaRVmEatdInLIkTAnEMLCCXG6Kx1YcYy09v7ioDWeWLF6ODHgUiGKrmunJpTwGUbAbyDax4elugRmViFhxp9p0i/YxAdqyDZ64WA1mOfg1LQMjFQZRpJgYJiys6iAOAF0ulgm5c4QorYJEZMGdW+exEno6UNTW0iK6hF8+uwANNucC9AQIqrUaWFdakWl5uruzbFl1/m8iwInYFAxZz0I9RVRVGkOZElYUhlpjYhBMNl1VcScWbAtTGRaQTFVBQZOhIppuW7CC2IoIMkNtYkiRxa8AGapBAvltU9baJjmJkkuZBAwjQ70pYlsuEbxBPG/gmIwsLFI0paJ2TJkgJSVblEgAqRGLUghpnwbkcZQAJhMAkLJGJYhtQJnNODYYd7c6cg0DtjXTuugG2NGwZJwZgn+NRPrpK6oIYADK1GRdx2l/GXZa40ndofoQMlMW9xTIBhbdymQJW6ZACb8WC24Fu1u6kxJGIqTyds6tvRIxLWdg+gHZjQQBg+OwBvEhgLOrrUvWcykRR8HSOAWHQsPpIjupdox1GkSP4b7MKmcQ6boiC2nv5601idafPUS0m8v1r1Ou32k4cprl7KuEJIoieVkvcEzAy7CDnbRpIVlT4GTSCoJ8+jIFPjT1hY6EdiChs13C6cssHpiKIaazvQIJlbXTKJEhSM52ZKhB/96p8yC2aw8nWVJ1fbjsy2vhfAzHMv4904jNhCY429NH88Vpi7FlJGiBAIezle0TwFAydsY+bm0Voq28WtjBdARYG6201zaSpvzKImZVfc2aoD+/uVwPx6zK+oZYs89fKtevEttpLuQNB71sMOSjOYZlOIajO6X6qNFa1CE3xNjWsmahUlxRbHZ6lSDzQMeKZqvYz0YCvoqS0go+Yq7Aa5rWrhA7VuJm3leKxWpvhVwtFUf1QNRXvzIoljfZpXZ5bTNXvYyC7OyYLBauDq1iqtUHlkteSSqR0ZpyfyRSVhvtCjQ15C9fXRxa44uEA6vqY3akekp6Xs5Ps9kMg6w/n3U/A5PeULDWZklOT8jP7TCwqUNPxj/tA8ks21zf656D1/6YzPbmbZUrr1l4QW8peNI5UmdD+/N6qUrBorys109x+QUcV8DlU+UVdf0l2TB1N7TgnjoDzmoUbFg2afmkINlaOxZTJTc0+xHX7FBJN33o2zTu1ImJ6WxWTn8jXZO5lehI6d7MyaKJ0YY0uSsd1jmSdn28qzMuCrYoSrG4yoa7/D6Zx7YQ3ZddAn3CDQMJ0aoJ4gQDA9k3k75LwV5ZmmNpljvYSUPjx4qsyqBn+jt7NZpObwDEHr4eYJF2DJdo0p+uBnt0KsLAKhjWjX2Nxh8Ohw/fGDRJ5QNIOBA+OB1l4qnZcF7VHL4ekKXYxpr9nZNoWhadsXtg0Or3CiKQIy4UFFneG8IhP0YsNF8+CGl5+QNuOxSAxS2mTgyLNuHeMGQr4YzlqqjT7TGFPi7gC8JOl0GXFhRbxLU2X0rcPZjLKB3ubILE3SXL6RIkSJiuTfvPSZY2rS6qiJT8oZGeaiS6Us/8B0lqBJp0NNpXiw0ojJMSFGKL0CwN3AdcNUVNzr58MciHvFGOF3ns48M+uhja0CTbl7brdTttEimQe0xw9kq+Qk+B3+/zLIO7qDA/CGVK/1P5SV+m9b8y4/TdG+bkpD8zlZpXf3mWnf+X8R/2H9r6dM7FH1QNlfxrNPfZVKrqzNwqcfMLx2tH3ty/JTXn4kc996x6Cl14eej8xfD40dUzBEqYc/vPn1zn3Blo3PXR+5+XXT667vOB4NjGEf5S5X1Xz/VORA/+ejTvdquz6L13GkuKDlR/eO/EHQsWHBipsQ8/mzu+6fVH9ux4dDNfdy8d+DF718lZRwdn3/HkiU1jP3vOmf8hJb71KHdi9ZmWv+0QLt/yvTs/fccTLhipfWzwi+MLG2vw14bNNxIDhZ88WPWrz3a8us/eWjf7QunSeQ/t/PPwfcmTh9T5DUu3vXzzv8VXLv0Dfdy+pOH9x7+4bdexwvjltRM9O+etmEs/P8u+8P3v/ug5btbz499+c1H38oIhqe74lmNPPfzN4YoNi/bnnXr81LeuLJA2LpsTv3XxWyp5mt/9p66N53hy7O65Q5+dGr9r83tbToaHP307flbklixqeOZKy1/Ho9QvaoSJdwcufeOJ029gp+x8quClhHd/fKI2fP9F7TcP535wovy3V37PfH1kU9Ou2f8csEfzvjO6aTx12++Y/4Q+2baw8Jkzgy/8d95D4sIXT1ePLjnfM0E/cfD+naduWfpS2c3nXk+81lP69/m79gw1nXh30eGbcnKuXp2ZUxweeWTDzJyc/wFGZ0j4
eNqdVntwVNUZjwpTsGUmZVqwlhnObLUMae7u3Ud2s0lDm2QDJiEkZBNMeBjP3nt2783ee8/1PvYRpFVIYawZ6nUs1pGMQkJWQxrloSLPqY+Kji0gpUyACjq2MBbCYJvGsTr0u7sbSYS/un/s3nPu7/y+7/y+3/nOrs8kiKaLVLltSFQMomHOgIFurc9o5CGT6Eb3gEwMgfL9TY3hlj5TE0eKBMNQ9TKXC6uik6pEwaKTo7Ir4XZxAjZc8KxKJEvTH6F8+sztf1/rkImu4xjRHWVo1VoHRyGWYsDAUYsEnCAIIx4bOKphmaAFfHQBwgqPklgxkEFRXKFJZAgEcVTTiIRtbhQhRpIQJTu/oDJGcksWLMYaPHJUMmVFdxQjh0YlYgcydaI51q2BGZnyRLKnYqrBeJ0ljGFqEWpjFZh1w69uaATLMIhiSScwYRBZBXEAaHOxzoA9R6nUwQlU5Oy5tQ4jrWYDRU0lK6JN+PWzDVBgczZATYOgSodGVKkD64Zj3bo8W16d/5sIcDzROU1U81BHJWrKYpAuEElyoladgGCibqtKUoQzDYJyLCCZLIOCuhPVKqppwApqSjzIDLVJYEnkvwF0ovsFkN/URSU2wUmlYqRTMIwI9UbUNGwieIMjEY0kRGwQHjFIxnGCdJASiQbiKSAVaiCJ0vgUYIREKWByAQApKihNTQ3KrCeJ5rR3q2LbMGBbPauLqoEdNUMkuSH4V0tnn76hCgcGQLoiqirJ+kszsxpP6A7Vh5C5stinQNQIb1cmT7hmEpRGOglnV3DdmnUZgWAeUvmwoLBfoLphDU89IC9hjiPgOKJAfAhg/T7WJarFiCdRsDQZhEOhkGyRrcE4ISoDoifIQG6V9TJWVUnkst53depUGcofIsbO5ebXg7bfGThyimHtbYQkKmtdeS+4nSUep+flFKMbWFQkOJmMhCGfgVyBD0x+oWIuDiRMvktYA7nFw5MxVLd2NGCuMTyFEmucYO3Amuz37Zk8D2Lb9rAy1U03h8u/vBHO63S7nYFdU4j1tMJZO7JH87Upi4mhpRmOAoe1jR3gwFAisUY+6+jgoh0RuWIJ5sOh5mXtVa1mpJ7G7uf9qXRzkmqL/XX+h+qlpfVLvDoWEnFRrmHcAS8bDAbcgVLG7WSdbqebCabZ9oaS6FK/rsbiVZUtDd7aeKXikWKiUwo3O5cvay0tbfau8CqlxgpzpV7PN4VqmjihvjEe7IzLPl+sPa7WqdF6T+eKFUvq2gLplZ01VZXlCLIzEyJfUZtqq1uqao3L1dDiYLqOSiGPHGmoCbcq0Ta12lkTiC1Z2mKanppUfFJ6nlI/w+Yz9LO+Utb+DE94QyJKzBCsvoDP/YJGdBV6MtkwAJIZpr6+3z4H7x/N5Hvz9sb6Gxae0x8CT1qHWgSzGLEBtIwmkIf1+JDbX+b1lrFutKShZag6H6bllhbc1aLBWY2CDWsmLJ/hBFOJE36w+pZmP2SbHSpppw99myEpleqEyWdlDbUxzblbiakN7cmdLIZqMayIXdmw1qGs65NdqSTPmTwvJJIyG+zyecUIMbno3vwS6BN2GEiIkXWrz+tlh/NvJnw3CHtlGTfLsO79KQYaP5FEWQQ9s9/5q1G3+ktA7H03AwwaJ3CJZnzZarCHJyM0IoNh7dg3aHzBYPDgrUETVF6ABEuC+6eidDI5G7dH1vfdDMhTbGf1odQEmhF5a+QeGHRwQcK7cbCULQl4ON7DeTx8gAtGeezxlHC8u+R1ux1ywGIXU6Wawehwb2iikbZGimWcsntMhddd4vXDTsuhS3OSyZOwGQlRew96OVLhzqaYf6l6MVONOYEw4az/rEyofVllQ231q23MZCMxjWruP0hGodCko9GBMNGgMNYgJ1GTh2apkQHgaq5st/aW8oRESqNcNOgOwk+AqYI2NMH2te367U6bwRLknuCsPYK3wlHm83kd5XAXVZT6oUzZfyqPDuRa/9u3jc1/fEZB9nNHT/hq/Cxb+NXokapzT0tPdH+J7hl7bfOJYSz1DPc8Oa3vvtiqx4oXPrl6y8Yvr76Z6Cxvv3PsWz8a/duh8VGjbFr3IxcKZz+4cKzYuzv533+a3utd4188fGTthb6MtfWYf9x1/tWR5KJTx5mif3dteP9I69gzf+4JufYOPhi1uorbNp28duEf0bKts+rRyXlk+Za7zv9wfEHFc2dfGZ21se/dFz2n1m/b8ETx7VUzgk9df6Pv48/nVnkuHa7xGJt/PAP3hma0MY/PaLrY/a+W9CU/OvleX5G0c/V7cy68suW7fc3+N04lhd5nK5cv7Lt44tg7m+Yf+OTY5p8dvvS7L86fzbgeHl3Z9f2xD5/a+PHqXV0/Zw5u3jQ30v3I9vb2+H8WnSraLfCh3iPPo2kvNHz2nd+6Lr+JZx6Yd+UXMz+anToXO3Om8bm5vWenle+subh55eWhLWq3dwunXjnYOevdyNbyosaT5W//Zsy8r/eunadHI/sLv33v6rvb+4cvz956euADp1E3/egxZXbPqWeuWadXtUz/SPW9+PnzI3vnlC169IG1RWevtY/f2fHWifDyB8xDa3p2fy8dPiP94E+vH2eQYH515Nn4udaLVffO30YL/ng89Ym16+j08qv7ave0/fSt8Q8+Xfj0xj9c6X2n+Ne/qjt+908+zRTO2/BLqO/163cUsH/5a+HMaQUF/wOC7Fya

View File

@@ -1 +1 @@
eNrtWGl0FFUWDvsijDiDMiKMRR+YBEl1urp6SRoiZF/IQlbIgp3qqtfpItVVnVo66ZDAEWEEEpUWwQURh4REM5AgICAYHUERZHEZkUHUEdDhzKDoIEfQIzK3qrtJBxjQkTlH5/B+dNd779Z9X937vXvfu/PavEiUWIHvtZblZSRStAwd6eF5bSKqVpAkz291I9klMC3TcgsKmxWRPXyXS5Y9ki0mhvKwesGDeIrV04I7xkvE0C5KjoFnD4c0NS0OgfG9P2jAbJ0bSRJViSSdDSubraMFWIuXoaMrERTMRXkRRtE0yGCygFGYh+IZSsIYSqacIuVGWAXjrNBjoZaORISxIOtCmKDIHkXGBKcqo3chiokar5cFu5sSqxihho8aX2Er50Nvdj9VVFR0d+pDPwWK6GW9iNE602iOAkT1WI4K4ce1eqwA1Ya0JlSigHbWwbF8pRRT4BEUCUlYgkOgxOBilAgWkWKSXCzHwGP4HJYKk1h9OV+P47hN/Qm1sI6t3ob/+HbxpfrLFF7WfticTcWJYYYA8kAL65DwmC3qsdwa+MZ0ShTBj4kipfDMNezppjgUcpXRGK79kkZcZU4DYtUbzaomDWcP6bAOoeGU9Fim4OJVhAyHfFhUKieAc2gEI2xlpYQVuigfEsdjSYpb9Sy85EQaUk0RGfsTcFoJvTGWJIM4jf8Bp2ZPVgKgWRTLU7A32Koqlkf8tfgZjtNouZbNrm7PONWgAZzkNe1J0WpgAaCU7MKisljOh2VTPmwaQtx4LFWRRcRx6Mo4SfNPsKeZ1BPYRb+brsHP6SzHsZQbYPKiD0sASPwP5+dVcV7TnrF6Q5CfVwxW3U9pEKp4CJcQS0RMi9YQdKOxGpGVkRYcp/kgdPMYLTBIC6y8VAOSrNwdSpMFPlLGIMwpEOJ8V4jFvA/jWIdIiSz4TAClImimeMyhsJyMs3xoDYGHeQjbwdit70aZjySPABMMKyJaBm+DWhVcCDC8SoetjXhBqXRhLO8URDelCfSArovGdKLAITV5SD5JRm5dQzTWI6dMhzQUGUgPtCACnwJqHEiuQarBIByrSJ0QVcO1qWbUNcyEETcYjFOHKj0yTurNuKyIDkGV5WGUgH8JaEq5oeOkOAnBAODwQPoEQVWXQW9VxwSBs9MugaXVsdk62efRFnIqvJZmVYUXn1UBHhKNKuDRbGoXkYezU5Ksa2gIagvmz/9aEcgxSKJF1hMU1SWEHCi5YN/psSJJpQ6rOR/VIloBKnlCNHK7wW6whzN4NedKLkHhGDArUNBLcSxziSBsIheYW5EgMoZ0Clw0JglwpGDdKp8CyRtmKIdDRF6WkiH34rCXqhAmKWqWlzFGAElekDFOEKp6CDoQcAQFFwBJYCPQSAySRWOKh1LPD3CwkTS7eEQ4sIgyiwJdoKDo054usYq2YySe9XiQrFpCVDQbh+wO3oclA25Rz0nAbEb1TFDhzDBRwTELWA+iDTMb2tTjCUD5KGJYi0uQZH9HzyNUp7rvgHGQYAQGFvCvq6xjPdEYg5xAYdQOFOeR5mR/exVCHhyM7kWtgbf86ymPh2NpjesxsySBXxvcEriK5fLpdpXvOGwGXvZvygUQCRkxQS4QerNRb1hfi0syJBUOYgHOUYCnNeDg7eETHoquAiV48Bzpbw283BEuI0j+NdkUnVvQQyUl0i7/Gkp0W0wbw8fB2Co9/G1J0y5fLjjZvRxEdYPe9FwPxZKPp/1rtK25pcfLSBZ9OC2ADv8fDa00EIpF/sOn7XbaaXe446v11WaDQk1VZs0QSmJzE/RZxflFFlNsaZonzivnFk/3lpKmZLY4i6nCCasxzmI0WKwkTugNekJP4LWuYqdoT6hmp3sN9pRSOpfJTJRqjZxFn16dWSonmgwZZjI7KTnbkSFloqKSWZlM9VRU6iCTEZNnzGbzXIlOj9lJFk+1MGklSrKSVlBK5E3EAJ3iZZn4HGuWPs+dnuoyulxJLFWUZsrISClw1plLpsem5iRai8iMOHNWsVfJyAuDZyRMuCGI0GIwxRrU1hHiBuS1StnlbzGSBPGMqIVqCd3XCjaTFWlei7oR9u1uCx7fV+dO7ebwbS3JQEp/V6EC8Q9OZLm0jBkNRhNGxNoIwmYksLTswrVJwXUKr8jB5wpF2KxO4GFKiPNttEvhqxDTnnRFtncFchOu4ofAjaNajyAhPIjKv3YGnh+4uOAZyRsDWwsXxEqKZ+u0Zf1dGu1r6mprGFphGJe3xm2IqzORrAMptHNT8BUIFOoyAAh3S/5mK0F2BGdCxGuHbzXghAE3ENtqcYj8iGPdLBhU+w3eniR/ixmsvfVyAVmoQrzkbzNp7jC8FC4hwpGH5dW1u9WY4uLiXryyUEgVCSJmq3VbTykJhaMhjG5p6+UCQRXNFre0tjYkjrOM//BY6NgNjDmOjEMOi8lhNjOxNG2xmAirlXEaHIQ51mp6QQ2INKhRvekRRBmXIHPAMcTnPxztpmrVKBNPEmbSAp86EeI0zSkMKlAcyYL6EdJEzANZWqCYzqRUPImiXQgv0Ajob0suyUnIzkjaPAMPZxKeq4VqmOcFCNNOZ2sBEsEz/naaExQGwqWIWkFXfkKJf1MsY3FYSYOZYiwU6Ygj8UQIRCFtF3nXosbaNgoONZKX9m90kfE6m8lE6iZCNoqPtYCftNvsva2B4P9ar4N3Ng6M0FofLn9m1RHD8IazncPOjhs9dszfMs4X29675Y17Njcl/LpJZz8yC2Us7Xh9w5ILd3Ntm1ek3pz4QbN04LytX9O9TWMYzJij2zLryy+/iztQnPNEs33h5Du2fGN5/Pz7+3YtrT1jWzBnj+H+EcPP7Bo5eGnZgrEPdFBHl/xmefvxTz+ZQ9yOFx8fvDu7qb1lzfL+fzlaP2XBxMdecU163P7Y4/7e8fO5Owy7/vrstr2jlKa5Y5jm78vGpn3hWzzMsuq3fSoLb+91quvMq+aEl/steHv+tqyyEZlKhDE1Zd6EgyMPnXjXt+WBY5kpKafJZWfOnfuOWdx54aXOE5OX5578xnbq3MF30srGPTK81DT0nx9nT3rRSA8xvTmhIz7fKX/yVvvC2dserhiydWrjKPOyf+xkB7x8k/z66PkTFy79fPPZJY7ON+vsW3d+NGfy8F0vbM+dGLtz4krJc7Lri017qPzaDv357a++uX/KIxnryAFzD9rTW5bf3ThKtB8c9FTTuq6pq97otbhmY9+TWfx43VeLtw1/cl3r87/btP291ccP9xubVK7MOLv3+6FdUanHNoz0brTPaxxyfLltsO21BlPXuLwRUcq3CeO2Rt07uW7HpMyWnLdLIlegIQdemQv+u3ChT0TphSXHGvtERFzPwsdA943Cx43Cx43Cx43Cx43Cx43Cx8+98NFTF69wXJgIGIqFOwGvlRACdQ2Ku1o5gmW0ezQI2RVncSUrJyVlpQlKel1BprnQmpZpzmR+TNWCEivBUZAn1PnZ5YGLdjl0ynWMs6wsEjJNZDQWqeaLyJkz9erXR40v1zXo1Hv3JVYKo52aoLBQoinnA124zmkNWKk
nCCNpjCvntUR0sd8tE25G1Sw9zGP/IUa4bgWnG+WFG+WFn295gTBbr295wfh/VF4wkb+U8oLZ8j8oL5BWIo4GC5MEwLXCo9XqMNGEGRljY+No0y+jvOCwMgR9HcsLvrDyQt5efodh2IsnJhTf3SezdufQRe2PPtW+Ob+oaHfs1GPHavHZeb6FcztH7Wie13gz+dS/9p1bEjFwZeP4gUWZxcvIzw/vnRTzYeS6xi7vV21n874mDn9eeeTQ0Y13NBYXvnV6wK2rikanLCN3P/HZouLfjaPl/akf7lm/4levl2QXPtK8aMOO6u2Hbt/kO/7ouzkbTz2dt7zl1v7Egs6+ER9PW0U076+e0Lpk84H0BWN2937tjDhoSq3xlluMfUfPKL1t3YRndxvft31dtujp3bNNxl1/GHc6ZnW/Eix9xc7Mur6jp9hOsH+asMflXeLdsPbIyQ/q6srd85pWfrbrsf6T70kcedeTbch17qaX3im+Xzya138DMa1CXO18aNCfa+J3/d0++eikogdHZFVtenv95H05c45tu7Bs/7dD97z6zgfPpN73Xp+XUx6Mfn6QVRo8rmjRN0VP/N766ak7G5jqu55xV5942JSj+6w3/fwMtuzpmnsO3Vn4kC/r+2AJYJ1/zcpVvSMi/g15gOqZ
eNrtWGl0FFUWDsIIDo4yRw+DB8Sy1QE8qU539ZpAlOxkD+kQwhKT11WvuytdXVXU0p2G5KighhlA7ADuCw4hGWIIIHEBRQZmUDTgegAh4AJqUFTcEBeWuVXdjR1hREfnzDiH96O63nu37vv63u/d+96d3RbEkswKfL8OllewhGgFOnLz7DYJz1CxrNzSGsCKT2BaykpdFctUid19rU9RRDktJQWJrFEQMY9YIy0EUoLmFNqHlBR4Fzmsq2lxC0x4z6ADswwBLMvIi2VDGjFtloEWYC1egY5hiqASPhTEBKJpkCEUgUCEiHgGyQSDFOSRUAATtYyn1khobQKWMMGCnA8TgqqIqkIIHm3e6MOIGT3GqAg1AST5GSHEjx5Tmzad176KPmtra6MvDfGHS5WCbBAzeqeM5hAgaCBKtCV/WmsgXLg+rjXDi6PaWTfH8l45xSUKqoxlIsMtICm2GJLAAnJKlo/lGHhNnCNyYZJomM43kCSZpj3iLaGT1pBG/vR26qOG0xSe1n7cXJqGkyBMUeTRltCxwGuxZCRKQ/AfJyBJAt9lSkjlmbPYM4A4HHcVRSVq/14z/8CcDsRhpGyaJh1nH+mEjlnHKRuJAsHHawgZDoeJ0bmcAM6hMYywXq9MVPhQGEtjiCw1oHkWPvJgHamuyOL8GTgdZiPltFhiOKl/gVO3JysD0CLE8gj2A+v3szzmz8bPRJyU/Ww2+2F7pmoGjeK0nNWeiNYCCQBFio8YXcRyYaIYhYkyjLkxRK6qSJjj8JlxWmw/w542i9FMnPK79Sz8nMxyHIsCAJOXwkQGQOJ/PD9/EOdZ7ek0mmL87BOkos88CE88hESIHxKhR2QIrMlESGIVrAfBsjCEZ56gBQbrwZOXQyDJKtFwmS3woxQCwpoKIS18hljLhwmOdUtIYsFHAiiUQCviCbfKcgrJ8nH9Ag/zEJZjsdkYRVeOZVGAQYaVMK2AZ0GlBioOFD6jE9bFvKB6fQTLewQpgHSBPpANyYRBEjisJQY5LCs4YGhMJvrki8mQYkZFwz8tSMCdqBo3VkJYMxSEXg2lByJoojbNfIbGahgJgKE4bcgrKqTFaCMVVXILmiwPo2b4lYGSKAAdD+JkDAOAQ4TUCIKaLpPRoY0JAldD+wSW1sZmGZSwqC/kUXk9hWoKT71rAjwkFU1A1O1ZI2GRq0GyYmhsjGmL5cZ/WxHIMVimJVaMiRoy4s6TfbDHjMQkWaMMqzse12NaBQqJcfoEAmA32K/5vJZTZZ+gcgyYFagXRBzLfE8QNowPzK3KEAXjOgUumZAFOC6wAY1L0eQMM8jtlnCQRQrkWRL2jR8TsqplcYVgBJDkBYXgBMHfR9CNgSM4tgBIAhOBRlKMLDpTRKSdDeDQIut2ESU4jEgKi6NdoKAU1t++ZxV9p8g8K4pY0SwhqbqN43YH78OSUbdoZyBgNqN5JqawOkFUcNcB60G0sbqxTTt+AJQ3koa0+ARZiXT2PR6t0vYcMA6SicDAApGV3pmsmEww2AMUxu1AcR7rTo60+zEWSTB6ELdGv4qsRqLIsbTO9ZQ6WeA7YluC1LCcPt2u8Z2EzcArka5SAJGRnxLjgtloo4zU6npSViCBcBAHSA4Bntaog59KnBAR7QclZOyMGGmNftyZKCPIkeXFiC519VGJJNoXWY6kgN26NnEcjK3RI9KWVXb6crHJ75aDCG42Otb0USyHeTqyXN+aT/T5GCtSmKQF0BF52NRKA6FYHNn9WU0N7alxB9LzEOPKLi+ZkjlJdRcK3smMvT5cHhKkXHuBfUYhV1SYZ5GRL+hnAzmk2WExpaY6zA4naTaajGajmUwNm6YU2zxFdln0+jMzKoot+f4MnuK8rJFzlRsnlkxyOsstlRbeqVSqU+VCpiw7p4z2FZb6U+v8AavVO8UvFoieQqqusjKvoMoRnlqXk5kxlgB0apBl0vPrqwqKRKl0opidmxouELhsKuAuznFN4j1VYpYxx+HNK6pQVSqn3p8Aj3LaSVMMod1kdZq01hnnBuQwr+KLtFCUw/ZXSQ/VMp7TCjZTVHl2i7YRtm1tix3N/1Ja+B2Hh7ZkAykjGyp8ajJhchAlQpCgTJSVMNvTLJY0k4XIK67oyIqtU3FGDq6pkGCzeoCHOXHOt9E+lfdjpj3rjGzfEM1LpIYfAjeJ60VBxmQMVaSjiiyPXkrI/Oy10a1FCpIX8exMfdnIBp32oZn1IYZWGcYXDAVMqTOtFtaNVdrTFfsEAoW2DAAiA3JkmdVp74zNxInXDv/VRJpNpMm8vp6EyI85NsCCQfVn7GYkR1psYO0nTxdQBD/m5UibVXeH6ZlECQmONyyvrf2dGmtqaurTZxaKq7KAiM1pX99XSsaJaMxUQH7ydIGYimX2gNxRHxcnWSay+2ro1DgoJ2Wj7B6PxWZmmFSPxWR1263I5jQzJuR0W9ZpAZEGNZo3RUFSSBkyBxw/wpHdyQFUr0WZdIvZZrHDXx0LcZrmVAa7VHe2oP0JeSwhQpYWELMqK5fMQrQPky6dgJG27CklGcX5WY9XkYlMIkv1UA3zvABh2uNpdWEJPBNppzlBZSBcSrgVdJVnTIl0ORmM3akmtxV7sNNDO8hMCERxbad416LF2jYEBxo5SEfW+izphjSr1WIYC9ko3WkHP+k31Ztbo8F/S79vrpg3KElv/ee7qoU9pksbX1q1+uPB9w2Yu/i8q4R97R+hQ+2BfV1Dy5j7C8Xe1zsbX1tw5YlvFw9o/tvdXeOyd6Zf17s3Rej/7Pg/DjEP8ZcY1raHjrx/9EDTQsfbX53Y//aYxhtPkiPmNR06NPzrY2kb5vCbTN15lyq1R259viOzOXfyanHrBYN3iNSIorZFn+xYPYBatPRa34rlb6y+/oOJH9/Z6hiUkjvrvT0Tbh5psA8eRnPTTswboc7eSLjnfHxl88NHx+XM/aBw/s2W9rlJR2+Yuf/epT03bfZlVreslJdPTbpsmXvTkrr5jwg9aY4F+wu2vLQsFPrm86M9KTVvMte/9umixff1/L70NnJjwUy6+5bKdUPVOVPrL19Ysm1Y/+1rm/2OLnta0DdpW/oDE0zn710evu3eP3+Bxja1jdjnueiVEc07uJ3rCta8hh94Yi/yrjk4+MDeeXfJr2+KvGh0hm7o2vOOuGRW55fHe54LHRm/OH+lpfDGV2uq8snjacGtjVNG1VOPf2ZsKBi4daHngp33bCsvem/gB5mfPDp1qTS3qdr/8ifSxc1Xdk7teL9lVm5KyR1PJjdVH77rLfeC/KzAl/N7/2QlCd+k4yF7b9cjLywvOZn3TL9bxhfe/Y+dyLBlxpbttqYtrmO7HlvfjTe/5XjQNG9yedvBjkVH6EHBTVeAk0+e7J8kl0fqBg9ISvpFKx/55yof5yof5yof5yof5yof5yof/83KR19dvMpxCSJgJBYuBbxeQ4gWNhD3Q/UIltEv0iBUE7TlVfDlVaWuSVmBkNc+g2XDpTbEWH5K2QJJXnAS5ARtftb06E17OnSmGxjPtGmjIKuMSiZGablhVHW1Ufv3o8dMNzQ
atIv396yUQDEtGRHxpDKdj3bhPqc3YCBcLikLlTqd15POqf53Molm1MzSxzw1P8YIv1jF6Vx94Vx94X+3vmCmzL9sfcH6f1RfsFC/lvqC3fYfqC+kWs1Op9Nuoiw2CiNPKkMxDEVRDg+y0FaUin4V9QWPGztsv2B94f6E+kL5prrh5iEbDk++MK3n2eFjZxyYdtEYcvniO8cPuiaZZoOTV1Re1TLlXutXb87beXsoy7U1pyn8+cjD6dvlfks7Jk4rW5/zSrX4+SOHbxz5zBvDWp965FDpfT0Hr2vrSfnoOaHh0wOdlWu3vbWvPxtGpDW0KVL15cCqvMyHO6byld3d3cVfo/Sky4dL9+wccM+qob3y+wdu7+3+7MNjW7nddQV1g68fnHRT79vbR254OO+xW9/9EL86seD1O/Y8eGdSBXPXqN+6Hs1/ed69C5Z6Ut4Z2OMgb35v2PPo8d+NbX4ze+TqK8cLS5+dUHtJ7aDIFYGmzI8vuHPeo+OmFV1z1Y7GzSOeP/ri5b3fLsp7rnju04V5x6yrVj6xYqUBuR86fvXbS89XapL2Xlz9xar9K4asG1x+wUnDQ7/Z9Yd3M0oXHFnoW5c2e/NsfvOeDq9wcNJtA1YWLrIvyd3XfNkK+sjrw7svWXp3y4ld26ftaXpnp+Nw5Ip+K8cMuufabKpyY8m4zpwXdn298TFX9xy05MHmssk7djouzer6e+OFD61LOXFetAIwttfeObJ/UtI/AbzaHWM=

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -1 +1 @@
eNrtlglUE2cewMNiXWvdFteT1uoUsdWWCZNkcoGoyCUiBATk8GAnM1+SgclMmJkAifYQ64laR6y1Xm01EEEExQNEsbZ9nvXc5wXaw1Vr1fWoLWppa/dLgAqru+91V99rd528N5P5vv/8v//5fb9Cdx7gBZpjfSpoVgQ8QYrwRSgudPMg1w4E8c1SKxAtHOVKNCSnrLbzdMPLFlG0CSHBwYSNlnM2wBK0nOSswXmKYNJCiMHwv40BXjUuI0c5GiomB1iBIBBmIASEIOMnB5AcXIoV4UtAGk+LACEQOETSAkAEu9VK8A6EMyGiBSAmjmG4fJo1hyDhNqhVQAgeIDygAoKQAJ5jgEeHXQB8wKsT4YiVowDjGTLbRBTnUCvN0h5JFo4p4FMQeUBY4YuJYAQAB0RgtUGnRTvv0YTJsVfdFkBQMCRvuSycIEqVHZ2sIkgSQN2AJTkKmiWtMztpWxBCARNDiKAcusECbwil8hwAbCjB0HmgtOUraT0BfaBJwjMfnC1wbEVrJFDRYQP3T5d7PENh3FhR2houOFjSAC0Jjw1OdMCUsIhCrlbKsfUFqCASNAujI6AMAY0qtXnnt7WfsBFkDtSEtqZbKm35uLK9DCdIJfEEaUjuoJLgSYtUQvBWDb6x/ThvZ0XaCiR3ROL9y7VO3ltOJVdgcnxDB8Uej6R13keI905zNR2UAJF3oCQHdUkfYJVtwWIAaxYt0mqFVrmGB4INFhqYVgo/E+1CoQsmBhzY626tuFWGuLaMfi7r44qESZLqU+ww90olYiBFRIkpcXgLUepD1DgSE59SEdG6TMoDc7IhhSdYwQTzEtVWA27SYmdzAFUe8cDs13uyD73xmA9LFgUFNk4AaKtVUkU6Oral1dDYyI0tpYZyvJlgaad3WaneWwb5zoJ8irRTlCUv34rpnbiKNgI7adrU+omN5zzLQINQqyCtVuLKytaZthyUQ18xVIGhmKKuAIVVDxjaSsN4eu+t/S5ILjWGYbX3C4hcDmAFyY1j3mtHewkeWGHSPGvfU4Pr9frtDxZqU6WCInqNvq6jlADaW6NQWoXa+wVaVazChIqCNmmUpqSGQPiSRakwQkGqtRotqdZo9Zge6JVGjFSYVIRCryf0W+FOQJNQiyeZNo4XUQGQcHMTHVJDkJUo8DRdmEqhVmmgp6EIzZKMnQLJdmMk5/FBCEVsPGA4gqqKiEYjCNIC0GRv/UnuyIyE8PjYiPJkaGQEx+XQYGGjj29WFmnKMlrDmGSQEZcXkZuuHadyponAkWNPyI7NYQCuMWpshlSjis6kc51aY04SCutbr1Hq9HocVcgxuUKuQLVObb48ISPbEKk0J6lys2lNqlKdHm2OcoxUshgVYSApR/oolSrOnpEwugBLs8XGO0maHqPHk0eNtRi06vTMLHWaOtwak5QVIURa8YKMCFM49IYQLWHBoQisTRrGN6y1Q1DYIWhLf6jb+iMUobwxCJN33BpDkVHwXDCwjCMUSfYEE8AnYQXJcIMPS+BY0LAIxsCeR1Nh6sw4DRUerTWLGUmj8ewEwZzLjFampsZrSYNzpEURHu+AhR4lFOTGtguCQqVHsdY4aDBc563Ce6b/h1ZtSUfbNzxqsHlPLsnNcgJLm0ylyYCHDSSVkwxnp+Auz4NSmPOx4RnSJh2lIVWUyYhTmEJnIlXoSLh1tmn7ZXtweY4IN8HAGssjpY0WVVhACI6rAkIRKxGm08B28h6TU0s9Ncmad/nkDijqIvNevkzSp+zHmN/2i6+MG/ZSapdZJ+eVB5eFfXciIjVqd0Ng5z1XGqOz59y5vf9Jv5dr72AbVaEbP4+RjenNPDnm2Jmi3Wuopluu1G+b735ZfTX/+72XwK3x7gGm4mE1t4Oe7prJJd4pvpyQSPZ0xta5p0ddU9TONzUYlx7sU5Qbg6e837V6/x1qZOzAzWOq8WWBl4L8D751ealrxJ7FN/x9ZH/fpj1/7Im76we5vnmSPqKf9ML4lTNlSwb18tu9oviQcDVj6ILCTTGfRWX5rz7iU9Nlh3qsrSRzffobx5BRUv22nplB+fNn3nC6j3Ufvrx4cPncks2hN83bF2XWhdTsff7S+uMOZQX5TPedaScLbld1Ui1+9pxgGH/sIxt+/eop28WpQ3NXDk3zXR5/7jmTY+rKuAvyp3d36qPuPU05T750YtV2cLSmuK6Onz0/4cDmQ2TxO2PTLPPe/qnPohffi+9Dzi+mt5iOvtDsK5P9/LOv7IPb18sW/EEme4i00+gz7d/gzgQWgVciz1F20oM9JpolmDbokbfMttyjCmhBhOXwCxHZbYjIQSiiBcTGQVYLaZFrh0UQIuBJxTAODyDBPQtCFcPxHbQmgHzEa05B2/dou6tlZCRjB0bA83SrWiN8/1eyLfcYeMqxXl5j7+kPglZA94B3nONp8wNcfeQU97nM7zHH/cY4rpT0nopSw83f+KH4CI6r+xhWpdP+Oobt/X/EsCql+nfCsArdf8GwqxXWB0KsSaNQqnV6QBk1KkyhBYSKUpFak05PKZR6Umt89BD7EOCIInSU+iHCUWEHOEr4pA2O1sT7vnpmxKwe5X79us/pZ17Wt18V4KYWNl/fS1vm7N+10Jy3W3XleqcRX2dWjFgzrvry8n2V168zNTsm7qgK/n5z8xTblZjtU5rr6t+d3jBz1OrC8rSUH9fWJR7TT6vY199yIUkZffLS0KxJF4ouVH89e342nnRCd6jr/hCrfbHhZsCyAxsvGRNrXL06byi82En25bbhZML4c+70jwJ/igrpFq4ZPG7PC7Kz+yIj9/qJf6EzXAPL9iobhzVvFjr7jnprZ6j7bdz0x/45R3r39L85ZMqmRiTFVNTLWTXzUsGHOnVTbUNn/6KGdQrfHcNL8WM6Q2rfOwN3HB03c8t3g30blgzG9Ec3/tDpy5Ivdt8yhJ8dap8fV6rddPRGtnN9j3MD5p4XNQvGBB2cdbjouwE11dGLG98stnYdlFrUtGZ54EvHD33x4+G/JRpOf3VgkP/yyFNP7K48Yzn9fv6kAXfN5/YbpbVD3/l29GzqvR9lLbR084BJte5h09Ly3xYtBSH5FpppgZ5/hqBfT1IEC38tChyw0/IfY9RjjHqMUY8Ao3CF8uFilPp/CaN0+O8FoxSPAKPUuFGppYxKAtcQgNRq9ZQO1xpVFKZVU0oNpv0dYBTAMB1Fkg8Ro+a1x6h47jTWrf6HNL+Q01HGvgf3rNt54lNnhjND221GkGpL5uyegSU5iec/fKZ6hS5qcvf3b9/pP2Df7W4jvh7HhK+ZFA1WDLnW/9q+kKZ1zd9caRhef3fKmaZbocu+OFh5OmzW0CG1juheh8uf23XCf+5S2vV2jL9iqzztwldbJnCa4yPS2aOBGReNSbmE+tKqCYtSN6T3+rTJdS3jlW7Gp15Xyt64fTZeX7ZLu2F61GvRQTOQ0J6n47q/cYP5cwDVJSQyNum8qmxJwF/9fjqkn+bjipkVA0nqqy79xx/pNOdPP/R7VtjZuf5k8sTmi0jZ9OiFgfpVMfiMnz+bUPNN7fBO70aXTFP
0eK3uvOaM/nD8kjtP7TjpisiavGjqENBNWtH3VHLnG/mnDp/9ZO/Hji3yE1LWgoVnr2Zv6vFtqol3vDgwhjZNmXghq/Y4Gl0WX9tYlTi26dSh/VF56saLK7My5p6jLw/bShUXd9njd/TE5PjnnRebCjbUr11rL9s/tvLZQdXJtwa0EJW5JH/Gh5Co/gGq074d
eNrtmAlY1FYewPFou61a0fWotWpkXW2VDMncw0iVQ67KISCIRzGTvJkJzCRjkoEZrPdRED81itbW2lpAUERRQVHEo1rvC22tRRTrFkVF60URW4r7ZoCCq7v9dqv7tf0M3zdJ3vvn//7ne7+PmTmJgONplmmTRzMC4AhSgC/8kpk5HJhkBbwwO9sMBCNLZYWHRUZlWjm6bLBRECy8p4cHYaElrAUwBC0hWbNHIu5BGgnBAz5bTMCpJkvHUvayvMluZsDzhAHwbp7IuMluJAuXYgT44hbD0QJACAQOkTQPEN5qNhOcHWH1iGAEiJ41mdgkmjF4It4WqJVHCA4gHKDc3BE3jjUBhw4rDzi3KRPgiJmlgMkxZLAIqJxFzTRDOyQZOIbDOy9wgDDDFz1h4gEcEIDZAp0WrJxDEybBpuQYAUHBkCzMMrK8IG541Ml8giQB1A0YkqWgWeJ6QzJtcUcooDcRAsiFbjDAGUIxNwEAC0qY6ESQ3fiVuJGAPtAk4Zj3iOdZJq8pEqhgt4DHp3MdnqEwbowgbvfm7QwZBi3xDvIIt8OUMAguUUgl0o02lBcImoHR4VETAY3Ktjjnd7SesBBkAtSENqVbzG78eENrGZYXV4cQZFjkIyoJjjSKqwnOrJQXtB7nrIxAm4GY4xv++HJNky3LySQ4LlFtekSxwyNxvfPm6fyl2aJHlACBs6MkC3WJn2EbmoNlAoxBMIqZuEq6hgO8BRYamJUNPxOs/MwsmBhw7FBOU8VlhL3TnNEKl55ZfjBJ4s5IQnBHMA0SyiYiUkwqR3CNp1TpiauQgJCoPN+mZaKemJNNURzB8HqYlxHNNZBDGq1MAqByfZ+Y/Z2O7ENvHObDkkWBzcLyAG2ySswbg0Y0thoa5FfQWGooyxkIhk52LivudJZBUrItiSKtFGVMTDJjmmS5jNYBK6kvbPrEwrGOZaBBqJkXM+UKfEPTTHMOcqGvGIpjKIYX21BY9cBEm2kYT+dvU7/zYpYCw7BtjwsIbAJgeDFHjjmvXa0lOGCGSXOs3aJGrtFoSp4s1KxKBkU0SnXxo1I8aG0NLjXz2x4XaFKRgfF5tmZplKbEsgHwJU5OkbhOSWBSlRTTADmp0ivgBWSAApRCKZNthzsBTUItjmRaWE5AeUDCzU2wi2XuZsLmaDovGa6QKaGnWoRmSJOVApFWnR/r8IHXIhYOmFiCyvf1R30J0gjQSGf9iTl+saHeIUG+uZHQSF+WTaDB4nNt2sXFkfo4ndmL4EbrZMHhxmgrFWGUJOhHJ9nUASHS+ORJZpM9Tm01J+PKYOWkALvMG8VVMhxXYzKVCsUlmASX4Gi00Wekzm9SYiBPGM2+ClZCRceMkUTS3gafBEWibJSKoqEIFSUP0ihNATZFaJB+kpHyx4LjjbZohYFI5hkJJlOooDM+UWNjQu2GYI2QBL0hBKOXhxaBtUnD+Ho1dQgKOwRt7A9Fc39oEcoZAy/Jo1ujFgmE50IYY7JrkUhHMAG8E2YQCTd4r1CWAWXpMAbWRJryCh3rR/vKSIYVLFYhQRWCjw2KCowCgXa7jPKNeSfYW6mnJL5+uNmY1CoIGpUUxZrioMTkamcVtpj+P1q1dQzauuHRMIvz5BJzGJZnaL0+OxJwsIHEXNLEWim4y3MgG+Y8wjtWLFQDDJPhakpKwJrSaQjUB26dzdp+2R6yHEdEDmGCNZZIigVGmZebp1wuc9MiZsJLrYTt5DwmZ2Q7apIx7G+zqF/aX1ycV7v5EUeZcsy1pHpIj6HjZlRuEy+tKK/oULO47Yj+pZXBFXP/0bXy9fV96i62O9JZE9i2cu++Yycs+k0nX5juti561bHotF4zh5wddmGhR3FRzZQpxh/ZhXH9ftqnX/tpQ5+pN1d+BF5gGzLEtOsnbru+8cqmYacGZvQ4tsHKx509en0X7a+bmCY5+iNyeebRGUOX3ovf9ea67LwPDD8sOzwxYkjdBBeXoQvZE9fQhpgha/Q9qT3pwsJ4a5+24eIE//TQ1MVGj3ilOHfnqn6Z+b6W5cMD2pe9JD233cuvcHjfjtoFSUV7Uy5Irr1Rcetj7XcHE7oenJJSEmRomJdU+Za5PMVz08ndAy+t8q8d41Lfq/etd5eG97/rmtV9qs+nnaovHw4NmfzJoQ6D3M+F3K4nahfEJoySbxcKMFBYlrEy/cvEH050vei6P+RyUWzFvbfPDLw47cBUt/mfI8Zy11OL7ReSXk9bvvB25qbqeQf3DL655NvSxMrJ8/ufvta+uDS694Sy5F1X40e/kdYThv7hw3YuXYIuFrm2c3F5ijB0rk3Bf6Ch8QzSdIVzLGUlHWSkpxnC1MxFkhaJlqcRNpoXYOX8Ak9WCyKwkJ9oHrGwEOs8W2RbURRkDniwmUx2B0/BLQ4ymInlnrhCKEhCnGbaWutCW10toz4mK9ABjqObltHB91/7puUpAB6YjBP9mJY13aGFMAzAOc5ytOHfhOSZQ2GFi+tzLPydYWE26TxkxbK7v/Mz9hmcfo8hsVym+O+QuMevILH6T4TEMqniD4LEmPI3IHEmbn4iE2NAo8TluFSBQa6VqqVStVIpl+mVhB6o5YBSPXsm/u2spZGq1SriKbJWRmvWCkk4j3Ur+SlmqLB11pYd1Ac7L6d3fXNOaU3EPHG1YVzqrsFeSybsbbjvll9QUdotuYvxynHsXlZR2+n0/to9lUjguYm5OwLu3azt8OGg4qK4frvv528dVr9rmmf9os71c5bGXio3pNdsKBYrXpyVtVP7pTQDObbhvR1X3ovJL9qRuvHdzw5vfjt8y52lZ0Jj1iVvWXvdvD+pIPn2WJ1ucl2si0tJiOHEtW59v+l7/W6fTcsXXxVvevi6uCcv6TIw4nTq7L9FS7JqFpSGnLrlgy0f7tfe80V0Dh6Qsma4dlFF3Zi91as6+ST4m3B/r7Cqn45vob9gRrr15DNx2dTiLPmuzqeK6CsetpyVob5/HTXgFpfh9/bGVK/uLruzzfP7vbLZdWenrxZN8/6gE/ugbAHKblz0nteSyqDlL1xkjoSdf33OgMKgmHIq4lCbMxnjoz38V//cfdR4a4P2mKGiquLcDw/J3a4ROiQqJGVNYf0YW6+05dOu7j4g0X7StkAYOXvrtcJqfO6akw8zO4RcmDf2zpns7/LU3ac10Vdq7CsV6NOmr2O/b/pyR5KMtKkRmv4Von47mREM/GtUaIedmvQcy55j2XMs+39gmRJ7ulim+TP9p1L+h8Ey7BlgGaHWEypSLtVr5BgkVDWgcFKK4wqgwgmNTIb9IbBMQZA63VPEsrzWWPb5vHIHlg3p61metlJRvWjcqqKuY4M7oKtfi03eumVpzewB7ArJg/tu2tSCGz2ScduV48YrQSd7uUQuvjk9VQwgPpbUflrz/vs/X7pTXZUUllFf1zBxpNePlrramqp39t3NWvV9lxEla75v3ysHH7y5Azr0bsrhLfHfr4sfV6Eq/xAbMGpQ2be6UZFE+uj92ee/GlHW7UxtXt+glzuaXrooc5le9WAFNzZ1d9mrR+pnvtZxRudDiWf7u1w
JGjT7jmtUynn/ZflZSO9TJ07dP0F33OPlM9Tz79u+Drj9pnesa9bhqtJrL2tr3l1ZdXraq+eDr267vMzjapct2XFJRQdtN3L2RI7W0lMOhJ09/9GLVWs7GBrm3T+Si23ta0dWSOcGLjh+4612dfgtQ/KoDzVhB/YrT4cVHr79WeyNb5KHDImdcChQrO5+8esHvZn1Xxxa+qrqJzAzrcvl3PEes8IiTnUSLwhT+1xBb60u2rzv4Tf5s12vuh352p4eXOJessA+kq2NI8/2OTkjVj03Jfvm0j6l2vxlPb/csfWG7a11IRdi3LVgWptGQCuue7h2GAS0fwIz6Q8x

File diff suppressed because one or more lines are too long

View File

@@ -8,8 +8,8 @@ LangChain is a framework that consists of a number of packages.
<ThemedImage
alt="Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers."
sources={{
-light: useBaseUrl('/svg/langchain_stack_062024.svg'),
-dark: useBaseUrl('/svg/langchain_stack_062024_dark.svg'),
+light: useBaseUrl('/svg/langchain_stack_112024.svg'),
+dark: useBaseUrl('/svg/langchain_stack_112024_dark.svg'),
}}
title="LangChain Framework Overview"
style={{ width: "100%" }}

View File

@@ -17,7 +17,7 @@ Most conversations start with a **system message** that sets the context for the
The **assistant** may respond directly to the user or if configured with tools request that a [tool](/docs/concepts/tool_calling) be invoked to perform a specific task.
-So a full conversation often involves a combination of two patterns of alternating messages:
+A full conversation often involves a combination of two patterns of alternating messages:
1. The **user** and the **assistant** representing a back-and-forth conversation.
2. The **assistant** and **tool messages** representing an ["agentic" workflow](/docs/concepts/agents) where the assistant is invoking tools to perform specific tasks.
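A hedged sketch of those two patterns with LangChain's message classes (the `multiply` tool call is illustrative, not taken from the docs above):

```python
from langchain_core.messages import AIMessage, HumanMessage, ToolMessage

conversation = [
    # Pattern 1: user and assistant in a back-and-forth exchange
    HumanMessage("What's 5 times 7?"),
    # Pattern 2: the assistant requests a tool call, and a tool message answers it
    AIMessage("", tool_calls=[
        {"name": "multiply", "args": {"a": 5, "b": 7}, "id": "call_1", "type": "tool_call"},
    ]),
    ToolMessage("35", tool_call_id="call_1"),
    AIMessage("5 times 7 is 35."),
]
```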

View File

@@ -2,13 +2,13 @@
## Overview
-Large Language Models (LLMs) are advanced machine learning models that excel in a wide range of language-related tasks such as text generation, translation, summarization, question answering, and more, without needing task-specific tuning for every scenario.
+Large Language Models (LLMs) are advanced machine learning models that excel in a wide range of language-related tasks such as text generation, translation, summarization, question answering, and more, without needing task-specific fine-tuning for every scenario.
Modern LLMs are typically accessed through a chat model interface that takes a list of [messages](/docs/concepts/messages) as input and returns a [message](/docs/concepts/messages) as output.
The newest generation of chat models offer additional capabilities:
-* [Tool calling](/docs/concepts/tool_calling): Many popular chat models offer a native [tool calling](/docs/concepts/tool_calling) API. This API allows developers to build rich applications that enable AI to interact with external services, APIs, and databases. Tool calling can also be used to extract structured information from unstructured data and perform various other tasks.
+* [Tool calling](/docs/concepts/tool_calling): Many popular chat models offer a native [tool calling](/docs/concepts/tool_calling) API. This API allows developers to build rich applications that enable LLMs to interact with external services, APIs, and databases. Tool calling can also be used to extract structured information from unstructured data and perform various other tasks.
* [Structured output](/docs/concepts/structured_outputs): A technique to make a chat model respond in a structured format, such as JSON that matches a given schema.
* [Multimodality](/docs/concepts/multimodality): The ability to work with data other than text; for example, images, audio, and video.
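As a minimal sketch of the first capability above, tool calling (assumes `langchain-openai` is installed and an API key is configured; `get_weather` is a made-up tool):

```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

@tool
def get_weather(city: str) -> str:
    """Look up the current weather for a city."""
    return f"It is sunny in {city}."

model_with_tools = ChatOpenAI(model="gpt-4o-mini").bind_tools([get_weather])
ai_msg = model_with_tools.invoke("What's the weather in Paris?")
print(ai_msg.tool_calls)  # e.g. [{'name': 'get_weather', 'args': {'city': 'Paris'}, ...}]
```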
@@ -19,7 +19,7 @@ LangChain provides a consistent interface for working with chat models from diff
* Integrations with many chat model providers (e.g., Anthropic, OpenAI, Ollama, Microsoft Azure, Google Vertex, Amazon Bedrock, Hugging Face, Cohere, Groq). Please see [chat model integrations](/docs/integrations/chat/) for an up-to-date list of supported models.
* Use either LangChain's [messages](/docs/concepts/messages) format or OpenAI format.
* Standard [tool calling API](/docs/concepts/tool_calling): standard interface for binding tools to models, accessing tool call requests made by models, and sending tool results back to the model.
-* Standard API for structuring outputs (/docs/concepts/structured_outputs) via the `with_structured_output` method.
+* Standard API for [structuring outputs](/docs/concepts/structured_outputs/#structured-output-method) via the `with_structured_output` method.
* Provides support for [async programming](/docs/concepts/async), [efficient batching](/docs/concepts/runnables/#optimized-parallel-execution-batch), [a rich streaming API](/docs/concepts/streaming).
* Integration with [LangSmith](https://docs.smith.langchain.com) for monitoring and debugging production-grade applications based on LLMs.
* Additional features like standardized [token usage](/docs/concepts/messages/#aimessage), [rate limiting](#rate-limiting), [caching](#caching) and more.
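A minimal sketch of the `with_structured_output` method referenced above (the `Joke` schema and model choice are illustrative):

```python
from pydantic import BaseModel
from langchain_openai import ChatOpenAI

class Joke(BaseModel):
    setup: str
    punchline: str

structured_model = ChatOpenAI(model="gpt-4o-mini").with_structured_output(Joke)
joke = structured_model.invoke("Tell me a joke about cats")  # returns a Joke instance
```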
@@ -44,7 +44,7 @@ Models that do **not** include the prefix "Chat" in their name or include "LLM"
## Interface
-LangChain chat models implement the [BaseChatModel](https://python.langchain.com/api_reference/core/language_models/langchain_core.language_models.chat_models.BaseChatModel.html) interface. Because [BaseChatModel] also implements the [Runnable Interface](/docs/concepts/runnables), chat models support a [standard streaming interface](/docs/concepts/streaming), [async programming](/docs/concepts/async), optimized [batching](/docs/concepts/runnables/#optimized-parallel-execution-batch), and more. Please see the [Runnable Interface](/docs/concepts/runnables) for more details.
+LangChain chat models implement the [BaseChatModel](https://python.langchain.com/api_reference/core/language_models/langchain_core.language_models.chat_models.BaseChatModel.html) interface. Because `BaseChatModel` also implements the [Runnable Interface](/docs/concepts/runnables), chat models support a [standard streaming interface](/docs/concepts/streaming), [async programming](/docs/concepts/async), optimized [batching](/docs/concepts/runnables/#optimized-parallel-execution-batch), and more. Please see the [Runnable Interface](/docs/concepts/runnables) for more details.
Many of the key methods of chat models operate on [messages](/docs/concepts/messages) as input and return messages as output.
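Because chat models are Runnables, the standard methods work out of the box; a hedged sketch (model choice is illustrative):

```python
from langchain_openai import ChatOpenAI  # any BaseChatModel behaves the same way

model = ChatOpenAI(model="gpt-4o-mini")
print(model.invoke("Hello!").content)          # invoke: one input, one AIMessage back
for chunk in model.stream("Tell me a story"):  # stream: incremental AIMessageChunk pieces
    print(chunk.content, end="", flush=True)
```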
@@ -85,7 +85,7 @@ Many chat models have standardized parameters that can be used to configure the
| Parameter | Description |
|----------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `model` | The name or identifier of the specific AI model you want to use (e.g., `"gpt-3.5-turbo"` or `"gpt-4"`). |
-| `temperature` | Controls the randomness of the model's output. A higher value (e.g., 1.0) makes responses more creative, while a lower value (e.g., 0.1) makes them more deterministic and focused. |
+| `temperature` | Controls the randomness of the model's output. A higher value (e.g., 1.0) makes responses more creative, while a lower value (e.g., 0.0) makes them more deterministic and focused. |
| `timeout` | The maximum time (in seconds) to wait for a response from the model before canceling the request. Ensures the request doesn't hang indefinitely. |
| `max_tokens` | Limits the total number of tokens (words and punctuation) in the response. This controls how long the output can be. |
| `stop` | Specifies stop sequences that indicate when the model should stop generating tokens. For example, you might use specific strings to signal the end of a response. |
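Putting the standard parameters together, a sketch (the values are illustrative; per the notes below, not every provider supports every parameter):

```python
from langchain_openai import ChatOpenAI

model = ChatOpenAI(
    model="gpt-4",     # model name or identifier
    temperature=0.0,   # deterministic, focused output
    timeout=30,        # seconds to wait before canceling the request
    max_tokens=256,    # cap on the length of the response
    stop=["\n\n"],     # stop sequence(s) that end generation
)
```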
@@ -97,9 +97,9 @@ Many chat models have standardized parameters that can be used to configure the
Some important things to note:
- Standard parameters only apply to model providers that expose parameters with the intended functionality. For example, some providers do not expose a configuration for maximum output tokens, so max_tokens can't be supported on these.
-- Standard params are currently only enforced on integrations that have their own integration packages (e.g. `langchain-openai`, `langchain-anthropic`, etc.), they're not enforced on models in ``langchain-community``.
+- Standard parameters are currently only enforced on integrations that have their own integration packages (e.g. `langchain-openai`, `langchain-anthropic`, etc.); they're not enforced on models in `langchain-community`.
-ChatModels also accept other parameters that are specific to that integration. To find all the parameters supported by a ChatModel head to the [API reference](https://python.langchain.com/api_reference/) for that model.
+Chat models also accept other parameters that are specific to that integration. To find all the parameters supported by a chat model, head to its respective [API reference](https://python.langchain.com/api_reference/).
## Tool calling
@@ -150,7 +150,7 @@ An alternative approach is to use semantic caching, where you cache responses ba
A semantic cache introduces a dependency on another model on the critical path of your application (e.g., the semantic cache may rely on an [embedding model](/docs/concepts/embedding_models) to convert text to a vector representation), and it's not guaranteed to capture the meaning of the input accurately.
-However, there might be situations where caching chat model responses is beneficial. For example, if you have a chat model that is used to answer frequently asked questions, caching responses can help reduce the load on the model provider and improve response times.
+However, there might be situations where caching chat model responses is beneficial. For example, if you have a chat model that is used to answer frequently asked questions, caching responses can help reduce the load on the model provider, lower costs, and improve response times.
Please see the [how to cache chat model responses](/docs/how_to/chat_model_caching/) guide for more details.
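As a quick illustration, enabling the simplest exact-match cache is a one-liner (a sketch; semantic caches are wired up differently):

```python
from langchain_core.caches import InMemoryCache
from langchain_core.globals import set_llm_cache

set_llm_cache(InMemoryCache())
# A repeated, byte-identical prompt is now answered from the cache
# instead of triggering another call to the model provider.
```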

View File

@@ -29,7 +29,7 @@ loader = CSVLoader(
data = loader.load()
```
-or if working with large datasets, you can use the `.lazy_load` method:
+When working with large datasets, you can use the `.lazy_load` method:
```python
for document in loader.lazy_load():
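    # Illustrative loop body (an assumption, not the original source):
    # handle each Document as it streams in, without loading the whole
    # file into memory at once.
    print(document.page_content)
```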

View File

@@ -68,6 +68,7 @@ The conceptual guide does not cover step-by-step instructions or specific implem
- **[langchain](/docs/concepts/architecture#langchain)**: A package for higher level components (e.g., some pre-built chains).
- **[langgraph](/docs/concepts/architecture#langgraph)**: Powerful orchestration layer for LangChain. Use to build complex pipelines and workflows.
- **[langserve](/docs/concepts/architecture#langserve)**: Use to deploy LangChain Runnables as REST endpoints. Uses FastAPI. Works primarily for LangChain Runnables, does not currently integrate with LangGraph.
+- **[LLMs (legacy)](/docs/concepts/text_llms)**: Older language models that take a string as input and return a string as output.
- **[Managing chat history](/docs/concepts/chat_history#managing-chat-history)**: Techniques to maintain and manage the chat history.
- **[OpenAI format](/docs/concepts/messages#openai-format)**: OpenAI's message format for chat models.
- **[Propagation of RunnableConfig](/docs/concepts/runnables/#propagation-of-runnableconfig)**: Propagating configuration through Runnables. Read if working with python 3.9, 3.10 and async.

View File

@@ -6,7 +6,7 @@
The **L**ang**C**hain **E**xpression **L**anguage (LCEL) takes a [declarative](https://en.wikipedia.org/wiki/Declarative_programming) approach to building new [Runnables](/docs/concepts/runnables) from existing Runnables.
-This means that you describe what you want to happen, rather than how you want it to happen, allowing LangChain to optimize the run-time execution of the chains.
+This means that you describe what *should* happen, rather than *how* it should happen, allowing LangChain to optimize the run-time execution of the chains.
We often refer to a `Runnable` created using LCEL as a "chain". It's important to remember that a "chain" is a `Runnable` and it implements the full [Runnable Interface](/docs/concepts/runnables).
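A minimal sketch of such a chain (the prompt, model choice, and topic are illustrative):

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")
chain = prompt | ChatOpenAI(model="gpt-4o-mini") | StrOutputParser()  # the chain is itself a Runnable

print(chain.invoke({"topic": "bears"}))
```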
@@ -20,8 +20,8 @@ We often refer to a `Runnable` created using LCEL as a "chain". It's important t
LangChain optimizes the run-time execution of chains built with LCEL in a number of ways:
-- **Optimize parallel execution**: Run Runnables in parallel using [RunnableParallel](#runnableparallel) or run multiple inputs through a given chain in parallel using the [Runnable Batch API](/docs/concepts/runnables/#optimized-parallel-execution-batch). Parallel execution can significantly reduce the latency as processing can be done in parallel instead of sequentially.
-- **Guarantee Async support**: Any chain built with LCEL can be run asynchronously using the [Runnable Async API](/docs/concepts/runnables/#asynchronous-support). This can be useful when running chains in a server environment where you want to handle large number of requests concurrently.
+- **Optimized parallel execution**: Run Runnables in parallel using [RunnableParallel](#runnableparallel) or run multiple inputs through a given chain in parallel using the [Runnable Batch API](/docs/concepts/runnables/#optimized-parallel-execution-batch). Parallel execution can significantly reduce the latency as processing can be done in parallel instead of sequentially.
+- **Guaranteed Async support**: Any chain built with LCEL can be run asynchronously using the [Runnable Async API](/docs/concepts/runnables/#asynchronous-support). This can be useful when running chains in a server environment where you want to handle a large number of requests concurrently.
- **Simplify streaming**: LCEL chains can be streamed, allowing for incremental output as the chain is executed. LangChain can optimize the streaming of the output to minimize the time-to-first-token (time elapsed until the first chunk of output from a [chat model](/docs/concepts/chat_models) or [llm](/docs/concepts/text_llms) comes out).
Other benefits include:
@@ -38,7 +38,7 @@ LCEL is an [orchestration solution](https://en.wikipedia.org/wiki/Orchestration_
While we have seen users run chains with hundreds of steps in production, we generally recommend using LCEL for simpler orchestration tasks. When the application requires complex state management, branching, cycles or multiple agents, we recommend that users take advantage of [LangGraph](/docs/concepts/architecture#langgraph).
-In LangGraph, users define graphs that specify the flow of the application. This allows users to keep using LCEL within individual nodes when LCEL is needed, while making it easy to define complex orchestration logic that is more readable and maintainable.
+In LangGraph, users define graphs that specify the application's flow. This allows users to keep using LCEL within individual nodes when LCEL is needed, while making it easy to define complex orchestration logic that is more readable and maintainable.
Here are some guidelines:
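A minimal sketch of the batch and async execution described above, reusing a `chain` like the one from the earlier LCEL sketch:

```python
import asyncio

# Batch: run several inputs through the chain in parallel
results = chain.batch([{"topic": "cats"}, {"topic": "dogs"}])

# Async: every LCEL chain also exposes awaitable counterparts
async def main() -> str:
    return await chain.ainvoke({"topic": "bears"})

print(asyncio.run(main()))
```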

View File

@@ -8,11 +8,11 @@
Messages are the unit of communication in [chat models](/docs/concepts/chat_models). They are used to represent the input and output of a chat model, as well as any additional context or metadata that may be associated with a conversation.
-Each message has a **role** (e.g., "user", "assistant"), **content** (e.g., text, multimodal data), and additional metadata that can vary depending on the chat model provider.
+Each message has a **role** (e.g., "user", "assistant") and **content** (e.g., text, multimodal data) with additional metadata that varies depending on the chat model provider.
LangChain provides a unified message format that can be used across chat models, allowing users to work with different chat models without worrying about the specific details of the message format used by each model provider.
-## What inside a message?
+## What is inside a message?
A message typically consists of the following pieces of information:
@@ -39,6 +39,7 @@ The content of a message text or a list of dictionaries representing [multimodal
Currently, most chat models support text as the primary content type, with some models also supporting multimodal data. However, support for multimodal data is still limited across most chat model providers.
For more information see:
+* [SystemMessage](#systemmessage) -- for content which should be passed to direct the conversation
* [HumanMessage](#humanmessage) -- for content in the input from the user.
* [AIMessage](#aimessage) -- for content in the response from the model.
* [Multimodality](/docs/concepts/multimodality) -- for more information on multimodal content.
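Tying the roles together, a short sketch (assumes a chat model is already bound to `model`):

```python
from langchain_core.messages import HumanMessage, SystemMessage

messages = [
    SystemMessage("You are a helpful assistant."),   # directs the conversation
    HumanMessage("What is the capital of France?"),  # input from the user
]
ai_msg = model.invoke(messages)  # the response comes back as an AIMessage
```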

View File

@@ -26,6 +26,7 @@ LangChain has lots of different types of output parsers. This is a list of outpu
| Name | Supports Streaming | Has Format Instructions | Calls LLM | Input Type | Output Type | Description |
|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------|-------------------------|-----------|--------------------|----------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| [Str](https://python.langchain.com/api_reference/core/output_parsers/langchain_core.output_parsers.string.StrOutputParser.html) | ✅ | | | `str` \| `Message` | String | Parses texts from message objects. Useful for handling variable formats of message content (e.g., extracting text from content blocks). |
| [JSON](https://python.langchain.com/api_reference/core/output_parsers/langchain_core.output_parsers.json.JSONOutputParser.html#langchain_core.output_parsers.json.JSONOutputParser) | ✅ | ✅ | | `str` \| `Message` | JSON object | Returns a JSON object as specified. You can specify a Pydantic model and it will return JSON for that model. Probably the most reliable output parser for getting structured data that does NOT use function calling. |
| [XML](https://python.langchain.com/api_reference/core/output_parsers/langchain_core.output_parsers.xml.XMLOutputParser.html#langchain_core.output_parsers.xml.XMLOutputParser) | ✅ | ✅ | | `str` \| `Message` | `dict` | Returns a dictionary of tags. Use when XML output is needed. Use with models that are good at writing XML (like Anthropic's). |
| [CSV](https://python.langchain.com/api_reference/core/output_parsers/langchain_core.output_parsers.list.CommaSeparatedListOutputParser.html#langchain_core.output_parsers.list.CommaSeparatedListOutputParser) | ✅ | ✅ | | `str` \| `Message` | `List[str]` | Returns a list of comma separated values. |
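For instance, the newly documented `StrOutputParser` row corresponds to usage like this sketch (assumes a chat model in `model`):

```python
from langchain_core.output_parsers import StrOutputParser

# Extracts plain text whether the message content is a string
# or a list of content blocks.
chain = model | StrOutputParser()
text = chain.invoke("Summarize LangChain in one sentence.")
```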

View File

@@ -27,7 +27,7 @@ These systems accommodate various data formats:
- Unstructured text (e.g., documents) is often stored in vector stores or lexical search indexes.
- Structured data is typically housed in relational or graph databases with defined schemas.
-Despite this diversity in data formats, modern AI applications increasingly aim to make all types of data accessible through natural language interfaces.
+Despite the growing diversity in data formats, modern AI applications increasingly aim to make all types of data accessible through natural language interfaces.
Models play a crucial role in this process by translating natural language queries into formats compatible with the underlying search index or database.
This translation enables more intuitive and flexible interactions with complex data structures.
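One hedged sketch of that translation step, assuming a chat model in `model` (the re-writing prompt is illustrative):

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

rewrite_prompt = ChatPromptTemplate.from_template(
    "Rewrite this question as a concise search query: {question}"
)
rewriter = rewrite_prompt | model | StrOutputParser()
search_query = rewriter.invoke({"question": "what was that thing about caching chat model responses?"})
```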
@@ -41,7 +41,7 @@ This translation enables more intuitive and flexible interactions with complex d
## Query analysis
-While users typically prefer to interact with retrieval systems using natural language, retrieval systems can specific query syntax or benefit from particular keywords.
+While users typically prefer to interact with retrieval systems using natural language, these systems may require specific query syntax or benefit from certain keywords.
Query analysis serves as a bridge between raw user input and optimized search queries. Some common applications of query analysis include:
1. **Query Re-writing**: Queries can be re-written or expanded to improve semantic or lexical searches.

View File

@@ -1,6 +1,6 @@
# Runnable interface
-The Runnable interface is foundational for working with LangChain components, and it's implemented across many of them, such as [language models](/docs/concepts/chat_models), [output parsers](/docs/concepts/output_parsers), [retrievers](/docs/concepts/retrievers), [compiled LangGraph graphs](
+The Runnable interface is the foundation for working with LangChain components, and it's implemented across many of them, such as [language models](/docs/concepts/chat_models), [output parsers](/docs/concepts/output_parsers), [retrievers](/docs/concepts/retrievers), [compiled LangGraph graphs](
https://langchain-ai.github.io/langgraph/concepts/low_level/#compiling-your-graph) and more.
This guide covers the main concepts and methods of the Runnable interface, which allows developers to interact with various LangChain components in a consistent and predictable manner.
@@ -42,7 +42,7 @@ Some Runnables may provide their own implementations of `batch` and `batch_as_co
rely on a `batch` API provided by a model provider).
:::note
-The async versions of `abatch` and `abatch_as_completed` these rely on asyncio's [gather](https://docs.python.org/3/library/asyncio-task.html#asyncio.gather) and [as_completed](https://docs.python.org/3/library/asyncio-task.html#asyncio.as_completed) functions to run the `ainvoke` method in parallel.
+The async versions of `abatch` and `abatch_as_completed` rely on asyncio's [gather](https://docs.python.org/3/library/asyncio-task.html#asyncio.gather) and [as_completed](https://docs.python.org/3/library/asyncio-task.html#asyncio.as_completed) functions to run the `ainvoke` method in parallel.
:::
:::tip
@@ -58,7 +58,7 @@ Runnables expose an asynchronous API, allowing them to be called using the `awai
Please refer to the [Async Programming with LangChain](/docs/concepts/async) guide for more details.
## Streaming apis
## Streaming APIs
<span data-heading-keywords="streaming-api"></span>
Streaming is critical in making applications based on LLMs feel responsive to end-users.
@@ -101,7 +101,7 @@ This is an advanced feature that is unnecessary for most users. You should proba
skip this section unless you have a specific need to inspect the schema of a Runnable.
:::
In some advanced uses, you may want to programmatically **inspect** the Runnable and determine what input and output types the Runnable expects and produces.
In more advanced use cases, you may want to programmatically **inspect** the Runnable and determine what input and output types the Runnable expects and produces.
The Runnable interface provides methods to get the [JSON Schema](https://json-schema.org/) of the input and output types of a Runnable, as well as [Pydantic schemas](https://docs.pydantic.dev/latest/) for the input and output types.
@@ -315,7 +315,7 @@ the `RunnableConfig` manually to sub-calls in some cases. Please see the
[Propagating RunnableConfig](#propagation-of-runnableconfig) section for more information.
:::
## Creating a runnable from a function
## Creating a runnable from a function {#custom-runnables}
You may need to create a custom Runnable that runs arbitrary logic. This is especially
useful if using [LangChain Expression Language (LCEL)](/docs/concepts/lcel) to compose
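A common way to do this is to wrap a plain function in a `RunnableLambda`. A minimal sketch (the function and values are illustrative, not from this guide):

```python
from langchain_core.runnables import RunnableLambda

# Wrapping a function gives it the full Runnable interface:
# invoke/ainvoke, batch, stream, and LCEL composition with `|`.
def shout(text: str) -> str:
    return text.upper() + "!"

runnable = RunnableLambda(shout)

print(runnable.invoke("hello"))    # HELLO!
print(runnable.batch(["a", "b"]))  # ['A!', 'B!']
```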

View File

@@ -77,13 +77,13 @@ When using `stream()` or `astream()` with chat models, the output is streamed as
[LangGraph](/docs/concepts/architecture#langgraph) compiled graphs are [Runnables](/docs/concepts/runnables) and support the standard streaming APIs.
When using the *stream* and *astream* methods with LangGraph, you can **one or more** [streaming mode](https://langchain-ai.github.io/langgraph/reference/types/#langgraph.types.StreamMode) which allow you to control the type of output that is streamed. The available streaming modes are:
When using the *stream* and *astream* methods with LangGraph, you can choose **one or more** [streaming modes](https://langchain-ai.github.io/langgraph/reference/types/#langgraph.types.StreamMode), which allow you to control the type of output that is streamed. The available streaming modes are:
- **"values"**: Emit all values of the [state](https://langchain-ai.github.io/langgraph/concepts/low_level/) for each step.
- **"updates"**: Emit only the node name(s) and updates that were returned by the node(s) after each step.
- **"debug"**: Emit debug events for each step.
- **"messages"**: Emit LLM [messages](/docs/concepts/messages) [token-by-token](/docs/concepts/tokens).
- **"custom"**: Emit custom output witten using [LangGraph's StreamWriter](https://langchain-ai.github.io/langgraph/reference/types/#langgraph.types.StreamWriter).
- **"custom"**: Emit custom output written using [LangGraph's StreamWriter](https://langchain-ai.github.io/langgraph/reference/types/#langgraph.types.StreamWriter).
For more information, please see:
* [LangGraph streaming conceptual guide](https://langchain-ai.github.io/langgraph/concepts/streaming/) for more information on how to stream when working with LangGraph.
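As a rough sketch of selecting modes (`graph` and `inputs` are placeholders for a compiled graph and a valid input state; the tuple-unpacking behavior when multiple modes are requested is an assumption based on the LangGraph streaming reference linked above):

```python
# Request several streaming modes at once; each emitted chunk is then
# tagged with the mode that produced it (assumed behavior, see the
# LangGraph streaming reference above).
for mode, chunk in graph.stream(inputs, stream_mode=["updates", "messages"]):
    print(mode, chunk)
```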

View File

@@ -119,11 +119,11 @@ json_object = json.loads(ai_msg.content)
There are a few challenges when producing structured output with the above methods:
(1) If using tool calling, tool call arguments needs to be parsed from a dictionary back to the original schema.
(1) When tool calling is used, tool call arguments need to be parsed from a dictionary back to the original schema.
(2) In addition, the model needs to be instructed to *always* use the tool when we want to enforce structured output, which is a provider-specific setting.
(3) If using JSON mode, the output needs to be parsed into a JSON object.
(3) When JSON mode is used, the output needs to be parsed into a JSON object.
With these challenges in mind, LangChain provides a helper function (`with_structured_output()`) to streamline the process.
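A minimal sketch of the helper (the schema and model here are illustrative; any chat model with structured-output support should work similarly):

```python
from pydantic import BaseModel, Field
from langchain_openai import ChatOpenAI  # assumed provider for illustration

class Joke(BaseModel):
    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline of the joke")

llm = ChatOpenAI(model="gpt-4o-mini")
structured_llm = llm.with_structured_output(Joke)

result = structured_llm.invoke("Tell me a joke about cats")  # -> Joke instance
```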

View File

@@ -128,7 +128,7 @@ For more details on usage, see our [how-to guides](/docs/how_to/#tools)!
[Tools](/docs/concepts/tools/) implement the [Runnable](/docs/concepts/runnables/) interface, which means that they can be invoked (e.g., `tool.invoke(args)`) directly.
[LangGraph](https://langchain-ai.github.io/langgraph/) offers pre-built components (e.g., [`ToolNode`](https://langchain-ai.github.io/langgraph/reference/prebuilt/#toolnode)) that will often invoke the tool in behalf of the user.
[LangGraph](https://langchain-ai.github.io/langgraph/) offers pre-built components (e.g., [`ToolNode`](https://langchain-ai.github.io/langgraph/reference/prebuilt/#langgraph.prebuilt.tool_node.ToolNode)) that will often invoke the tool on behalf of the user.
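Because tools are Runnables, direct invocation is a one-liner. A minimal sketch (the tool itself is illustrative):

```python
from langchain_core.tools import tool

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

# Tools are Runnables, so they can be invoked directly with a dict of args:
print(multiply.invoke({"a": 6, "b": 7}))  # 42
```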
:::info[Further reading]

View File

@@ -6,7 +6,7 @@
## Overview
The **tool** abstraction in LangChain associates a python **function** with a **schema** that defines the function's **name**, **description** and **input**.
The **tool** abstraction in LangChain associates a Python **function** with a **schema** that defines the function's **name**, **description** and **expected arguments**.
**Tools** can be passed to [chat models](/docs/concepts/chat_models) that support [tool calling](/docs/concepts/tool_calling) allowing the model to request the execution of a specific function with specific inputs.
@@ -14,7 +14,7 @@ The **tool** abstraction in LangChain associates a python **function** with a **
- Tools are a way to encapsulate a function and its schema in a way that can be passed to a chat model.
- Create tools using the [@tool](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.convert.tool.html) decorator, which simplifies the process of tool creation, supporting the following:
- Automatically infer the tool's **name**, **description** and **inputs**, while also supporting customization.
- Automatically infer the tool's **name**, **description** and **expected arguments**, while also supporting customization.
- Defining tools that return **artifacts** (e.g. images, dataframes, etc.)
- Hiding input arguments from the schema (and hence from the model) using **injected tool arguments**.
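A minimal sketch of the decorator inferring the schema (the function is illustrative; the exact shape of `.args` may vary by version):

```python
from langchain_core.tools import tool

@tool
def get_weather(city: str) -> str:
    """Return a short weather report for the given city."""
    return f"It is sunny in {city}."

# Name, description, and expected arguments are inferred from the
# signature and docstring:
print(get_weather.name)         # get_weather
print(get_weather.description)  # Return a short weather report for the given city.
print(get_weather.args)         # {'city': {...}}
```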

View File

@@ -1,9 +1,9 @@
# Why langchain?
# Why LangChain?
The goal of `langchain` the Python package and LangChain the company is to make it as easy possible for developers to build applications that reason.
The goal of `langchain` the Python package and LangChain the company is to make it as easy as possible for developers to build applications that reason.
While LangChain originally started as a single open source package, it has evolved into a company and a whole ecosystem.
This page will talk about the LangChain ecosystem as a whole.
Most of the components within in the LangChain ecosystem can be used by themselves - so if you feel particularly drawn to certain components but not others, that is totally fine! Pick and choose whichever components you like best.
Most of the components within the LangChain ecosystem can be used by themselves - so if you feel particularly drawn to certain components but not others, that is totally fine! Pick and choose whichever components you like best for your own use case!
## Features
@@ -17,8 +17,8 @@ LangChain exposes a standard interface for key components, making it easy to swi
[Orchestration](https://en.wikipedia.org/wiki/Orchestration_(computing)) is crucial for building such applications.
3. **Observability and evaluation:** As applications become more complex, it becomes increasingly difficult to understand what is happening within them.
Furthermore, the pace of development can become rate-limited by the [paradox of choice](https://en.wikipedia.org/wiki/Paradox_of_choice):
for example, developers often wonder how to engineer their prompt or which LLM best balances accuracy, latency, and cost.
Furthermore, the pace of development can become rate-limited by the [paradox of choice](https://en.wikipedia.org/wiki/Paradox_of_choice).
For example, developers often wonder how to engineer their prompt or which LLM best balances accuracy, latency, and cost.
[Observability](https://en.wikipedia.org/wiki/Observability) and evaluations can help developers monitor their applications and rapidly answer these types of questions with confidence.
@@ -72,11 +72,11 @@ There are several common characteristics of LLM applications that this orchestra
* **[Persistence](https://langchain-ai.github.io/langgraph/concepts/persistence/):** The application needs to maintain [short-term and / or long-term memory](https://langchain-ai.github.io/langgraph/concepts/memory/).
* **[Human-in-the-loop](https://langchain-ai.github.io/langgraph/concepts/human_in_the_loop/):** The application needs human interaction, e.g., pausing, reviewing, editing, approving certain steps.
The recommended way to do orchestration for these complex applications is [LangGraph](https://langchain-ai.github.io/langgraph/concepts/high_level/).
The recommended way to orchestrate components for complex applications is [LangGraph](https://langchain-ai.github.io/langgraph/concepts/high_level/).
LangGraph is a library that gives developers a high degree of control by expressing the flow of the application as a set of nodes and edges.
LangGraph comes with built-in support for [persistence](https://langchain-ai.github.io/langgraph/concepts/persistence/), [human-in-the-loop](https://langchain-ai.github.io/langgraph/concepts/human_in_the_loop/), [memory](https://langchain-ai.github.io/langgraph/concepts/memory/), and other features.
It's particularly well suited for building [agents](https://langchain-ai.github.io/langgraph/concepts/agentic_concepts/) or [multi-agent](https://langchain-ai.github.io/langgraph/concepts/multi_agent/) applications.
Importantly, individual LangChain components can be used within LangGraph nodes, but you can also use LangGraph **without** using LangChain components.
It's particularly well suited for building [agents](https://langchain-ai.github.io/langgraph/concepts/agentic_concepts/) or [multi-agent](https://langchain-ai.github.io/langgraph/concepts/multi_agent/) applications.
Importantly, individual LangChain components can be used as LangGraph nodes, but you can also use LangGraph **without** using LangChain components.
:::info[Further reading]

View File

@@ -4,8 +4,8 @@ sidebar_class_name: "hidden"
# Documentation Style Guide
As LangChain continues to grow, the surface area of documentation required to cover it continues to grow too.
This page provides guidelines for anyone writing documentation for LangChain, as well as some of our philosophies around
As LangChain continues to grow, the amount of documentation required to cover the various concepts and integrations continues to grow too.
This page provides guidelines for anyone writing documentation for LangChain and outlines some of our philosophies around
organization and structure.
## Philosophy
@@ -18,9 +18,9 @@ Under this framework, all documentation falls under one of four categories: [Tut
### Tutorials
Tutorials are lessons that take the reader through a practical activity. Their purpose is to help the user
gain understanding of concepts and how they interact by showing one way to achieve some goal in a hands-on way. They should **avoid** giving
multiple permutations of ways to achieve that goal in-depth. Instead, it should guide a new user through a recommended path to accomplishing the tutorial's goal. While the end result of a tutorial does not necessarily need to
be completely production-ready, it should be useful and practically satisfy the the goal that you clearly stated in the tutorial's introduction. Information on how to address additional scenarios
gain an understanding of concepts and how they interact by showing one way to achieve a specific goal in a hands-on manner. They should **avoid** giving
multiple permutations of ways to achieve that goal in-depth. Instead, they should guide a new user through a recommended path to accomplish the tutorial's goal. While the end result of a tutorial does not necessarily need to
be completely production-ready, it should be useful and practically satisfy the goal that is clearly stated in the tutorial's introduction. Information on how to address additional scenarios
belongs in how-to guides.
To quote the Diataxis website:
@@ -53,8 +53,8 @@ Here are some high-level tips on writing a good tutorial:
### How-to guides
A how-to guide, as the name implies, demonstrates how to do something discrete and specific.
It should assume that the user is already familiar with underlying concepts, and is trying to solve an immediate problem, but
should still give some background or list the scenarios where the information contained within can be relevant.
It should assume that the user is already familiar with underlying concepts, and is focused on solving an immediate problem. However,
it should still provide some background or list certain scenarios where the information may be relevant.
They can and should discuss alternatives if one approach may be better than another in certain cases.
To quote the Diataxis website:
@@ -79,10 +79,10 @@ Here are some high-level tips on writing a good how-to guide:
### Conceptual guide
LangChain's conceptual guide falls under the **Explanation** quadrant of Diataxis. They should cover LangChain terms and concepts
in a more abstract way than how-to guides or tutorials, and should be geared towards curious users interested in
gaining a deeper understanding of the framework. Try to avoid excessively large code examples - the goal here is to
impart perspective to the user rather than to finish a practical project. These guides should cover **why** things work they way they do.
LangChain's conceptual guide falls under the **Explanation** quadrant of Diataxis. These guides should cover LangChain terms and concepts
in a more abstract way than how-to guides or tutorials, targeting curious users interested in
gaining a deeper understanding of, and insights into, the framework. Try to avoid excessively large code examples, as the primary goal is to
provide perspective to the user rather than to finish a practical project. These guides should cover **why** things work the way they do.
This guide on documentation style is meant to fall under this category.
@@ -137,14 +137,14 @@ be only one (very rarely two), canonical pages for a given concept or feature. I
### Link to other sections
Because sections of the docs do not exist in a vacuum, it is important to link to other sections as often as possible
to allow a developer to learn more about an unfamiliar topic inline.
Because sections of the docs do not exist in a vacuum, it is important to link to other sections frequently,
to allow a developer to learn more about an unfamiliar topic within the flow of reading.
This includes linking to the API references as well as conceptual sections!
This includes linking to the API references and conceptual sections!
### Be concise
In general, take a less-is-more approach. If a section with a good explanation of a concept already exists, you should link to it rather than
In general, take a less-is-more approach. If another section with a good explanation of a concept exists, you should link to it rather than
re-explain it, unless the concept you are documenting presents some new wrinkle.
Be concise, including in code samples.

View File

@@ -8,7 +8,7 @@ This tutorial will guide you through making a simple documentation edit, like co
---
## Editing a Documentation Page on GitHub**
## Editing a Documentation Page on GitHub
Sometimes you want to make a small change, like fixing a typo, and the easiest way to do this is to use GitHub's editor directly.

View File

@@ -13,7 +13,7 @@
"# How to split by HTML header \n",
"## Description and motivation\n",
"\n",
"[HTMLHeaderTextSplitter](https://python.langchain.com/api_reference/text_splitters/html/langchain_text_splitters.html.HTMLHeaderTextSplitter.html) is a \"structure-aware\" chunker that splits text at the HTML element level and adds metadata for each header \"relevant\" to any given chunk. It can return chunks element by element or combine elements with the same metadata, with the objectives of (a) keeping related text grouped (more or less) semantically and (b) preserving context-rich information encoded in document structures. It can be used with other text splitters as part of a chunking pipeline.\n",
"[HTMLHeaderTextSplitter](https://python.langchain.com/api_reference/text_splitters/html/langchain_text_splitters.html.HTMLHeaderTextSplitter.html) is a \"structure-aware\" [text splitter](/docs/concepts/text_splitters/) that splits text at the HTML element level and adds metadata for each header \"relevant\" to any given chunk. It can return chunks element by element or combine elements with the same metadata, with the objectives of (a) keeping related text grouped (more or less) semantically and (b) preserving context-rich information encoded in document structures. It can be used with other text splitters as part of a chunking pipeline.\n",
"\n",
"It is analogous to the [MarkdownHeaderTextSplitter](/docs/how_to/markdown_header_metadata_splitter) for markdown files.\n",
"\n",

View File

@@ -12,7 +12,7 @@
"source": [
"# How to split by HTML sections\n",
"## Description and motivation\n",
"Similar in concept to the [HTMLHeaderTextSplitter](/docs/how_to/HTML_header_metadata_splitter), the `HTMLSectionSplitter` is a \"structure-aware\" chunker that splits text at the element level and adds metadata for each header \"relevant\" to any given chunk.\n",
"Similar in concept to the [HTMLHeaderTextSplitter](/docs/how_to/HTML_header_metadata_splitter), the `HTMLSectionSplitter` is a \"structure-aware\" [text splitter](/docs/concepts/text_splitters/) that splits text at the element level and adds metadata for each header \"relevant\" to any given chunk.\n",
"\n",
"It can return chunks element by element or combine elements with the same metadata, with the objectives of (a) keeping related text grouped (more or less) semantically and (b) preserving context-rich information encoded in document structures.\n",
"\n",

View File

@@ -7,7 +7,7 @@
"source": [
"# How to use the MultiQueryRetriever\n",
"\n",
"Distance-based vector database retrieval embeds (represents) queries in high-dimensional space and finds similar embedded documents based on a distance metric. But, retrieval may produce different results with subtle changes in query wording, or if the embeddings do not capture the semantics of the data well. Prompt engineering / tuning is sometimes done to manually address these problems, but can be tedious.\n",
"Distance-based [vector database](/docs/concepts/vectorstores/) retrieval [embeds](/docs/concepts/embedding_models/) (represents) queries in high-dimensional space and finds similar embedded documents based on a distance metric. But, retrieval may produce different results with subtle changes in query wording, or if the embeddings do not capture the semantics of the data well. Prompt engineering / tuning is sometimes done to manually address these problems, but can be tedious.\n",
"\n",
"The [MultiQueryRetriever](https://python.langchain.com/api_reference/langchain/retrievers/langchain.retrievers.multi_query.MultiQueryRetriever.html) automates the process of prompt tuning by using an LLM to generate multiple queries from different perspectives for a given user input query. For each query, it retrieves a set of relevant documents and takes the unique union across all queries to get a larger set of potentially relevant documents. By generating multiple perspectives on the same question, the `MultiQueryRetriever` can mitigate some of the limitations of the distance-based retrieval and get a richer set of results.\n",
"\n",
@@ -151,7 +151,7 @@
"id": "7e170263-facd-4065-bb68-d11fb9123a45",
"metadata": {},
"source": [
"Note that the underlying queries generated by the retriever are logged at the `INFO` level."
"Note that the underlying queries generated by the [retriever](/docs/concepts/retrievers/) are logged at the `INFO` level."
]
},
{

View File

@@ -7,11 +7,11 @@
"source": [
"# How to add scores to retriever results\n",
"\n",
"Retrievers will return sequences of [Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html) objects, which by default include no information about the process that retrieved them (e.g., a similarity score against a query). Here we demonstrate how to add retrieval scores to the `.metadata` of documents:\n",
"[Retrievers](/docs/concepts/retrievers/) will return sequences of [Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html) objects, which by default include no information about the process that retrieved them (e.g., a similarity score against a query). Here we demonstrate how to add retrieval scores to the `.metadata` of documents:\n",
"1. From [vectorstore retrievers](/docs/how_to/vectorstore_retriever);\n",
"2. From higher-order LangChain retrievers, such as [SelfQueryRetriever](/docs/how_to/self_query) or [MultiVectorRetriever](/docs/how_to/multi_vector).\n",
"\n",
"For (1), we will implement a short wrapper function around the corresponding vector store. For (2), we will update a method of the corresponding class.\n",
"For (1), we will implement a short wrapper function around the corresponding [vector store](/docs/concepts/vectorstores/). For (2), we will update a method of the corresponding class.\n",
"\n",
"## Create vector store\n",
"\n",

View File

@@ -22,7 +22,7 @@
":::\n",
"\n",
"By themselves, language models can't take actions - they just output text.\n",
"A big use case for LangChain is creating **agents**.\n",
"A big use case for LangChain is creating **[agents](/docs/concepts/agents/)**.\n",
"Agents are systems that use an LLM as a reasoning engine to determine which actions to take and what the inputs to those actions should be.\n",
"The results of those actions can then be fed back into the agent and it determines whether more actions are needed, or whether it is okay to finish.\n",
"\n",

View File

@@ -7,7 +7,7 @@
"source": [
"# Caching\n",
"\n",
"Embeddings can be stored or temporarily cached to avoid needing to recompute them.\n",
"[Embeddings](/docs/concepts/embedding_models/) can be stored or temporarily cached to avoid needing to recompute them.\n",
"\n",
"Caching embeddings can be done using a `CacheBackedEmbeddings`. The cache backed embedder is a wrapper around an embedder that caches\n",
"embeddings in a key-value store. The text is hashed and the hash is used as the key in the cache.\n",

View File

@@ -21,7 +21,7 @@
"source": [
"# How to split by character\n",
"\n",
"This is the simplest method. This splits based on a given character sequence, which defaults to `\"\\n\\n\"`. Chunk length is measured by number of characters.\n",
"This is the simplest method. This [splits](/docs/concepts/text_splitters/) based on a given character sequence, which defaults to `\"\\n\\n\"`. Chunk length is measured by number of characters.\n",
"\n",
"1. How the text is split: by single character separator.\n",
"2. How the chunk size is measured: by number of characters.\n",

View File

@@ -15,7 +15,7 @@
"\n",
":::\n",
"\n",
"LangChain provides an optional caching layer for chat models. This is useful for two main reasons:\n",
"LangChain provides an optional caching layer for [chat models](/docs/concepts/chat_models). This is useful for two main reasons:\n",
"\n",
"- It can save you money by reducing the number of API calls you make to the LLM provider, if you're often requesting the same completion multiple times. This is especially useful during app development.\n",
"- It can speed up your application by reducing the number of API calls you make to the LLM provider.\n",

View File

@@ -7,13 +7,13 @@
"source": [
"# How to init any model in one line\n",
"\n",
"Many LLM applications let end users specify what model provider and model they want the application to be powered by. This requires writing some logic to initialize different ChatModels based on some user configuration. The `init_chat_model()` helper method makes it easy to initialize a number of different model integrations without having to worry about import paths and class names.\n",
"Many LLM applications let end users specify what model provider and model they want the application to be powered by. This requires writing some logic to initialize different [chat models](/docs/concepts/chat_models/) based on some user configuration. The `init_chat_model()` helper method makes it easy to initialize a number of different model integrations without having to worry about import paths and class names.\n",
"\n",
":::tip Supported models\n",
"\n",
"See the [init_chat_model()](https://python.langchain.com/api_reference/langchain/chat_models/langchain.chat_models.base.init_chat_model.html) API reference for a full list of supported integrations.\n",
"\n",
"Make sure you have the integration packages installed for any model providers you want to support. E.g. you should have `langchain-openai` installed to init an OpenAI model.\n",
"Make sure you have the [integration packages](/docs/integrations/chat/) installed for any model providers you want to support. E.g. you should have `langchain-openai` installed to init an OpenAI model.\n",
"\n",
":::"
]

View File

@@ -14,7 +14,7 @@
"\n",
":::\n",
"\n",
"Tracking token usage to calculate cost is an important part of putting your app in production. This guide goes over how to obtain this information from your LangChain model calls.\n",
"Tracking [token](/docs/concepts/tokens/) usage to calculate cost is an important part of putting your app in production. This guide goes over how to obtain this information from your LangChain model calls.\n",
"\n",
"This guide requires `langchain-anthropic` and `langchain-openai >= 0.1.9`."
]

View File

@@ -15,7 +15,7 @@
"source": [
"# How to add memory to chatbots\n",
"\n",
"A key feature of chatbots is their ability to use content of previous conversation turns as context. This state management can take several forms, including:\n",
"A key feature of chatbots is their ability to use the content of previous conversational turns as context. This state management can take several forms, including:\n",
"\n",
"- Simply stuffing previous messages into a chat model prompt.\n",
"- The above, but trimming old messages to reduce the amount of distracting information the model has to deal with.\n",
@@ -185,7 +185,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
" We'll pass the latest input to the conversation here and let the LangGraph keep track of the conversation history using the checkpointer:"
" We'll pass the latest input to the conversation here and let LangGraph keep track of the conversation history using the checkpointer:"
]
},
{

View File

@@ -15,7 +15,7 @@
"source": [
"# How to add retrieval to chatbots\n",
"\n",
"Retrieval is a common technique chatbots use to augment their responses with data outside a chat model's training data. This section will cover how to implement retrieval in the context of chatbots, but it's worth noting that retrieval is a very subtle and deep topic - we encourage you to explore [other parts of the documentation](/docs/how_to#qa-with-rag) that go into greater depth!\n",
"[Retrieval](/docs/concepts/retrieval/) is a common technique chatbots use to augment their responses with data outside a chat model's training data. This section will cover how to implement retrieval in the context of chatbots, but it's worth noting that retrieval is a very subtle and deep topic - we encourage you to explore [other parts of the documentation](/docs/how_to#qa-with-rag) that go into greater depth!\n",
"\n",
"## Setup\n",
"\n",
@@ -80,7 +80,7 @@
"source": [
"## Creating a retriever\n",
"\n",
"We'll use [the LangSmith documentation](https://docs.smith.langchain.com/overview) as source material and store the content in a vectorstore for later retrieval. Note that this example will gloss over some of the specifics around parsing and storing a data source - you can see more [in-depth documentation on creating retrieval systems here](/docs/how_to#qa-with-rag).\n",
"We'll use [the LangSmith documentation](https://docs.smith.langchain.com/overview) as source material and store the content in a [vector store](/docs/concepts/vectorstores/) for later retrieval. Note that this example will gloss over some of the specifics around parsing and storing a data source - you can see more [in-depth documentation on creating retrieval systems here](/docs/how_to#qa-with-rag).\n",
"\n",
"Let's use a document loader to pull text from the docs:"
]

View File

@@ -42,7 +42,7 @@
"metadata": {},
"outputs": [
{
"name": "stdin",
"name": "stdout",
"output_type": "stream",
"text": [
"OpenAI API Key: ········\n",
@@ -78,7 +78,7 @@
"\n",
"Our end goal is to create an agent that can respond conversationally to user questions while looking up information as needed.\n",
"\n",
"First, let's initialize Tavily and an OpenAI chat model capable of tool calling:"
"First, let's initialize Tavily and an OpenAI [chat model](/docs/concepts/chat_models/) capable of tool calling:"
]
},
{

View File

@@ -7,7 +7,7 @@
"source": [
"# How to split code\n",
"\n",
"[RecursiveCharacterTextSplitter](https://python.langchain.com/api_reference/text_splitters/character/langchain_text_splitters.character.RecursiveCharacterTextSplitter.html) includes pre-built lists of separators that are useful for splitting text in a specific programming language.\n",
"[RecursiveCharacterTextSplitter](https://python.langchain.com/api_reference/text_splitters/character/langchain_text_splitters.character.RecursiveCharacterTextSplitter.html) includes pre-built lists of separators that are useful for [splitting text](/docs/concepts/text_splitters/) in a specific programming language.\n",
"\n",
"Supported languages are stored in the `langchain_text_splitters.Language` enum. They include:\n",
"\n",

View File

@@ -7,13 +7,13 @@
"source": [
"# How to do retrieval with contextual compression\n",
"\n",
"One challenge with retrieval is that usually you don't know the specific queries your document storage system will face when you ingest data into the system. This means that the information most relevant to a query may be buried in a document with a lot of irrelevant text. Passing that full document through your application can lead to more expensive LLM calls and poorer responses.\n",
"One challenge with [retrieval](/docs/concepts/retrieval/) is that usually you don't know the specific queries your document storage system will face when you ingest data into the system. This means that the information most relevant to a query may be buried in a document with a lot of irrelevant text. Passing that full document through your application can lead to more expensive LLM calls and poorer responses.\n",
"\n",
"Contextual compression is meant to fix this. The idea is simple: instead of immediately returning retrieved documents as-is, you can compress them using the context of the given query, so that only the relevant information is returned. “Compressing” here refers to both compressing the contents of an individual document and filtering out documents wholesale.\n",
"\n",
"To use the Contextual Compression Retriever, you'll need:\n",
"\n",
"- a base retriever\n",
"- a base [retriever](/docs/concepts/retrievers/)\n",
"- a Document Compressor\n",
"\n",
"The Contextual Compression Retriever passes queries to the base retriever, takes the initial documents and passes them through the Document Compressor. The Document Compressor takes a list of documents and shortens it by reducing the contents of documents or dropping documents altogether.\n",

View File

@@ -14,15 +14,15 @@
"\n",
":::\n",
"\n",
"In this guide, we'll learn how to create a custom chat model using LangChain abstractions.\n",
"In this guide, we'll learn how to create a custom [chat model](/docs/concepts/chat_models/) using LangChain abstractions.\n",
"\n",
"Wrapping your LLM with the standard [`BaseChatModel`](https://python.langchain.com/api_reference/core/language_models/langchain_core.language_models.chat_models.BaseChatModel.html) interface allow you to use your LLM in existing LangChain programs with minimal code modifications!\n",
"\n",
"As an bonus, your LLM will automatically become a LangChain `Runnable` and will benefit from some optimizations out of the box (e.g., batch via a threadpool), async support, the `astream_events` API, etc.\n",
"As an bonus, your LLM will automatically become a LangChain [Runnable](/docs/concepts/runnables/) and will benefit from some optimizations out of the box (e.g., batch via a threadpool), async support, the `astream_events` API, etc.\n",
"\n",
"## Inputs and outputs\n",
"\n",
"First, we need to talk about **messages**, which are the inputs and outputs of chat models.\n",
"First, we need to talk about **[messages](/docs/concepts/messages/)**, which are the inputs and outputs of chat models.\n",
"\n",
"### Messages\n",
"\n",
@@ -503,7 +503,7 @@
"\n",
"Documentation:\n",
"\n",
"* The model contains doc-strings for all initialization arguments, as these will be surfaced in the [APIReference](https://python.langchain.com/api_reference/langchain/index.html).\n",
"* The model contains doc-strings for all initialization arguments, as these will be surfaced in the [API Reference](https://python.langchain.com/api_reference/langchain/index.html).\n",
"* The class doc-string for the model contains a link to the model API if the model is powered by a service.\n",
"\n",
"Tests:\n",

View File

@@ -19,9 +19,9 @@
"\n",
"## Overview\n",
"\n",
"Many LLM applications involve retrieving information from external data sources using a `Retriever`. \n",
"Many LLM applications involve retrieving information from external data sources using a [Retriever](/docs/concepts/retrievers/). \n",
"\n",
"A retriever is responsible for retrieving a list of relevant `Documents` to a given user `query`.\n",
"A retriever is responsible for retrieving a list of relevant [Documents](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html) to a given user `query`.\n",
"\n",
"The retrieved documents are often formatted into prompts that are fed into an LLM, allowing the LLM to use the information in the to generate an appropriate response (e.g., answering a user question based on a knowledge base).\n",
"\n",

View File

@@ -7,7 +7,7 @@
"source": [
"# How to create tools\n",
"\n",
"When constructing an agent, you will need to provide it with a list of `Tool`s that it can use. Besides the actual function that is called, the Tool consists of several components:\n",
"When constructing an [agent](/docs/concepts/agents/), you will need to provide it with a list of [Tools](/docs/concepts/tools/) that it can use. Besides the actual function that is called, the Tool consists of several components:\n",
"\n",
"| Attribute | Type | Description |\n",
"|---------------|---------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n",
@@ -20,7 +20,7 @@
"\n",
"1. Functions;\n",
"2. LangChain [Runnables](/docs/concepts/runnables);\n",
"3. By sub-classing from [BaseTool](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.BaseTool.html) -- This is the most flexible method, it provides the largest degree of control, at the expense of more effort and code.\n",
"3. By sub-classing from [BaseTool](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.base.BaseTool.html) -- This is the most flexible method, it provides the largest degree of control, at the expense of more effort and code.\n",
"\n",
"Creating tools from functions may be sufficient for most use cases, and can be done via a simple [@tool decorator](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.tool.html#langchain_core.tools.tool). If more configuration is needed-- e.g., specification of both sync and async implementations-- one can also use the [StructuredTool.from_function](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.structured.StructuredTool.html#langchain_core.tools.structured.StructuredTool.from_function) class method.\n",
"\n",

View File

@@ -157,7 +157,7 @@
" temp_file_path = temp_file.name\n",
"\n",
"loader = CSVLoader(file_path=temp_file_path)\n",
"loader.load()\n",
"data = loader.load()\n",
"for record in data[:2]:\n",
" print(record)"
]

View File

@@ -26,7 +26,7 @@
"`Document` objects are often formatted into prompts that are fed into an LLM, allowing the LLM to use the information in the `Document` to generate a desired response (e.g., summarizing the document).\n",
"`Documents` can be either used immediately or indexed into a vectorstore for future retrieval and use.\n",
"\n",
"The main abstractions for Document Loading are:\n",
"The main abstractions for [Document Loading](/docs/concepts/document_loaders/) are:\n",
"\n",
"\n",
"| Component | Description |\n",

View File

@@ -9,7 +9,7 @@
"\n",
"[Portable Document Format (PDF)](https://en.wikipedia.org/wiki/PDF), standardized as ISO 32000, is a file format developed by Adobe in 1992 to present documents, including text formatting and images, in a manner independent of application software, hardware, and operating systems.\n",
"\n",
"This guide covers how to load `PDF` documents into the LangChain [Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html) format that we use downstream.\n",
"This guide covers how to [load](/docs/concepts/document_loaders/) `PDF` documents into the LangChain [Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html) format that we use downstream.\n",
"\n",
"Text in PDFs is typically represented via text boxes. They may also contain images. A PDF parser might do some combination of the following:\n",
"\n",
@@ -48,7 +48,7 @@
"\n",
"## Simple and fast text extraction\n",
"\n",
"If you are looking for a simple string representation of text that is embedded in a PDF, the method below is appropriate. It will return a list of [Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html) objects-- one per page-- containing a single string of the page's text in the Document's `page_content` attribute. It will not parse text in images or scanned PDF pages. Under the hood it uses the [pypydf](https://pypdf.readthedocs.io/en/stable/) Python library.\n",
"If you are looking for a simple string representation of text that is embedded in a PDF, the method below is appropriate. It will return a list of [Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html) objects-- one per page-- containing a single string of the page's text in the Document's `page_content` attribute. It will not parse text in images or scanned PDF pages. Under the hood it uses the [pypdf](https://pypdf.readthedocs.io/en/stable/) Python library.\n",
"\n",
"LangChain [document loaders](/docs/concepts/document_loaders) implement `lazy_load` and its async variant, `alazy_load`, which return iterators of `Document` objects. We will use these below."
]
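A minimal sketch of that pattern (the file path is a placeholder):

```python
from langchain_community.document_loaders import PyPDFLoader

loader = PyPDFLoader("example.pdf")  # placeholder path

pages = []
for page in loader.lazy_load():  # yields one Document per page
    pages.append(page)

print(pages[0].metadata)
print(pages[0].page_content[:100])
```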
@@ -250,7 +250,7 @@
"metadata": {},
"outputs": [
{
"name": "stdin",
"name": "stdout",
"output_type": "stream",
"text": [
"Unstructured API Key: ········\n"

View File

@@ -7,7 +7,7 @@
"source": [
"# How to load web pages\n",
"\n",
"This guide covers how to load web pages into the LangChain [Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html) format that we use downstream. Web pages contain text, images, and other multimedia elements, and are typically represented with HTML. They may include links to other pages or resources.\n",
"This guide covers how to [load](/docs/concepts/document_loaders/) web pages into the LangChain [Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html) format that we use downstream. Web pages contain text, images, and other multimedia elements, and are typically represented with HTML. They may include links to other pages or resources.\n",
"\n",
"LangChain integrates with a host of parsers that are appropriate for web pages. The right parser will depend on your needs. Below we demonstrate two possibilities:\n",
"\n",

View File

@@ -6,7 +6,7 @@
"source": [
"# How to combine results from multiple retrievers\n",
"\n",
"The [EnsembleRetriever](https://python.langchain.com/api_reference/langchain/retrievers/langchain.retrievers.ensemble.EnsembleRetriever.html) supports ensembling of results from multiple retrievers. It is initialized with a list of [BaseRetriever](https://python.langchain.com/api_reference/core/retrievers/langchain_core.retrievers.BaseRetriever.html) objects. EnsembleRetrievers rerank the results of the constituent retrievers based on the [Reciprocal Rank Fusion](https://plg.uwaterloo.ca/~gvcormac/cormacksigir09-rrf.pdf) algorithm.\n",
"The [EnsembleRetriever](https://python.langchain.com/api_reference/langchain/retrievers/langchain.retrievers.ensemble.EnsembleRetriever.html) supports ensembling of results from multiple [retrievers](/docs/concepts/retrievers/). It is initialized with a list of [BaseRetriever](https://python.langchain.com/api_reference/core/retrievers/langchain_core.retrievers.BaseRetriever.html) objects. EnsembleRetrievers rerank the results of the constituent retrievers based on the [Reciprocal Rank Fusion](https://plg.uwaterloo.ca/~gvcormac/cormacksigir09-rrf.pdf) algorithm.\n",
"\n",
"By leveraging the strengths of different algorithms, the `EnsembleRetriever` can achieve better performance than any single algorithm. \n",
"\n",

View File

@@ -17,7 +17,7 @@
"source": [
"# How to use example selectors\n",
"\n",
"If you have a large number of examples, you may need to select which ones to include in the prompt. The Example Selector is the class responsible for doing so.\n",
"If you have a large number of examples, you may need to select which ones to include in the prompt. The [Example Selector](/docs/concepts/example_selectors/) is the class responsible for doing so.\n",
"\n",
"The base interface is defined as below:\n",
"\n",
@@ -36,7 +36,7 @@
"\n",
"The only method it needs to define is a ``select_examples`` method. This takes in the input variables and then returns a list of examples. It is up to each specific implementation as to how those examples are selected.\n",
"\n",
"LangChain has a few different types of example selectors. For an overview of all these types, see the below table.\n",
"LangChain has a few different types of example selectors. For an overview of all these types, see the [below table](#example-selector-types).\n",
"\n",
"In this guide, we will walk through creating a custom example selector."
]

View File

@@ -23,7 +23,7 @@
"]} />\n",
"\n",
"\n",
"LangSmith datasets have built-in support for similarity search, making them a great tool for building and querying few-shot examples.\n",
"[LangSmith](https://docs.smith.langchain.com/) datasets have built-in support for similarity search, making them a great tool for building and querying few-shot examples.\n",
"\n",
"In this guide we'll see how to use an indexed LangSmith dataset as a few-shot example selector.\n",
"\n",

View File

@@ -7,7 +7,7 @@
"source": [
"# How to select examples by length\n",
"\n",
"This example selector selects which examples to use based on length. This is useful when you are worried about constructing a prompt that will go over the length of the context window. For longer inputs, it will select fewer examples to include, while for shorter inputs it will select more."
"This [example selector](/docs/concepts/example_selectors/) selects which examples to use based on length. This is useful when you are worried about constructing a prompt that will go over the length of the context window. For longer inputs, it will select fewer examples to include, while for shorter inputs it will select more."
]
},
{

View File

@@ -7,7 +7,7 @@
"source": [
"# How to select examples by maximal marginal relevance (MMR)\n",
"\n",
"The `MaxMarginalRelevanceExampleSelector` selects examples based on a combination of which examples are most similar to the inputs, while also optimizing for diversity. It does this by finding the examples with the embeddings that have the greatest cosine similarity with the inputs, and then iteratively adding them while penalizing them for closeness to already selected examples.\n"
"The `MaxMarginalRelevanceExampleSelector` selects [examples](/docs/concepts/example_selectors/) based on a combination of which examples are most similar to the inputs, while also optimizing for diversity. It does this by finding the examples with the embeddings that have the greatest cosine similarity with the inputs, and then iteratively adding them while penalizing them for closeness to already selected examples.\n"
]
},
{

View File

@@ -9,7 +9,7 @@
"\n",
"The `NGramOverlapExampleSelector` selects and orders examples based on which examples are most similar to the input, according to an ngram overlap score. The ngram overlap score is a float between 0.0 and 1.0, inclusive. \n",
"\n",
"The selector allows for a threshold score to be set. Examples with an ngram overlap score less than or equal to the threshold are excluded. The threshold is set to -1.0, by default, so will not exclude any examples, only reorder them. Setting the threshold to 0.0 will exclude examples that have no ngram overlaps with the input.\n"
"The [selector](/docs/concepts/example_selectors/) allows for a threshold score to be set. Examples with an ngram overlap score less than or equal to the threshold are excluded. The threshold is set to -1.0, by default, so will not exclude any examples, only reorder them. Setting the threshold to 0.0 will exclude examples that have no ngram overlaps with the input.\n"
]
},
{

View File

@@ -7,7 +7,7 @@
"source": [
"# How to select examples by similarity\n",
"\n",
"This object selects examples based on similarity to the inputs. It does this by finding the examples with the embeddings that have the greatest cosine similarity with the inputs.\n"
"This object selects [examples](/docs/concepts/example_selectors/) based on similarity to the inputs. It does this by finding the examples with the embeddings that have the greatest cosine similarity with the inputs.\n"
]
},
{

View File

@@ -9,7 +9,7 @@
"\n",
"The quality of extractions can often be improved by providing reference examples to the LLM.\n",
"\n",
"Data extraction attempts to generate structured representations of information found in text and other unstructured or semi-structured formats. [Tool-calling](/docs/concepts/tool_calling) LLM features are often used in this context. This guide demonstrates how to build few-shot examples of tool calls to help steer the behavior of extraction and similar applications.\n",
"Data extraction attempts to generate [structured representations](/docs/concepts/structured_outputs/) of information found in text and other unstructured or semi-structured formats. [Tool-calling](/docs/concepts/tool_calling) LLM features are often used in this context. This guide demonstrates how to build few-shot examples of tool calls to help steer the behavior of extraction and similar applications.\n",
"\n",
":::tip\n",
"While this guide focuses how to use examples with a tool calling model, this technique is generally applicable, and will work\n",

View File

@@ -7,7 +7,7 @@
"source": [
"# How to use prompting alone (no tool calling) to do extraction\n",
"\n",
"Tool calling features are not required for generating structured output from LLMs. LLMs that are able to follow prompt instructions well can be tasked with outputting information in a given format.\n",
"[Tool calling](/docs/concepts/tool_calling/) features are not required for generating structured output from LLMs. LLMs that are able to follow prompt instructions well can be tasked with outputting information in a given format.\n",
"\n",
"This approach relies on designing good prompts and then parsing the output of the LLMs to make them extract information well.\n",
"\n",

View File

@@ -27,7 +27,7 @@
"\n",
":::\n",
"\n",
"In this guide, we'll learn how to create a simple prompt template that provides the model with example inputs and outputs when generating. Providing the LLM with a few such examples is called few-shotting, and is a simple yet powerful way to guide generation and in some cases drastically improve model performance.\n",
"In this guide, we'll learn how to create a simple prompt template that provides the model with example inputs and outputs when generating. Providing the LLM with a few such examples is called [few-shotting](/docs/concepts/few_shot_prompting/), and is a simple yet powerful way to guide generation and in some cases drastically improve model performance.\n",
"\n",
"A few-shot prompt template can be constructed from either a set of examples, or from an [Example Selector](https://python.langchain.com/api_reference/core/example_selectors/langchain_core.example_selectors.base.BaseExampleSelector.html) class responsible for choosing a subset of examples from the defined set.\n",
"\n",

View File

@@ -27,7 +27,7 @@
"\n",
":::\n",
"\n",
"This guide covers how to prompt a chat model with example inputs and outputs. Providing the model with a few such examples is called few-shotting, and is a simple yet powerful way to guide generation and in some cases drastically improve model performance.\n",
"This guide covers how to prompt a chat model with example inputs and outputs. Providing the model with a few such examples is called [few-shotting](/docs/concepts/few_shot_prompting/), and is a simple yet powerful way to guide generation and in some cases drastically improve model performance.\n",
"\n",
"There does not appear to be solid consensus on how best to do few-shot prompting, and the optimal prompt compilation will likely vary by model. Because of this, we provide few-shot prompt templates like the [FewShotChatMessagePromptTemplate](https://python.langchain.com/api_reference/core/prompts/langchain_core.prompts.few_shot.FewShotChatMessagePromptTemplate.html?highlight=fewshot#langchain_core.prompts.few_shot.FewShotChatMessagePromptTemplate) as a flexible starting point, and you can modify or replace them as you see fit.\n",
"\n",

View File

@@ -7,7 +7,7 @@
"source": [
"# How to filter messages\n",
"\n",
"In more complex chains and agents we might track state with a list of messages. This list can start to accumulate messages from multiple different models, speakers, sub-chains, etc., and we may only want to pass subsets of this full list of messages to each model call in the chain/agent.\n",
"In more complex chains and agents we might track state with a list of [messages](/docs/concepts/messages/). This list can start to accumulate messages from multiple different models, speakers, sub-chains, etc., and we may only want to pass subsets of this full list of messages to each model call in the chain/agent.\n",
"\n",
"The `filter_messages` utility makes it easy to filter messages by type, id, or name.\n",
"\n",

View File

@@ -15,7 +15,7 @@
"source": [
"# How to construct knowledge graphs\n",
"\n",
"In this guide we'll go over the basic ways of constructing a knowledge graph based on unstructured text. The constructured graph can then be used as knowledge base in a RAG application.\n",
"In this guide we'll go over the basic ways of constructing a knowledge graph based on unstructured text. The constructured graph can then be used as knowledge base in a [RAG](/docs/concepts/rag/) application.\n",
"\n",
"## ⚠️ Security note ⚠️\n",
"\n",
@@ -44,6 +44,9 @@
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m24.0\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m24.3.1\u001b[0m\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n",
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
@@ -65,7 +68,7 @@
"metadata": {},
"outputs": [
{
"name": "stdin",
"name": "stdout",
"output_type": "stream",
"text": [
" ········\n"
@@ -105,7 +108,7 @@
"os.environ[\"NEO4J_USERNAME\"] = \"neo4j\"\n",
"os.environ[\"NEO4J_PASSWORD\"] = \"password\"\n",
"\n",
"graph = Neo4jGraph()"
"graph = Neo4jGraph(refresh_schema=False)"
]
},
{
@@ -149,8 +152,8 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Nodes:[Node(id='Marie Curie', type='Person'), Node(id='Pierre Curie', type='Person'), Node(id='University Of Paris', type='Organization')]\n",
"Relationships:[Relationship(source=Node(id='Marie Curie', type='Person'), target=Node(id='Pierre Curie', type='Person'), type='MARRIED'), Relationship(source=Node(id='Marie Curie', type='Person'), target=Node(id='University Of Paris', type='Organization'), type='PROFESSOR')]\n"
"Nodes:[Node(id='Marie Curie', type='Person', properties={}), Node(id='Pierre Curie', type='Person', properties={}), Node(id='University Of Paris', type='Organization', properties={})]\n",
"Relationships:[Relationship(source=Node(id='Marie Curie', type='Person', properties={}), target=Node(id='Pierre Curie', type='Person', properties={}), type='MARRIED', properties={}), Relationship(source=Node(id='Marie Curie', type='Person', properties={}), target=Node(id='University Of Paris', type='Organization', properties={}), type='PROFESSOR', properties={})]\n"
]
}
],
@@ -191,8 +194,8 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Nodes:[Node(id='Marie Curie', type='Person'), Node(id='Pierre Curie', type='Person'), Node(id='University Of Paris', type='Organization')]\n",
"Relationships:[Relationship(source=Node(id='Marie Curie', type='Person'), target=Node(id='Pierre Curie', type='Person'), type='SPOUSE'), Relationship(source=Node(id='Marie Curie', type='Person'), target=Node(id='University Of Paris', type='Organization'), type='WORKED_AT')]\n"
"Nodes:[Node(id='Marie Curie', type='Person', properties={}), Node(id='Pierre Curie', type='Person', properties={}), Node(id='University Of Paris', type='Organization', properties={})]\n",
"Relationships:[Relationship(source=Node(id='Marie Curie', type='Person', properties={}), target=Node(id='Pierre Curie', type='Person', properties={}), type='SPOUSE', properties={}), Relationship(source=Node(id='Marie Curie', type='Person', properties={}), target=Node(id='University Of Paris', type='Organization', properties={}), type='WORKED_AT', properties={})]\n"
]
}
],
@@ -209,6 +212,44 @@
"print(f\"Relationships:{graph_documents_filtered[0].relationships}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To define the graph schema more precisely, consider using a three-tuple approach for relationships. In this approach, each tuple consists of three elements: the source node, the relationship type, and the target node."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Nodes:[Node(id='Marie Curie', type='Person', properties={}), Node(id='Pierre Curie', type='Person', properties={}), Node(id='University Of Paris', type='Organization', properties={})]\n",
"Relationships:[Relationship(source=Node(id='Marie Curie', type='Person', properties={}), target=Node(id='Pierre Curie', type='Person', properties={}), type='SPOUSE', properties={}), Relationship(source=Node(id='Marie Curie', type='Person', properties={}), target=Node(id='University Of Paris', type='Organization', properties={}), type='WORKED_AT', properties={})]\n"
]
}
],
"source": [
"allowed_relationships = [\n",
" (\"Person\", \"SPOUSE\", \"Person\"),\n",
" (\"Person\", \"NATIONALITY\", \"Country\"),\n",
" (\"Person\", \"WORKED_AT\", \"Organization\"),\n",
"]\n",
"\n",
"llm_transformer_tuple = LLMGraphTransformer(\n",
" llm=llm,\n",
" allowed_nodes=[\"Person\", \"Country\", \"Organization\"],\n",
" allowed_relationships=allowed_relationships,\n",
")\n",
"llm_transformer_tuple = llm_transformer_filtered.convert_to_graph_documents(documents)\n",
"print(f\"Nodes:{graph_documents_filtered[0].nodes}\")\n",
"print(f\"Relationships:{graph_documents_filtered[0].relationships}\")"
]
},
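{
"cell_type": "markdown",
"metadata": {},
"source": [
"Compared with a flat list of relationship names, the tuple form also constrains which node types a relationship may connect, and in which direction, so the extractor cannot, for example, attach `WORKED_AT` from an `Organization` to a `Person`."
]
},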
{
"cell_type": "markdown",
"metadata": {},
@@ -229,15 +270,15 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Nodes:[Node(id='Marie Curie', type='Person', properties={'born_year': '1867'}), Node(id='Pierre Curie', type='Person'), Node(id='University Of Paris', type='Organization')]\n",
"Relationships:[Relationship(source=Node(id='Marie Curie', type='Person'), target=Node(id='Pierre Curie', type='Person'), type='SPOUSE'), Relationship(source=Node(id='Marie Curie', type='Person'), target=Node(id='University Of Paris', type='Organization'), type='WORKED_AT')]\n"
"Nodes:[Node(id='Marie Curie', type='Person', properties={'born_year': '1867'}), Node(id='Pierre Curie', type='Person', properties={}), Node(id='University Of Paris', type='Organization', properties={}), Node(id='Poland', type='Country', properties={}), Node(id='France', type='Country', properties={})]\n",
"Relationships:[Relationship(source=Node(id='Marie Curie', type='Person', properties={}), target=Node(id='Poland', type='Country', properties={}), type='NATIONALITY', properties={}), Relationship(source=Node(id='Marie Curie', type='Person', properties={}), target=Node(id='France', type='Country', properties={}), type='NATIONALITY', properties={}), Relationship(source=Node(id='Marie Curie', type='Person', properties={}), target=Node(id='Pierre Curie', type='Person', properties={}), type='SPOUSE', properties={}), Relationship(source=Node(id='Marie Curie', type='Person', properties={}), target=Node(id='University Of Paris', type='Organization', properties={}), type='WORKED_AT', properties={})]\n"
]
}
],
@@ -264,12 +305,71 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"graph.add_graph_documents(graph_documents_props)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Most graph databases support indexes to optimize data import and retrieval. Since we might not know all the node labels in advance, we can handle this by adding a secondary base label to each node using the `baseEntityLabel` parameter."
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"graph.add_graph_documents(graph_documents, baseEntityLabel=True)"
]
},
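{
"cell_type": "markdown",
"metadata": {},
"source": [
"With `baseEntityLabel=True`, every extracted node additionally carries the generic `__Entity__` label, so a single index or constraint can cover all entities regardless of their specific type. As a minimal sketch (assuming `graph` is a `Neo4jGraph`, whose `query` method runs arbitrary Cypher), we can list the indexes the database now holds:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# List current indexes; an index on the `__Entity__` base label\n",
"# should appear here if one was created during the import.\n",
"graph.query(\"SHOW INDEXES\")"
]
},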
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Results will look like:\n",
"\n",
"![graph_construction3.png](../../static/img/graph_construction3.png)\n",
"\n",
"The final option is to also import the source documents for the extracted nodes and relationships. This approach lets us track which documents each entity appeared in."
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"graph.add_graph_documents(graph_documents, include_source=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Graph will have the following structure:\n",
"\n",
"![graph_construction4.png](../../static/img/graph_construction4.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this visualization, the source document is highlighted in blue, with all entities extracted from it connected by `MENTIONS` relationships."
]
},
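{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check, we can query those links directly. This is a minimal sketch assuming `graph` is a `Neo4jGraph` and that source documents were imported under the `Document` label, as in the Neo4j integration:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Count how many entities each imported source document mentions\n",
"graph.query(\n",
"    \"MATCH (d:Document)-[:MENTIONS]->(e) \"\n",
"    \"RETURN d.id AS document, count(e) AS entities\"\n",
")"
]
},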
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@@ -288,7 +388,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
"version": "3.11.5"
}
},
"nbformat": 4,

View File

@@ -9,7 +9,7 @@
"source": [
"# Hybrid Search\n",
"\n",
"The standard search in LangChain is done by vector similarity. However, a number of vectorstores implementations (Astra DB, ElasticSearch, Neo4J, AzureSearch, Qdrant...) also support more advanced search combining vector similarity search and other search techniques (full-text, BM25, and so on). This is generally referred to as \"Hybrid\" search.\n",
"The standard search in LangChain is done by vector similarity. However, a number of [vector store](/docs/integrations/vectorstores/) implementations (Astra DB, ElasticSearch, Neo4J, AzureSearch, Qdrant...) also support more advanced search combining vector similarity search and other search techniques (full-text, BM25, and so on). This is generally referred to as \"Hybrid\" search.\n",
"\n",
"**Step 1: Make sure the vectorstore you are using supports hybrid search**\n",
"\n",

View File

@@ -74,6 +74,7 @@ These are the core building blocks you can use when building applications.
### Chat models
[Chat Models](/docs/concepts/chat_models) are newer forms of language models that take messages in and output a message.
See [supported integrations](/docs/integrations/chat/) for details on getting started with chat models from a specific provider.
- [How to: do function/tool calling](/docs/how_to/tool_calling)
- [How to: get models to return structured output](/docs/how_to/structured_output)
@@ -114,6 +115,7 @@ What LangChain calls [LLMs](/docs/concepts/text_llms) are older forms of languag
[Output Parsers](/docs/concepts/output_parsers) are responsible for taking the output of an LLM and parsing into more structured format.
- [How to: parse text from message objects](/docs/how_to/output_parser_string)
- [How to: use output parsers to parse an LLM response into structured format](/docs/how_to/output_parser_structured)
- [How to: parse JSON output](/docs/how_to/output_parser_json)
- [How to: parse XML output](/docs/how_to/output_parser_xml)
@@ -153,6 +155,7 @@ What LangChain calls [LLMs](/docs/concepts/text_llms) are older forms of languag
### Embedding models
[Embedding Models](/docs/concepts/embedding_models) take a piece of text and create a numerical representation of it.
See [supported integrations](/docs/integrations/text_embedding/) for details on getting started with embedding models from a specific provider.
- [How to: embed text data](/docs/how_to/embed_text)
- [How to: cache embedding results](/docs/how_to/caching_embeddings)
@@ -160,6 +163,7 @@ What LangChain calls [LLMs](/docs/concepts/text_llms) are older forms of languag
### Vector stores
[Vector stores](/docs/concepts/vectorstores) are databases that can efficiently store and retrieve embeddings.
See [supported integrations](/docs/integrations/vectorstores/) for details on getting started with vector stores from a specific provider.
- [How to: use a vector store to retrieve data](/docs/how_to/vectorstores)

View File

@@ -9,7 +9,7 @@
"\n",
"Here, we will look at a basic indexing workflow using the LangChain indexing API. \n",
"\n",
"The indexing API lets you load and keep in sync documents from any source into a vector store. Specifically, it helps:\n",
"The indexing API lets you load and keep in sync documents from any source into a [vector store](/docs/concepts/vectorstores/). Specifically, it helps:\n",
"\n",
"* Avoid writing duplicated content into the vector store\n",
"* Avoid re-writing unchanged content\n",

View File

@@ -7,7 +7,7 @@
"source": [
"# LangChain Expression Language Cheatsheet\n",
"\n",
"This is a quick reference for all the most important LCEL primitives. For more advanced usage see the [LCEL how-to guides](/docs/how_to/#langchain-expression-language-lcel) and the [full API reference](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.base.Runnable.html).\n",
"This is a quick reference for all the most important [LCEL](/docs/concepts/lcel/) primitives. For more advanced usage see the [LCEL how-to guides](/docs/how_to/#langchain-expression-language-lcel) and the [full API reference](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.base.Runnable.html).\n",
"\n",
"### Invoke a runnable\n",
"#### [Runnable.invoke()](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.invoke) / [Runnable.ainvoke()](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.ainvoke)"

View File

@@ -7,7 +7,7 @@
"source": [
"# How to cache LLM responses\n",
"\n",
"LangChain provides an optional caching layer for LLMs. This is useful for two reasons:\n",
"LangChain provides an optional [caching](/docs/concepts/chat_models/#caching) layer for LLMs. This is useful for two reasons:\n",
"\n",
"It can save you money by reducing the number of API calls you make to the LLM provider, if you're often requesting the same completion multiple times.\n",
"It can speed up your application by reducing the number of API calls you make to the LLM provider.\n"

View File

@@ -7,7 +7,7 @@
"source": [
"# How to track token usage for LLMs\n",
"\n",
"Tracking token usage to calculate cost is an important part of putting your app in production. This guide goes over how to obtain this information from your LangChain model calls.\n",
"Tracking [token](/docs/concepts/tokens/) usage to calculate cost is an important part of putting your app in production. This guide goes over how to obtain this information from your LangChain model calls.\n",
"\n",
":::info Prerequisites\n",
"\n",

View File

@@ -11,10 +11,11 @@
"\n",
"This guide assumes familiarity with the following concepts:\n",
"- [Chat models](/docs/concepts/chat_models)\n",
"- [Tokens](/docs/concepts/tokens)\n",
"\n",
":::\n",
"\n",
"Certain chat models can be configured to return token-level log probabilities representing the likelihood of a given token. This guide walks through how to get this information in LangChain."
"Certain [chat models](/docs/concepts/chat_models/) can be configured to return token-level log probabilities representing the likelihood of a given token. This guide walks through how to get this information in LangChain."
]
},
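{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example (a minimal sketch, assuming `langchain-openai` is installed and `OPENAI_API_KEY` is set):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_openai import ChatOpenAI\n",
"\n",
"# Ask the provider to return token-level log probabilities with the response\n",
"llm = ChatOpenAI(model=\"gpt-4o-mini\").bind(logprobs=True)\n",
"msg = llm.invoke(\"how are you today\")\n",
"msg.response_metadata[\"logprobs\"][\"content\"][:2]"
]
},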
{

View File

@@ -7,7 +7,7 @@
"source": [
"# How to merge consecutive messages of the same type\n",
"\n",
"Certain models do not support passing in consecutive messages of the same type (a.k.a. \"runs\" of the same message type).\n",
"Certain models do not support passing in consecutive [messages](/docs/concepts/messages/) of the same type (a.k.a. \"runs\" of the same message type).\n",
"\n",
"The `merge_message_runs` utility makes it easy to merge consecutive messages of the same type.\n",
"\n",

View File

@@ -7,7 +7,7 @@
"source": [
"# How to retrieve using multiple vectors per document\n",
"\n",
"It can often be useful to store multiple vectors per document. There are multiple use cases where this is beneficial. For example, we can embed multiple chunks of a document and associate those embeddings with the parent document, allowing retriever hits on the chunks to return the larger document.\n",
"It can often be useful to store multiple [vectors](/docs/concepts/vectorstores/) per document. There are multiple use cases where this is beneficial. For example, we can [embed](/docs/concepts/embedding_models/) multiple chunks of a document and associate those embeddings with the parent document, allowing [retriever](/docs/concepts/retrievers/) hits on the chunks to return the larger document.\n",
"\n",
"LangChain implements a base [MultiVectorRetriever](https://python.langchain.com/api_reference/langchain/retrievers/langchain.retrievers.multi_vector.MultiVectorRetriever.html), which simplifies this process. Much of the complexity lies in how to create the multiple vectors per document. This notebook covers some of the common ways to create those vectors and use the `MultiVectorRetriever`.\n",
"\n",
@@ -207,7 +207,7 @@
"id": "cdef8339-f9fa-4b3b-955f-ad9dbdf2734f",
"metadata": {},
"source": [
"The default search type the retriever performs on the vector database is a similarity search. LangChain vector stores also support searching via [Max Marginal Relevance](https://python.langchain.com/api_reference/core/vectorstores/langchain_core.vectorstores.VectorStore.html#langchain_core.vectorstores.VectorStore.max_marginal_relevance_search). This can be controlled via the `search_type` parameter of the retriever:"
"The default search type the retriever performs on the vector database is a similarity search. LangChain vector stores also support searching via [Max Marginal Relevance](https://python.langchain.com/api_reference/core/vectorstores/langchain_core.vectorstores.base.VectorStore.html#langchain_core.vectorstores.base.VectorStore.max_marginal_relevance_search). This can be controlled via the `search_type` parameter of the retriever:"
]
},
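{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example (a hedged sketch, assuming `retriever` is the `MultiVectorRetriever` constructed above):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.retrievers.multi_vector import SearchType\n",
"\n",
"# Switch the underlying vector search from similarity to MMR\n",
"retriever.search_type = SearchType.mmr"
]
},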
{

View File

@@ -7,11 +7,11 @@
"source": [
"# How to pass multimodal data directly to models\n",
"\n",
"Here we demonstrate how to pass multimodal input directly to models. \n",
"Here we demonstrate how to pass [multimodal](/docs/concepts/multimodality/) input directly to models. \n",
"We currently expect all input to be passed in the same format as [OpenAI expects](https://platform.openai.com/docs/guides/vision).\n",
"For other model providers that support multimodal input, we have added logic inside the class to convert to the expected format.\n",
"\n",
"In this example we will ask a model to describe an image."
"In this example we will ask a [model](/docs/concepts/chat_models/#multimodality) to describe an image."
]
},
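{
"cell_type": "markdown",
"metadata": {},
"source": [
"The expected shape is a single message whose content mixes text and image blocks. A minimal sketch (assuming `langchain-openai` is installed, `OPENAI_API_KEY` is set, and the placeholder `image_url` points at a publicly reachable image):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.messages import HumanMessage\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"image_url = \"https://example.com/weather.png\"  # hypothetical image URL\n",
"message = HumanMessage(\n",
"    content=[\n",
"        {\"type\": \"text\", \"text\": \"Describe the weather in this image.\"},\n",
"        {\"type\": \"image_url\", \"image_url\": {\"url\": image_url}},\n",
"    ]\n",
")\n",
"ChatOpenAI(model=\"gpt-4o-mini\").invoke([message])"
]
},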
{

View File

@@ -7,9 +7,9 @@
"source": [
"# How to use multimodal prompts\n",
"\n",
"Here we demonstrate how to use prompt templates to format multimodal inputs to models. \n",
"Here we demonstrate how to use prompt templates to format [multimodal](/docs/concepts/multimodality/) inputs to models. \n",
"\n",
"In this example we will ask a model to describe an image."
"In this example we will ask a [model](/docs/concepts/chat_models/#multimodality) to describe an image."
]
},
{

Some files were not shown because too many files have changed in this diff.