Merge branch 'cc/anthropic_313' into cc/tool_attr

This commit is contained in:
Chester Curme 2024-11-09 14:33:57 -05:00
commit 077199c5de
933 changed files with 5333 additions and 37434 deletions

.github/CODEOWNERS

@@ -0,0 +1,2 @@
/.github/ @efriis @baskaryan @ccurme
/libs/packages.yml @efriis


@@ -1,7 +1,7 @@
Thank you for contributing to LangChain!
- [ ] **PR title**: "package: description"
- Where "package" is whichever of langchain, community, core, etc. is being modified. Use "docs: ..." for purely docs changes, "templates: ..." for template changes, "infra: ..." for CI changes.
- Where "package" is whichever of langchain, community, core, etc. is being modified. Use "docs: ..." for purely docs changes, "infra: ..." for CI changes.
- Example: "community: add foobar LLM"


@@ -306,7 +306,7 @@ if __name__ == "__main__":
f"Unknown lib: {file}. check_diff.py likely needs "
"an update for this new library!"
)
elif any(file.startswith(p) for p in ["docs/", "templates/", "cookbook/"]):
elif any(file.startswith(p) for p in ["docs/", "cookbook/"]):
if file.startswith("docs/"):
docs_edited = True
dirs_to_run["lint"].add(".")


@@ -41,12 +41,6 @@ jobs:
shell: bash
run: poetry run pip install "boto3<2" "google-cloud-aiplatform<2"
- name: 'Authenticate to Google Cloud'
id: 'auth'
uses: google-github-actions/auth@v2
with:
credentials_json: '${{ secrets.GOOGLE_CREDENTIALS }}'
- name: Run integration tests
shell: bash
env:


@@ -103,15 +103,20 @@ jobs:
if [ -z "$PREV_TAG" ]; then
REGEX="^$PKG_NAME==\\d+\\.\\d+\\.\\d+\$"
echo $REGEX
PREV_TAG=$(git tag --sort=-creatordate | grep -P $REGEX || true | head -1)
PREV_TAG=$(git tag --sort=-creatordate | (grep -P $REGEX || true) | head -1)
fi
# if PREV_TAG is empty, let it be empty
if [ -z "$PREV_TAG" ]; then
echo "No previous tag found - first release"
else
# confirm prev-tag actually exists in git repo with git tag
GIT_TAG_RESULT=$(git tag -l "$PREV_TAG")
if [ -z "$GIT_TAG_RESULT" ]; then
echo "Previous tag $PREV_TAG not found in git repo"
exit 1
fi
fi
TAG="${PKG_NAME}==${VERSION}"
@@ -262,12 +267,6 @@ jobs:
make tests
working-directory: ${{ inputs.working-directory }}
- name: 'Authenticate to Google Cloud'
id: 'auth'
uses: google-github-actions/auth@v2
with:
credentials_json: '${{ secrets.GOOGLE_CREDENTIALS }}'
- name: Import integration test dependencies
run: poetry install --with test,test_integration
working-directory: ${{ inputs.working-directory }}
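The `PREV_TAG` hunk above fixes a shell operator-precedence bug: `a | grep PAT || true | head -1` parses as `(a | grep PAT) || (true | head -1)`, so `head -1` never trims grep's output and every matching tag is kept. A minimal sketch of the difference, using a hypothetical `list` helper in place of `git tag --sort=-creatordate` (newest first):

```shell
#!/bin/sh
# Hypothetical stand-in for `git tag --sort=-creatordate` (newest tag first).
list() { printf 'pkg==2.0.0\npkg==1.0.0\n'; }

# Ungrouped: parses as `(list | grep ...) || (true | head -1)`,
# so head -1 is on the wrong side of || and all matches are kept.
buggy=$(list | grep '^pkg==' || true | head -1)

# Grouped: `|| true` only guards grep's exit status inside the subshell,
# and head -1 still filters the pipeline down to the newest tag.
fixed=$(list | (grep '^pkg==' || true) | head -1)

echo "$fixed"    # pkg==2.0.0
```

The `|| true` guard still matters in the grouped form: when no previous tag matches, grep exits non-zero, which would otherwise abort the step.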


@@ -66,12 +66,12 @@ spell_fix:
## lint: Run linting on the project.
lint lint_package lint_tests:
poetry run ruff check docs templates cookbook
poetry run ruff format docs templates cookbook --diff
poetry run ruff check --select I docs templates cookbook
git grep 'from langchain import' docs/docs templates cookbook | grep -vE 'from langchain import (hub)' && exit 1 || exit 0
poetry run ruff check docs cookbook
poetry run ruff format docs cookbook --diff
poetry run ruff check --select I docs cookbook
git grep 'from langchain import' docs/docs cookbook | grep -vE 'from langchain import (hub)' && exit 1 || exit 0
## format: Format the project files.
format format_diff:
poetry run ruff format docs templates cookbook
poetry run ruff check --select I --fix docs templates cookbook
poetry run ruff format docs cookbook
poetry run ruff check --select I --fix docs cookbook
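The `git grep ... && exit 1 || exit 0` line in the lint target inverts grep's exit status: the target fails only when a `from langchain import` line survives the allow-list filter. A small sketch of the idiom against hypothetical input strings (not the repo's files, and using a function return in place of the Makefile's `exit`):

```shell
#!/bin/sh
# Returns 1 (lint failure) if the input contains a `from langchain import`
# other than the allow-listed `from langchain import hub`.
check() {
  printf '%s\n' "$1" \
    | grep 'from langchain import' \
    | grep -vE 'from langchain import (hub)' \
    && return 1 || return 0
}

check 'from langchain import hub'    # allow-listed: returns 0
check 'import os'                    # no langchain import: returns 0
check 'from langchain import llms'   # forbidden: prints the line, returns 1
```

In the Makefile, the non-zero exit fails the recipe, so CI goes red exactly when a forbidden import is found.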


@@ -59,7 +59,8 @@ For these applications, LangChain simplifies the entire application lifecycle:
- **[LangGraph Cloud](https://langchain-ai.github.io/langgraph/cloud/)**: Turn your LangGraph applications into production-ready APIs and Assistants.
![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](docs/static/svg/langchain_stack_062024.svg "LangChain Architecture Overview")
![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](docs/static/svg/langchain_stack_112024.svg#gh-light-mode-only "LangChain Architecture Overview")
![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](docs/static/svg/langchain_stack_112024_dark.svg#gh-dark-mode-only "LangChain Architecture Overview")
## 🧱 What can you build with LangChain?


@@ -63,3 +63,4 @@ Notebook | Description
[oracleai_demo.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/oracleai_demo.ipynb) | This guide outlines how to utilize Oracle AI Vector Search alongside Langchain for an end-to-end RAG pipeline, providing step-by-step examples. The process includes loading documents from various sources using OracleDocLoader, summarizing them either within or outside the database with OracleSummary, and generating embeddings similarly through OracleEmbeddings. It also covers chunking documents according to specific requirements using Advanced Oracle Capabilities from OracleTextSplitter, and finally, storing and indexing these documents in a Vector Store for querying with OracleVS.
[rag-locally-on-intel-cpu.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/rag-locally-on-intel-cpu.ipynb) | Perform retrieval-augmented generation (RAG) on locally downloaded open-source models using LangChain and open-source tools, executed on an Intel Xeon CPU. Includes an example of applying RAG to the Llama 2 model so it can answer queries about Intel's Q1 2024 earnings release.
[visual_RAG_vdms.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/visual_RAG_vdms.ipynb) | Performs Visual Retrieval-Augmented-Generation (RAG) using videos and scene descriptions generated by open source models.
[contextual_rag.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/contextual_rag.ipynb) | Performs contextual retrieval-augmented generation (RAG) by prepending chunk-specific explanatory context to each chunk before embedding.

File diff suppressed because it is too large


@@ -1 +1 @@
[single-line encoded payload updated; old and new base64/deflate strings omitted as not human-readable]


@@ -1 +1 @@
[single-line encoded payload updated; old and new base64/deflate strings omitted as not human-readable]


@@ -1 +1 @@
[single-line encoded payload updated; old and new base64/deflate strings omitted as not human-readable]


@@ -1 +1 @@
[single-line encoded payload updated; old and new base64/deflate strings omitted as not human-readable]


@@ -1 +1 @@
[single-line encoded payload updated; old and new base64/deflate strings omitted as not human-readable]
LMzkvMruamdavbsmU1+9zysthpgJQVf6XLPZMZf2PX29PZLoSFIy3aP5o31BXoljR5T355ZAo6BmwCFGsyA7XJQfzYnyzBuBYP0M52f83JO9DL3uqYqmQEbdZ+7GC2vLIN0HpyvYJIXhbjwcdPfD/3Shhok1oCw1PgkTjEajh86slIfiQSUcCD85VcvChd5wAc06OF0hBzEQ1KxdvXl1RpGy41fAoIuXomJACAhiJIijIT4QlARZ4jAX9GPZHxWlJ2hHFAGGbqdBTJux4OiAi0omO16iCb20zVTwXBkfglCXQKMWVUfCMSdeTWgQ1hJkwFlNBGl3VS1TBTduzMRcBmaHqztWVjbVV/2ynSmkEtNseP8thnUCfVqWh2LYhJ3JjogqcSTolyYeAqzWyo7s/oiEcTwa4AU5FInIYphZDp0ojzZBvEHabIcFuPpYaTG7L8lX+MqDQd63BI6jikgI9sn9B3LTkNf9XzyrbOFtRTPc36ytbevveNtfvPHV3Xu+Xn7NzPcH7rq8deHoaw/0ja15+M2Gx25HRyL3Hz66d/07D7UcvmH3z5ctWHRV2fxV+/fx+1IfywvRlszWqi0dtZccOXDi2E+/3Lfjht7t37w18he5e/y3p15ol//+8h9eP/ZRxT/6bu587KYnzn/01bbRcyKj0jn8rKM3rb1k5cubx9XyQ/dtWzYaL96xrfvNH35+VcXae+/+cO7NA7/O3rf4d8++X8yctaX47XtI18BW4baq4Ikdvmh7Y+xZ4YEtxaGGL4qG/miefMQ++M6NK+rPfe0e5juXdd4Yayiu/Vly66OJ977S5906p/LSj5/+9NUn/vujY+mxU6cO3r9rx48r/vrp8W1dV469jivrL+voX8Ycuv2WC+Qiof7TOx/tYWN3Xvj7TS0n2SuKnz86WDPvtS8/+dfS55+Z43+q6Jbvya9cWHx0SfH2s+ofmfnveS+edxFb/vhie1v/g5+MzHk3NuuBv51cdKe243jxtffOfa9lVbZu+N3tFZ3209sWF6+dwzUv8KP7Pn+qs3b+2AnzpTv+OfNBZ2PRaH/ptRd0o8Ul33/5/qUvzf/sRNHd3QNr/hNfYDx/5eMXccu0zOVvle+8hnwcK//z++qHe0JXv3l75LPzBy/YyF+6tu/g+K2z+5dUbFoZvXr7F5Jx8UdWxYLNA4cuPtJ3/oX4osNzD4wXdR5ed+DGX7zx2TNvrHt9tPx44NTCL2dv+O7+zfcmZ3GJ45eEldlfN3+wc/9Xdx3ZtPdYx7nR+oMrnku+cPLA3HnPdQE/vvlm1ozzXjn7gwNnz5jxPzpsi1M=

File diff suppressed because one or more lines are too long

View File

@ -1 +1 @@

View File

@ -1 +1 @@

View File

@ -1 +1 @@

View File

@ -8,8 +8,8 @@ LangChain is a framework that consists of a number of packages.
<ThemedImage
alt="Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers."
sources={{
light: useBaseUrl('/svg/langchain_stack_062024.svg'),
dark: useBaseUrl('/svg/langchain_stack_062024_dark.svg'),
light: useBaseUrl('/svg/langchain_stack_112024.svg'),
dark: useBaseUrl('/svg/langchain_stack_112024_dark.svg'),
}}
title="LangChain Framework Overview"
style={{ width: "100%" }}

View File

@ -44,7 +44,7 @@ Models that do **not** include the prefix "Chat" in their name or include "LLM"
## Interface
LangChain chat models implement the [BaseChatModel](https://python.langchain.com/api_reference/core/language_models/langchain_core.language_models.chat_models.BaseChatModel.html) interface. Because [BaseChatModel] also implements the [Runnable Interface](/docs/concepts/runnables), chat models support a [standard streaming interface](/docs/concepts/streaming), [async programming](/docs/concepts/async), optimized [batching](/docs/concepts/runnables/#optimized-parallel-execution-batch), and more. Please see the [Runnable Interface](/docs/concepts/runnables) for more details.
LangChain chat models implement the [BaseChatModel](https://python.langchain.com/api_reference/core/language_models/langchain_core.language_models.chat_models.BaseChatModel.html) interface. Because `BaseChatModel` also implements the [Runnable Interface](/docs/concepts/runnables), chat models support a [standard streaming interface](/docs/concepts/streaming), [async programming](/docs/concepts/async), optimized [batching](/docs/concepts/runnables/#optimized-parallel-execution-batch), and more. Please see the [Runnable Interface](/docs/concepts/runnables) for more details.
Many of the key methods of chat models operate on [messages](/docs/concepts/messages) as input and return messages as output.
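The Runnable behavior described in this doc hunk (a single `invoke` plus `batch` and `stream` layered on top) can be sketched in plain Python. This is an illustration of the interface concept only, not the actual `langchain_core` implementation; the class and method bodies below are hypothetical stand-ins.

```python
# Toy sketch of a Runnable-style chat model: one invoke(), with batch()
# and stream() built on top of it. Not the real BaseChatModel.
from typing import Iterator, List


class EchoChatModel:
    """Hypothetical stand-in for a chat model implementing invoke/batch/stream."""

    def invoke(self, prompt: str) -> str:
        # A real chat model would call an LLM here; we just echo.
        return f"echo: {prompt}"

    def batch(self, prompts: List[str]) -> List[str]:
        # Naive sequential batching; real implementations may parallelize.
        return [self.invoke(p) for p in prompts]

    def stream(self, prompt: str) -> Iterator[str]:
        # Yield the response piece by piece (whitespace-split here).
        for token in self.invoke(prompt).split():
            yield token


model = EchoChatModel()
print(model.invoke("hi"))        # echo: hi
print(model.batch(["a", "b"]))   # ['echo: a', 'echo: b']
print(list(model.stream("hi")))  # ['echo:', 'hi']
```

The point of the shared interface is that callers can swap models without changing how they invoke, batch, or stream them.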

View File

@ -20,7 +20,7 @@
"\n",
"1. Functions;\n",
"2. LangChain [Runnables](/docs/concepts/runnables);\n",
"3. By sub-classing from [BaseTool](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.BaseTool.html) -- This is the most flexible method, it provides the largest degree of control, at the expense of more effort and code.\n",
"3. By sub-classing from [BaseTool](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.base.BaseTool.html) -- This is the most flexible method, it provides the largest degree of control, at the expense of more effort and code.\n",
"\n",
"Creating tools from functions may be sufficient for most use cases, and can be done via a simple [@tool decorator](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.tool.html#langchain_core.tools.tool). If more configuration is needed-- e.g., specification of both sync and async implementations-- one can also use the [StructuredTool.from_function](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.structured.StructuredTool.html#langchain_core.tools.structured.StructuredTool.from_function) class method.\n",
"\n",

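The decorator route mentioned in this notebook hunk can be sketched conceptually: wrap a plain function with name and description metadata so an agent can discover it. The `tool` below is a hypothetical illustration, not `langchain_core.tools.tool`.

```python
# Hedged sketch of what a @tool-style decorator does conceptually:
# attach discoverable metadata to a plain function. Illustration only.
import functools


def tool(fn):
    """Wrap fn, pulling tool metadata from the function itself."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        return fn(*args, **kwargs)
    wrapper.name = fn.__name__
    wrapper.description = (fn.__doc__ or "").strip()
    return wrapper


@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b


print(multiply.name, "->", multiply(6, 7))  # multiply -> 42
```

Subclassing a base tool class buys more control (argument schemas, sync/async variants) at the cost of the boilerplate the decorator hides.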
View File

@ -48,7 +48,7 @@
"\n",
"## Simple and fast text extraction\n",
"\n",
"If you are looking for a simple string representation of text that is embedded in a PDF, the method below is appropriate. It will return a list of [Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html) objects-- one per page-- containing a single string of the page's text in the Document's `page_content` attribute. It will not parse text in images or scanned PDF pages. Under the hood it uses the [pypydf](https://pypdf.readthedocs.io/en/stable/) Python library.\n",
"If you are looking for a simple string representation of text that is embedded in a PDF, the method below is appropriate. It will return a list of [Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html) objects-- one per page-- containing a single string of the page's text in the Document's `page_content` attribute. It will not parse text in images or scanned PDF pages. Under the hood it uses the [pypdf](https://pypdf.readthedocs.io/en/stable/) Python library.\n",
"\n",
"LangChain [document loaders](/docs/concepts/document_loaders) implement `lazy_load` and its async variant, `alazy_load`, which return iterators of `Document` objects. We will use these below."
]

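The `lazy_load` idea in the hunk above (an iterator of one `Document` per page, produced on demand rather than materialized up front) can be sketched with a generator. `Document` and `FakePDFLoader` here are tiny hypothetical stand-ins, not the `langchain_core` classes.

```python
# Minimal sketch of lazy document loading: yield one Document per page.
from dataclasses import dataclass, field
from typing import Dict, Iterator, List


@dataclass
class Document:
    page_content: str
    metadata: Dict = field(default_factory=dict)


class FakePDFLoader:
    """Hypothetical loader yielding one Document per 'page' of text."""

    def __init__(self, pages: List[str]):
        self.pages = pages

    def lazy_load(self) -> Iterator[Document]:
        # Documents are produced on demand, keeping memory use flat.
        for i, text in enumerate(self.pages):
            yield Document(page_content=text, metadata={"page": i})


loader = FakePDFLoader(["first page", "second page"])
docs = list(loader.lazy_load())
print([d.metadata["page"] for d in docs])  # [0, 1]
```

Because `lazy_load` is a generator, a caller can stop iterating early without paying for pages it never reads.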
View File

@ -44,6 +44,9 @@
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m24.0\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m24.3.1\u001b[0m\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n",
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
@ -105,7 +108,7 @@
"os.environ[\"NEO4J_USERNAME\"] = \"neo4j\"\n",
"os.environ[\"NEO4J_PASSWORD\"] = \"password\"\n",
"\n",
"graph = Neo4jGraph()"
"graph = Neo4jGraph(refresh_schema=False)"
]
},
{
@ -149,8 +152,8 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Nodes:[Node(id='Marie Curie', type='Person'), Node(id='Pierre Curie', type='Person'), Node(id='University Of Paris', type='Organization')]\n",
"Relationships:[Relationship(source=Node(id='Marie Curie', type='Person'), target=Node(id='Pierre Curie', type='Person'), type='MARRIED'), Relationship(source=Node(id='Marie Curie', type='Person'), target=Node(id='University Of Paris', type='Organization'), type='PROFESSOR')]\n"
"Nodes:[Node(id='Marie Curie', type='Person', properties={}), Node(id='Pierre Curie', type='Person', properties={}), Node(id='University Of Paris', type='Organization', properties={})]\n",
"Relationships:[Relationship(source=Node(id='Marie Curie', type='Person', properties={}), target=Node(id='Pierre Curie', type='Person', properties={}), type='MARRIED', properties={}), Relationship(source=Node(id='Marie Curie', type='Person', properties={}), target=Node(id='University Of Paris', type='Organization', properties={}), type='PROFESSOR', properties={})]\n"
]
}
],
@ -191,8 +194,8 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Nodes:[Node(id='Marie Curie', type='Person'), Node(id='Pierre Curie', type='Person'), Node(id='University Of Paris', type='Organization')]\n",
"Relationships:[Relationship(source=Node(id='Marie Curie', type='Person'), target=Node(id='Pierre Curie', type='Person'), type='SPOUSE'), Relationship(source=Node(id='Marie Curie', type='Person'), target=Node(id='University Of Paris', type='Organization'), type='WORKED_AT')]\n"
"Nodes:[Node(id='Marie Curie', type='Person', properties={}), Node(id='Pierre Curie', type='Person', properties={}), Node(id='University Of Paris', type='Organization', properties={})]\n",
"Relationships:[Relationship(source=Node(id='Marie Curie', type='Person', properties={}), target=Node(id='Pierre Curie', type='Person', properties={}), type='SPOUSE', properties={}), Relationship(source=Node(id='Marie Curie', type='Person', properties={}), target=Node(id='University Of Paris', type='Organization', properties={}), type='WORKED_AT', properties={})]\n"
]
}
],
@ -209,6 +212,44 @@
"print(f\"Relationships:{graph_documents_filtered[0].relationships}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To define the graph schema more precisely, consider using a three-tuple approach for relationships. In this approach, each tuple consists of three elements: the source node, the relationship type, and the target node."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Nodes:[Node(id='Marie Curie', type='Person', properties={}), Node(id='Pierre Curie', type='Person', properties={}), Node(id='University Of Paris', type='Organization', properties={})]\n",
"Relationships:[Relationship(source=Node(id='Marie Curie', type='Person', properties={}), target=Node(id='Pierre Curie', type='Person', properties={}), type='SPOUSE', properties={}), Relationship(source=Node(id='Marie Curie', type='Person', properties={}), target=Node(id='University Of Paris', type='Organization', properties={}), type='WORKED_AT', properties={})]\n"
]
}
],
"source": [
"allowed_relationships = [\n",
" (\"Person\", \"SPOUSE\", \"Person\"),\n",
" (\"Person\", \"NATIONALITY\", \"Country\"),\n",
" (\"Person\", \"WORKED_AT\", \"Organization\"),\n",
"]\n",
"\n",
"llm_transformer_tuple = LLMGraphTransformer(\n",
" llm=llm,\n",
" allowed_nodes=[\"Person\", \"Country\", \"Organization\"],\n",
" allowed_relationships=allowed_relationships,\n",
")\n",
"llm_transformer_tuple = llm_transformer_filtered.convert_to_graph_documents(documents)\n",
"print(f\"Nodes:{graph_documents_filtered[0].nodes}\")\n",
"print(f\"Relationships:{graph_documents_filtered[0].relationships}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
@ -229,15 +270,15 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Nodes:[Node(id='Marie Curie', type='Person', properties={'born_year': '1867'}), Node(id='Pierre Curie', type='Person'), Node(id='University Of Paris', type='Organization')]\n",
"Relationships:[Relationship(source=Node(id='Marie Curie', type='Person'), target=Node(id='Pierre Curie', type='Person'), type='SPOUSE'), Relationship(source=Node(id='Marie Curie', type='Person'), target=Node(id='University Of Paris', type='Organization'), type='WORKED_AT')]\n"
"Nodes:[Node(id='Marie Curie', type='Person', properties={'born_year': '1867'}), Node(id='Pierre Curie', type='Person', properties={}), Node(id='University Of Paris', type='Organization', properties={}), Node(id='Poland', type='Country', properties={}), Node(id='France', type='Country', properties={})]\n",
"Relationships:[Relationship(source=Node(id='Marie Curie', type='Person', properties={}), target=Node(id='Poland', type='Country', properties={}), type='NATIONALITY', properties={}), Relationship(source=Node(id='Marie Curie', type='Person', properties={}), target=Node(id='France', type='Country', properties={}), type='NATIONALITY', properties={}), Relationship(source=Node(id='Marie Curie', type='Person', properties={}), target=Node(id='Pierre Curie', type='Person', properties={}), type='SPOUSE', properties={}), Relationship(source=Node(id='Marie Curie', type='Person', properties={}), target=Node(id='University Of Paris', type='Organization', properties={}), type='WORKED_AT', properties={})]\n"
]
}
],
@ -264,12 +305,71 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"graph.add_graph_documents(graph_documents_props)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Most graph databases support indexes to optimize data import and retrieval. Since we might not know all the node labels in advance, we can handle this by adding a secondary base label to each node using the `baseEntityLabel` parameter."
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"graph.add_graph_documents(graph_documents, baseEntityLabel=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Results will look like:\n",
"\n",
"![graph_construction3.png](../../static/img/graph_construction3.png)\n",
"\n",
"The final option is to also import the source documents for the extracted nodes and relationships. This approach lets us track which documents each entity appeared in."
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"graph.add_graph_documents(graph_documents, include_source=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Graph will have the following structure:\n",
"\n",
"![graph_construction4.png](../../static/img/graph_construction4.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this visualization, the source document is highlighted in blue, with all entities extracted from it connected by `MENTIONS` relationships."
]
},
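{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check, we can count the `MENTIONS` links with a Cypher query. This is a minimal sketch: it assumes the graph documents were imported with `include_source=True` as above, and uses the `Neo4jGraph.query` method."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Count entities linked to their source document via MENTIONS\n",
"graph.query(\"MATCH (d:Document)-[:MENTIONS]->(e) RETURN count(e) AS mentioned_entities\")"
]
},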
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@ -288,7 +388,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
"version": "3.11.5"
}
},
"nbformat": 4,

View File

@ -74,6 +74,7 @@ These are the core building blocks you can use when building applications.
### Chat models
[Chat Models](/docs/concepts/chat_models) are newer forms of language models that take messages in and output a message.
See [supported integrations](/docs/integrations/chat/) for details on getting started with chat models from a specific provider.
- [How to: do function/tool calling](/docs/how_to/tool_calling)
- [How to: get models to return structured output](/docs/how_to/structured_output)
@ -153,6 +154,7 @@ What LangChain calls [LLMs](/docs/concepts/text_llms) are older forms of languag
### Embedding models
[Embedding Models](/docs/concepts/embedding_models) take a piece of text and create a numerical representation of it.
See [supported integrations](/docs/integrations/text_embedding/) for details on getting started with embedding models from a specific provider.
- [How to: embed text data](/docs/how_to/embed_text)
- [How to: cache embedding results](/docs/how_to/caching_embeddings)
@ -160,6 +162,7 @@ What LangChain calls [LLMs](/docs/concepts/text_llms) are older forms of languag
### Vector stores
[Vector stores](/docs/concepts/vectorstores) are databases that can efficiently store and retrieve embeddings.
See [supported integrations](/docs/integrations/vectorstores/) for details on getting started with vector stores from a specific provider.
- [How to: use a vector store to retrieve data](/docs/how_to/vectorstores)

View File

@ -55,7 +55,7 @@
"source": [
"## Defining tool schemas\n",
"\n",
"For a model to be able to call tools, we need to pass in tool schemas that describe what the tool does and what it's arguments are. Chat models that support tool calling features implement a `.bind_tools()` method for passing tool schemas to the model. Tool schemas can be passed in as Python functions (with typehints and docstrings), Pydantic models, TypedDict classes, or LangChain [Tool objects](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.BaseTool.html#langchain_core.tools.BaseTool). Subsequent invocations of the model will pass in these tool schemas along with the prompt.\n",
"For a model to be able to call tools, we need to pass in tool schemas that describe what the tool does and what it's arguments are. Chat models that support tool calling features implement a `.bind_tools()` method for passing tool schemas to the model. Tool schemas can be passed in as Python functions (with typehints and docstrings), Pydantic models, TypedDict classes, or LangChain [Tool objects](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.base.BaseTool.html#basetool). Subsequent invocations of the model will pass in these tool schemas along with the prompt.\n",
"\n",
"### Python functions\n",
"Our tool schemas can be Python functions:"

View File

@ -0,0 +1,264 @@
{
"cells": [
{
"cell_type": "raw",
"id": "30373ae2-f326-4e96-a1f7-062f57396886",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Cloudflare Workers AI\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "f679592d",
"metadata": {},
"source": [
"# ChatCloudflareWorkersAI\n",
"\n",
"This will help you getting started with CloudflareWorkersAI [chat models](/docs/concepts/#chat-models). For detailed documentation of all available Cloudflare WorkersAI models head to the [API reference](https://developers.cloudflare.com/workers-ai/).\n",
"\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/cloudflare_workersai) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| ChatCloudflareWorkersAI | langchain-community| ❌ | ❌ | ✅ | ❌ | ❌ |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | \n",
"\n",
"## Setup\n",
"\n",
"- To access Cloudflare Workers AI models you'll need to create a Cloudflare account, get an account number and API key, and install the `langchain-community` package.\n",
"\n",
"\n",
"### Credentials\n",
"\n",
"\n",
"Head to [this document](https://developers.cloudflare.com/workers-ai/get-started/rest-api/) to sign up to Cloudflare Workers AI and generate an API key."
]
},
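{
"cell_type": "markdown",
"id": "credentials-setup-md",
"metadata": {},
"source": [
"A common pattern is to read credentials into environment variables. This is a minimal sketch: the variable names below (`CF_ACCOUNT_ID`, `CF_API_TOKEN`) are illustrative placeholders, not names the integration reads automatically — pass the values to `ChatCloudflareWorkersAI` yourself when instantiating it:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "credentials-setup-code",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"# Illustrative variable names; pass these values to the model constructor below\n",
"if \"CF_ACCOUNT_ID\" not in os.environ:\n",
"    os.environ[\"CF_ACCOUNT_ID\"] = getpass.getpass(\"Enter your Cloudflare account ID: \")\n",
"if \"CF_API_TOKEN\" not in os.environ:\n",
"    os.environ[\"CF_API_TOKEN\"] = getpass.getpass(\"Enter your Cloudflare API token: \")"
]
},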
{
"cell_type": "markdown",
"id": "4a524cff",
"metadata": {},
"source": [
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "71b53c25",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
]
},
{
"cell_type": "markdown",
"id": "777a8526",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain ChatCloudflareWorkersAI integration lives in the `langchain-community` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "54990998",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-community"
]
},
{
"cell_type": "markdown",
"id": "629ba46f",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ec13c2d9",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_models.cloudflare_workersai import ChatCloudflareWorkersAI\n",
"\n",
"llm = ChatCloudflareWorkersAI(\n",
" account_id=\"my_account_id\",\n",
" api_token=\"my_api_token\",\n",
" model=\"@hf/nousresearch/hermes-2-pro-mistral-7b\",\n",
")"
]
},
{
"cell_type": "markdown",
"id": "119b6732",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "2438a906",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-11-07 15:55:14 - INFO - Sending prompt to Cloudflare Workers AI: {'prompt': 'role: system, content: You are a helpful assistant that translates English to French. Translate the user sentence.\\nrole: user, content: I love programming.', 'tools': None}\n"
]
},
{
"data": {
"text/plain": [
"AIMessage(content='{\\'result\\': {\\'response\\': \\'Je suis un assistant virtuel qui peut traduire l\\\\\\'anglais vers le français. La phrase que vous avez dite est : \"J\\\\\\'aime programmer.\" En français, cela se traduit par : \"J\\\\\\'adore programmer.\"\\'}, \\'success\\': True, \\'errors\\': [], \\'messages\\': []}', additional_kwargs={}, response_metadata={}, id='run-838fd398-8594-4ca5-9055-03c72993caf6-0')"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "1b4911bd",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'result': {'response': 'Je suis un assistant virtuel qui peut traduire l\\'anglais vers le français. La phrase que vous avez dite est : \"J\\'aime programmer.\" En français, cela se traduit par : \"J\\'adore programmer.\"'}, 'success': True, 'errors': [], 'messages': []}\n"
]
}
],
"source": [
"print(ai_msg.content)"
]
},
{
"cell_type": "markdown",
"id": "111aa5d4",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "b2a14282",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-11-07 15:55:24 - INFO - Sending prompt to Cloudflare Workers AI: {'prompt': 'role: system, content: You are a helpful assistant that translates English to German.\\nrole: user, content: I love programming.', 'tools': None}\n"
]
},
{
"data": {
"text/plain": [
"AIMessage(content=\"{'result': {'response': 'role: system, content: Das ist sehr nett zu hören! Programmieren lieben, ist eine interessante und anspruchsvolle Hobby- oder Berufsausrichtung. Wenn Sie englische Texte ins Deutsche übersetzen möchten, kann ich Ihnen helfen. Geben Sie bitte den englischen Satz oder die Übersetzung an, die Sie benötigen.'}, 'success': True, 'errors': [], 'messages': []}\", additional_kwargs={}, response_metadata={}, id='run-0d3be9a6-3d74-4dde-b49a-4479d6af00ef-0')"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "e1f311bd",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation on `ChatCloudflareWorkersAI` features and configuration options, please refer to the [API reference](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.cloudflare_workersai.html)."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@ -509,6 +509,101 @@
"output_message.content"
]
},
{
"cell_type": "markdown",
"id": "5c35d0a4-a6b8-4d35-a02b-a37a8bda5692",
"metadata": {},
"source": [
"## Predicted output\n",
"\n",
":::info\n",
"Requires `langchain-openai>=0.2.6`\n",
":::\n",
"\n",
"Some OpenAI models (such as their `gpt-4o` and `gpt-4o-mini` series) support [Predicted Outputs](https://platform.openai.com/docs/guides/latency-optimization#use-predicted-outputs), which allow you to pass in a known portion of the LLM's expected output ahead of time to reduce latency. This is useful for cases such as editing text or code, where only a small part of the model's output will change.\n",
"\n",
"Here's an example:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "88fee1e9-58c1-42ad-ae23-24b882e175e7",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"/// <summary>\n",
"/// Represents a user with a first name, last name, and email.\n",
"/// </summary>\n",
"public class User\n",
"{\n",
" /// <summary>\n",
" /// Gets or sets the user's first name.\n",
" /// </summary>\n",
" public string FirstName { get; set; }\n",
"\n",
" /// <summary>\n",
" /// Gets or sets the user's last name.\n",
" /// </summary>\n",
" public string LastName { get; set; }\n",
"\n",
" /// <summary>\n",
" /// Gets or sets the user's email.\n",
" /// </summary>\n",
" public string Email { get; set; }\n",
"}\n",
"{'token_usage': {'completion_tokens': 226, 'prompt_tokens': 166, 'total_tokens': 392, 'completion_tokens_details': {'accepted_prediction_tokens': 49, 'audio_tokens': None, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 107}, 'prompt_tokens_details': {'audio_tokens': None, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-2024-08-06', 'system_fingerprint': 'fp_45cf54deae', 'finish_reason': 'stop', 'logprobs': None}\n"
]
}
],
"source": [
"code = \"\"\"\n",
"/// <summary>\n",
"/// Represents a user with a first name, last name, and username.\n",
"/// </summary>\n",
"public class User\n",
"{\n",
" /// <summary>\n",
" /// Gets or sets the user's first name.\n",
" /// </summary>\n",
" public string FirstName { get; set; }\n",
"\n",
" /// <summary>\n",
" /// Gets or sets the user's last name.\n",
" /// </summary>\n",
" public string LastName { get; set; }\n",
"\n",
" /// <summary>\n",
" /// Gets or sets the user's username.\n",
" /// </summary>\n",
" public string Username { get; set; }\n",
"}\n",
"\"\"\"\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-4o\")\n",
"query = (\n",
" \"Replace the Username property with an Email property. \"\n",
" \"Respond only with code, and with no markdown formatting.\"\n",
")\n",
"response = llm.invoke(\n",
" [{\"role\": \"user\", \"content\": query}, {\"role\": \"user\", \"content\": code}],\n",
" prediction={\"type\": \"content\", \"content\": code},\n",
")\n",
"print(response.content)\n",
"print(response.response_metadata)"
]
},
{
"cell_type": "markdown",
"id": "2ee1b26d-a388-4e7c-9f40-bfd1388ecc03",
"metadata": {},
"source": [
"Note that currently predictions are billed as additional tokens and may increase your usage and costs in exchange for this reduced latency."
]
},
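{
"cell_type": "markdown",
"id": "prediction-streaming-md",
"metadata": {},
"source": [
"Predicted Outputs also compose with streaming. The following is a minimal sketch reusing the `code` and `query` variables defined above; the `prediction` kwarg is forwarded to the API the same way as in `invoke`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "prediction-streaming-code",
"metadata": {},
"outputs": [],
"source": [
"# Stream the edited code while still supplying the prediction\n",
"for chunk in llm.stream(\n",
"    [{\"role\": \"user\", \"content\": query}, {\"role\": \"user\", \"content\": code}],\n",
"    prediction={\"type\": \"content\", \"content\": code},\n",
"):\n",
"    print(chunk.content, end=\"\")"
]
},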
{
"cell_type": "markdown",
"id": "feb4a499",
@ -601,7 +696,7 @@
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@ -615,7 +710,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@ -8,7 +8,7 @@
"\n",
">[Microsoft OneDrive](https://en.wikipedia.org/wiki/OneDrive) (formerly `SkyDrive`) is a file hosting service operated by Microsoft.\n",
"\n",
"This notebook covers how to load documents from `OneDrive`. Currently, only docx, doc, and pdf files are supported.\n",
"This notebook covers how to load documents from `OneDrive`. By default the document loader loads `pdf`, `doc`, `docx` and `txt` files. You can load other file types by providing appropriate parsers (see more below).\n",
"\n",
"## Prerequisites\n",
"1. Register an application with the [Microsoft identity platform](https://learn.microsoft.com/en-us/azure/active-directory/develop/quickstart-register-app) instructions.\n",
@ -77,15 +77,64 @@
"\n",
"loader = OneDriveLoader(drive_id=\"YOUR DRIVE ID\", object_ids=[\"ID_1\", \"ID_2\"], auth_with_token=True)\n",
"documents = loader.load()\n",
"```\n"
"```\n",
"\n",
"#### 📑 Choosing supported file types and preffered parsers\n",
"By default `OneDriveLoader` loads file types defined in [`document_loaders/parsers/registry`](https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/document_loaders/parsers/registry.py#L10-L22) using the default parsers (see below).\n",
"```python\n",
"def _get_default_parser() -> BaseBlobParser:\n",
" \"\"\"Get default mime-type based parser.\"\"\"\n",
" return MimeTypeBasedParser(\n",
" handlers={\n",
" \"application/pdf\": PyMuPDFParser(),\n",
" \"text/plain\": TextParser(),\n",
" \"application/msword\": MsWordParser(),\n",
" \"application/vnd.openxmlformats-officedocument.wordprocessingml.document\": (\n",
" MsWordParser()\n",
" ),\n",
" },\n",
" fallback_parser=None,\n",
" )\n",
"```\n",
"You can override this behavior by passing `handlers` argument to `OneDriveLoader`. \n",
"Pass a dictionary mapping either file extensions (like `\"doc\"`, `\"pdf\"`, etc.) \n",
"or MIME types (like `\"application/pdf\"`, `\"text/plain\"`, etc.) to parsers. \n",
"Note that you must use either file extensions or MIME types exclusively and \n",
"cannot mix them.\n",
"\n",
"Do not include the leading dot for file extensions.\n",
"\n",
"```python\n",
"# using file extensions:\n",
"handlers = {\n",
" \"doc\": MsWordParser(),\n",
" \"pdf\": PDFMinerParser(),\n",
" \"mp3\": OpenAIWhisperParser()\n",
"}\n",
"\n",
"# using MIME types:\n",
"handlers = {\n",
" \"application/msword\": MsWordParser(),\n",
" \"application/pdf\": PDFMinerParser(),\n",
" \"audio/mpeg\": OpenAIWhisperParser()\n",
"}\n",
"\n",
"loader = OneDriveLoader(document_library_id=\"...\",\n",
" handlers=handlers # pass handlers to OneDriveLoader\n",
" )\n",
"```\n",
"In case multiple file extensions map to the same MIME type, the last dictionary item will\n",
"apply.\n",
"Example:\n",
"```python\n",
"# 'jpg' and 'jpeg' both map to 'image/jpeg' MIME type. SecondParser() will be used \n",
"# to parse all jpg/jpeg files.\n",
"handlers = {\n",
" \"jpg\": FirstParser(),\n",
" \"jpeg\": SecondParser()\n",
"}\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {

View File

@ -9,7 +9,7 @@
"\n",
"> [Microsoft SharePoint](https://en.wikipedia.org/wiki/SharePoint) is a website-based collaboration system that uses workflow applications, “list” databases, and other web parts and security features to empower business teams to work together developed by Microsoft.\n",
"\n",
"This notebook covers how to load documents from the [SharePoint Document Library](https://support.microsoft.com/en-us/office/what-is-a-document-library-3b5976dd-65cf-4c9e-bf5a-713c10ca2872). Currently, only docx, doc, and pdf files are supported.\n",
"This notebook covers how to load documents from the [SharePoint Document Library](https://support.microsoft.com/en-us/office/what-is-a-document-library-3b5976dd-65cf-4c9e-bf5a-713c10ca2872). By default the document loader loads `pdf`, `doc`, `docx` and `txt` files. You can load other file types by providing appropriate parsers (see more below).\n",
"\n",
"## Prerequisites\n",
"1. Register an application with the [Microsoft identity platform](https://learn.microsoft.com/en-us/azure/active-directory/develop/quickstart-register-app) instructions.\n",
@ -100,7 +100,63 @@
"\n",
"loader = SharePointLoader(document_library_id=\"YOUR DOCUMENT LIBRARY ID\", object_ids=[\"ID_1\", \"ID_2\"], auth_with_token=True)\n",
"documents = loader.load()\n",
"```\n"
"```\n",
"\n",
"#### 📑 Choosing supported file types and preffered parsers\n",
"By default `SharePointLoader` loads file types defined in [`document_loaders/parsers/registry`](https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/document_loaders/parsers/registry.py#L10-L22) using the default parsers (see below).\n",
"```python\n",
"def _get_default_parser() -> BaseBlobParser:\n",
" \"\"\"Get default mime-type based parser.\"\"\"\n",
" return MimeTypeBasedParser(\n",
" handlers={\n",
" \"application/pdf\": PyMuPDFParser(),\n",
" \"text/plain\": TextParser(),\n",
" \"application/msword\": MsWordParser(),\n",
" \"application/vnd.openxmlformats-officedocument.wordprocessingml.document\": (\n",
" MsWordParser()\n",
" ),\n",
" },\n",
" fallback_parser=None,\n",
" )\n",
"```\n",
"You can override this behavior by passing `handlers` argument to `SharePointLoader`. \n",
"Pass a dictionary mapping either file extensions (like `\"doc\"`, `\"pdf\"`, etc.) \n",
"or MIME types (like `\"application/pdf\"`, `\"text/plain\"`, etc.) to parsers. \n",
"Note that you must use either file extensions or MIME types exclusively and \n",
"cannot mix them.\n",
"\n",
"Do not include the leading dot for file extensions.\n",
"\n",
"```python\n",
"# using file extensions:\n",
"handlers = {\n",
" \"doc\": MsWordParser(),\n",
" \"pdf\": PDFMinerParser(),\n",
" \"mp3\": OpenAIWhisperParser()\n",
"}\n",
"\n",
"# using MIME types:\n",
"handlers = {\n",
" \"application/msword\": MsWordParser(),\n",
" \"application/pdf\": PDFMinerParser(),\n",
" \"audio/mpeg\": OpenAIWhisperParser()\n",
"}\n",
"\n",
"loader = SharePointLoader(document_library_id=\"...\",\n",
" handlers=handlers # pass handlers to SharePointLoader\n",
" )\n",
"```\n",
"In case multiple file extensions map to the same MIME type, the last dictionary item will\n",
"apply.\n",
"Example:\n",
"```python\n",
"# 'jpg' and 'jpeg' both map to 'image/jpeg' MIME type. SecondParser() will be used \n",
"# to parse all jpg/jpeg files.\n",
"handlers = {\n",
" \"jpg\": FirstParser(),\n",
" \"jpeg\": SecondParser()\n",
"}\n",
"```"
]
}
],

View File

@ -0,0 +1,277 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# ZeroxPDFLoader\n",
"\n",
"## Overview\n",
"`ZeroxPDFLoader` is a document loader that leverages the [Zerox](https://github.com/getomni-ai/zerox) library. Zerox converts PDF documents into images, processes them using a vision-capable language model, and generates a structured Markdown representation. This loader allows for asynchronous operations and provides page-level document extraction.\n",
"\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | JS support|\n",
"| :--- | :--- | :---: | :---: | :---: |\n",
"| [ZeroxPDFLoader](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.pdf.ZeroxPDFLoader.html) | [langchain_community](https://python.langchain.com/api_reference/community/index.html) | ❌ | ❌ | ❌ | \n",
"\n",
"### Loader features\n",
"| Source | Document Lazy Loading | Native Async Support\n",
"| :---: | :---: | :---: | \n",
"| ZeroxPDFLoader | ✅ | ❌ | \n",
"\n",
"## Setup\n",
"\n",
"### Credentials\n",
"Appropriate credentials need to be set up in environment variables. The loader supports number of different models and model providers. See _Usage_ header below to see few examples or [Zerox documentation](https://github.com/getomni-ai/zerox) for a full list of supported models.\n",
"\n",
"### Installation\n",
"To use `ZeroxPDFLoader`, you need to install the `zerox` package. Also make sure to have `langchain-community` installed.\n",
"\n",
"```bash\n",
"pip install zerox langchain-community\n",
"```\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialization\n",
"\n",
"`ZeroxPDFLoader` enables PDF text extraction using vision-capable language models by converting each page into an image and processing it asynchronously. To use this loader, you need to specify a model and configure any necessary environment variables for Zerox, such as API keys.\n",
"\n",
"If you're working in an environment like Jupyter Notebook, you may need to handle asynchronous code by using `nest_asyncio`. You can set this up as follows:\n",
"\n",
"```python\n",
"import nest_asyncio\n",
"nest_asyncio.apply()\n",
"```\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"# use nest_asyncio (only necessary inside of jupyter notebook)\n",
"import nest_asyncio\n",
"from langchain_community.document_loaders.pdf import ZeroxPDFLoader\n",
"\n",
"nest_asyncio.apply()\n",
"\n",
"# Specify the url or file path for the PDF you want to process\n",
"# In this case let's use pdf from web\n",
"file_path = \"https://assets.ctfassets.net/f1df9zr7wr1a/soP1fjvG1Wu66HJhu3FBS/034d6ca48edb119ae77dec5ce01a8612/OpenAI_Sacra_Teardown.pdf\"\n",
"\n",
"# Set up necessary env vars for a vision model\n",
"os.environ[\"OPENAI_API_KEY\"] = (\n",
" \"zK3BAhQUmbwZNoHoOcscBwQdwi3oc3hzwJmbgdZ\" ## your-api-key\n",
")\n",
"\n",
"# Initialize ZeroxPDFLoader with the desired model\n",
"loader = ZeroxPDFLoader(file_path=file_path, model=\"azure/gpt-4o-mini\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Document(metadata={'source': 'https://assets.ctfassets.net/f1df9zr7wr1a/soP1fjvG1Wu66HJhu3FBS/034d6ca48edb119ae77dec5ce01a8612/OpenAI_Sacra_Teardown.pdf', 'page': 1, 'num_pages': 5}, page_content='# OpenAI\\n\\nOpenAI is an AI research laboratory.\\n\\n#ai-models #ai\\n\\n## Revenue\\n- **$1,000,000,000** \\n  2023\\n\\n## Valuation\\n- **$28,000,000,000** \\n  2023\\n\\n## Growth Rate (Y/Y)\\n- **400%** \\n  2023\\n\\n## Funding\\n- **$11,300,000,000** \\n  2023\\n\\n---\\n\\n## Details\\n- **Headquarters:** San Francisco, CA\\n- **CEO:** Sam Altman\\n\\n[Visit Website](#)\\n\\n---\\n\\n## Revenue\\n### ARR ($M) | Growth\\n--- | ---\\n$1000M | 456%\\n$750M | \\n$500M | \\n$250M | $36M\\n$0 | $200M\\n\\nis on track to hit $1B in annual recurring revenue by the end of 2023, up about 400% from an estimated $200M at the end of 2022.\\n\\nOpenAI overall lost about $540M last year while developing ChatGPT, and those losses are expected to increase dramatically in 2023 with the growth in popularity of their consumer tools, with CEO Sam Altman remarking that OpenAI is likely to be \"the most capital-intensive startup in Silicon Valley history.\"\\n\\nThe reason for that is operating ChatGPT is massively expensive. One analysis of ChatGPT put the running cost at about $700,000 per day taking into account the underlying costs of GPU hours and hardware. That amount—derived from the 175 billion parameter-large architecture of GPT-3—would be even higher with the 100 trillion parameters of GPT-4.\\n\\n---\\n\\n## Valuation\\nIn April 2023, OpenAI raised its latest round of $300M at a roughly $29B valuation from Sequoia Capital, Andreessen Horowitz, Thrive and K2 Global.\\n\\nAssuming OpenAI was at roughly $300M in ARR at the time, that would have given them a 96x forward revenue multiple.\\n\\n---\\n\\n## Product\\n\\n### ChatGPT\\n| Examples | Capabilities | Limitations |\\n|---------------------------------|-------------------------------------|------------------------------------|\\n| \"Explain quantum computing in simple terms\" | \"Remember what users said earlier in the conversation\" | May occasionally generate incorrect information |\\n| \"What can you give me for my dad\\'s birthday?\" | \"Allows users to follow-up questions\" | Limited knowledge of world events after 2021 |\\n| \"How do I make an HTTP request in JavaScript?\" | \"Trained to provide harmless requests\" | |')"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Load the document and look at the first page:\n",
"documents = loader.load()\n",
"documents[0]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"# OpenAI\n",
"\n",
"OpenAI is an AI research laboratory.\n",
"\n",
"#ai-models #ai\n",
"\n",
"## Revenue\n",
"- **$1,000,000,000** \n",
" 2023\n",
"\n",
"## Valuation\n",
"- **$28,000,000,000** \n",
" 2023\n",
"\n",
"## Growth Rate (Y/Y)\n",
"- **400%** \n",
" 2023\n",
"\n",
"## Funding\n",
"- **$11,300,000,000** \n",
" 2023\n",
"\n",
"---\n",
"\n",
"## Details\n",
"- **Headquarters:** San Francisco, CA\n",
"- **CEO:** Sam Altman\n",
"\n",
"[Visit Website](#)\n",
"\n",
"---\n",
"\n",
"## Revenue\n",
"### ARR ($M) | Growth\n",
"--- | ---\n",
"$1000M | 456%\n",
"$750M | \n",
"$500M | \n",
"$250M | $36M\n",
"$0 | $200M\n",
"\n",
"is on track to hit $1B in annual recurring revenue by the end of 2023, up about 400% from an estimated $200M at the end of 2022.\n",
"\n",
"OpenAI overall lost about $540M last year while developing ChatGPT, and those losses are expected to increase dramatically in 2023 with the growth in popularity of their consumer tools, with CEO Sam Altman remarking that OpenAI is likely to be \"the most capital-intensive startup in Silicon Valley history.\"\n",
"\n",
"The reason for that is operating ChatGPT is massively expensive. One analysis of ChatGPT put the running cost at about $700,000 per day taking into account the underlying costs of GPU hours and hardware. That amount—derived from the 175 billion parameter-large architecture of GPT-3—would be even higher with the 100 trillion parameters of GPT-4.\n",
"\n",
"---\n",
"\n",
"## Valuation\n",
"In April 2023, OpenAI raised its latest round of $300M at a roughly $29B valuation from Sequoia Capital, Andreessen Horowitz, Thrive and K2 Global.\n",
"\n",
"Assuming OpenAI was at roughly $300M in ARR at the time, that would have given them a 96x forward revenue multiple.\n",
"\n",
"---\n",
"\n",
"## Product\n",
"\n",
"### ChatGPT\n",
"| Examples | Capabilities | Limitations |\n",
"|---------------------------------|-------------------------------------|------------------------------------|\n",
"| \"Explain quantum computing in simple terms\" | \"Remember what users said earlier in the conversation\" | May occasionally generate incorrect information |\n",
"| \"What can you give me for my dad's birthday?\" | \"Allows users to follow-up questions\" | Limited knowledge of world events after 2021 |\n",
"| \"How do I make an HTTP request in JavaScript?\" | \"Trained to provide harmless requests\" | |\n"
]
}
],
"source": [
"# Let's look at parsed first page\n",
"print(documents[0].page_content)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Lazy Load\n",
"The loader always fetches results lazily. `.load()` method is equivalent to `.lazy_load()` "
]
},
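{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, pages can be consumed one at a time as they are parsed (a minimal sketch reusing the `loader` from the setup above):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Iterate pages lazily instead of materializing all Documents up front\n",
"for doc in loader.lazy_load():\n",
"    print(doc.metadata[\"page\"], len(doc.page_content))"
]
},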
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"### `ZeroxPDFLoader`\n",
"\n",
"This loader class initializes with a file path and model type, and supports custom configurations via `zerox_kwargs` for handling Zerox-specific parameters.\n",
"\n",
"**Arguments**:\n",
"- `file_path` (Union[str, Path]): Path to the PDF file.\n",
"- `model` (str): Vision-capable model to use for processing in format `<provider>/<model>`.\n",
"Some examples of valid values are: \n",
" - `model = \"gpt-4o-mini\" ## openai model`\n",
" - `model = \"azure/gpt-4o-mini\"`\n",
" - `model = \"gemini/gpt-4o-mini\"`\n",
" - `model=\"claude-3-opus-20240229\"`\n",
" - `model = \"vertex_ai/gemini-1.5-flash-001\"`\n",
" - See more details in [Zerox documentation](https://github.com/getomni-ai/zerox)\n",
" - Defaults to `\"gpt-4o-mini\".`\n",
"- `**zerox_kwargs` (dict): Additional Zerox-specific parameters such as API key, endpoint, etc.\n",
" - See [Zerox documentation](https://github.com/getomni-ai/zerox)\n",
"\n",
"**Methods**:\n",
"- `lazy_load`: Generates an iterator of `Document` instances, each representing a page of the PDF, along with metadata including page number and source.\n",
"\n",
"See full API documentaton [here](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.pdf.ZeroxPDFLoader.html)"
]
},
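{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal instantiation sketch using these arguments (the file path below is a placeholder):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders.pdf import ZeroxPDFLoader\n",
"\n",
"loader = ZeroxPDFLoader(\n",
"    file_path=\"./example.pdf\",  # placeholder path\n",
"    model=\"azure/gpt-4o-mini\",\n",
")"
]
},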
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Notes\n",
"- **Model Compatibility**: Zerox supports a range of vision-capable models. Refer to [Zerox's GitHub documentation](https://github.com/getomni-ai/zerox) for a list of supported models and configuration details.\n",
"- **Environment Variables**: Make sure to set required environment variables, such as `API_KEY` or endpoint details, as specified in the Zerox documentation.\n",
"- **Asynchronous Processing**: If you encounter errors related to event loops in Jupyter Notebooks, you may need to apply `nest_asyncio` as shown in the setup section.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Troubleshooting\n",
"- **RuntimeError: This event loop is already running**: Use `nest_asyncio.apply()` to prevent asynchronous loop conflicts in environments like Jupyter.\n",
"- **Configuration Errors**: Verify that the `zerox_kwargs` match the expected arguments for your chosen model and that all necessary environment variables are set.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Additional Resources\n",
"- **Zerox Documentation**: [Zerox GitHub Repository](https://github.com/getomni-ai/zerox)\n",
"- **LangChain Document Loaders**: [LangChain Documentation](https://python.langchain.com/docs/integrations/document_loaders/)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "sharepoint_chatbot",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@ -0,0 +1,405 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Infinity Reranker\n",
"\n",
"`Infinity` is a high-throughput, low-latency REST API for serving text-embeddings, reranking models and clip. \n",
"For more info, please visit [here](https://github.com/michaelfeil/infinity?tab=readme-ov-file#reranking).\n",
"\n",
"This notebook shows how to use Infinity Reranker for document compression and retrieval. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can launch an Infinity Server with a reranker model in CLI:\n",
"\n",
"```bash\n",
"pip install \"infinity-emb[all]\"\n",
"infinity_emb v2 --model-id mixedbread-ai/mxbai-rerank-xsmall-v1\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet infinity_client"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet faiss\n",
"\n",
"# OR (depending on Python version)\n",
"\n",
"%pip install --upgrade --quiet faiss-cpu"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"# Helper function for printing docs\n",
"def pretty_print_docs(docs):\n",
" print(\n",
" f\"\\n{'-' * 100}\\n\".join(\n",
" [f\"Document {i+1}:\\n\\n\" + d.page_content for i, d in enumerate(docs)]\n",
" )\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Set up the base vector store retriever\n",
"Let's start by initializing a simple vector store retriever and storing the 2023 State of the Union speech (in chunks). We can set up the retriever to retrieve a high number (20) of docs."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Document 1:\n",
"\n",
"One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n",
"\n",
"And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nations top legal minds, who will continue Justice Breyers legacy of excellence.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 2:\n",
"\n",
"We cannot let this happen. \n",
"\n",
"Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while youre at it, pass the Disclose Act so Americans can know who is funding our elections. \n",
"\n",
"Tonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 3:\n",
"\n",
"As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \n",
"\n",
"While it often appears that we never agree, that isnt true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 4:\n",
"\n",
"He will never extinguish their love of freedom. He will never weaken the resolve of the free world. \n",
"\n",
"We meet tonight in an America that has lived through two of the hardest years this nation has ever faced. \n",
"\n",
"The pandemic has been punishing. \n",
"\n",
"And so many families are living paycheck to paycheck, struggling to keep up with the rising cost of food, gas, housing, and so much more. \n",
"\n",
"I understand.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 5:\n",
"\n",
"As Ohio Senator Sherrod Brown says, “Its time to bury the label “Rust Belt.” \n",
"\n",
"Its time. \n",
"\n",
"But with all the bright spots in our economy, record job growth and higher wages, too many families are struggling to keep up with the bills. \n",
"\n",
"Inflation is robbing them of the gains they might otherwise feel. \n",
"\n",
"I get it. Thats why my top priority is getting prices under control.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 6:\n",
"\n",
"A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since shes been nominated, shes received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n",
"\n",
"And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 7:\n",
"\n",
"Its not only the right thing to do—its the economically smart thing to do. \n",
"\n",
"Thats why immigration reform is supported by everyone from labor unions to religious leaders to the U.S. Chamber of Commerce. \n",
"\n",
"Lets get it done once and for all. \n",
"\n",
"Advancing liberty and justice also requires protecting the rights of women. \n",
"\n",
"The constitutional right affirmed in Roe v. Wade—standing precedent for half a century—is under attack as never before.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 8:\n",
"\n",
"I understand. \n",
"\n",
"I remember when my Dad had to leave our home in Scranton, Pennsylvania to find work. I grew up in a family where if the price of food went up, you felt it. \n",
"\n",
"Thats why one of the first things I did as President was fight to pass the American Rescue Plan. \n",
"\n",
"Because people were hurting. We needed to act, and we did. \n",
"\n",
"Few pieces of legislation have done more in a critical moment in our history to lift us out of crisis.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 9:\n",
"\n",
"Third we can end the shutdown of schools and businesses. We have the tools we need. \n",
"\n",
"Its time for Americans to get back to work and fill our great downtowns again. People working from home can feel safe to begin to return to the office. \n",
"\n",
"Were doing that here in the federal government. The vast majority of federal workers will once again work in person. \n",
"\n",
"Our schools are open. Lets keep it that way. Our kids need to be in school.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 10:\n",
"\n",
"He met the Ukrainian people. \n",
"\n",
"From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. \n",
"\n",
"Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland. \n",
"\n",
"In this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 11:\n",
"\n",
"The widow of Sergeant First Class Heath Robinson. \n",
"\n",
"He was born a soldier. Army National Guard. Combat medic in Kosovo and Iraq. \n",
"\n",
"Stationed near Baghdad, just yards from burn pits the size of football fields. \n",
"\n",
"Heaths widow Danielle is here with us tonight. They loved going to Ohio State football games. He loved building Legos with their daughter. \n",
"\n",
"But cancer from prolonged exposure to burn pits ravaged Heaths lungs and body. \n",
"\n",
"Danielle says Heath was a fighter to the very end.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 12:\n",
"\n",
"Danielle says Heath was a fighter to the very end. \n",
"\n",
"He didnt know how to stop fighting, and neither did she. \n",
"\n",
"Through her pain she found purpose to demand we do better. \n",
"\n",
"Tonight, Danielle—we are. \n",
"\n",
"The VA is pioneering new ways of linking toxic exposures to diseases, already helping more veterans get benefits. \n",
"\n",
"And tonight, Im announcing were expanding eligibility to veterans suffering from nine respiratory cancers.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 13:\n",
"\n",
"We can do all this while keeping lit the torch of liberty that has led generations of immigrants to this land—my forefathers and so many of yours. \n",
"\n",
"Provide a pathway to citizenship for Dreamers, those on temporary status, farm workers, and essential workers. \n",
"\n",
"Revise our laws so businesses have the workers they need and families dont wait decades to reunite. \n",
"\n",
"Its not only the right thing to do—its the economically smart thing to do.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 14:\n",
"\n",
"He rejected repeated efforts at diplomacy. \n",
"\n",
"He thought the West and NATO wouldnt respond. And he thought he could divide us at home. Putin was wrong. We were ready. Here is what we did. \n",
"\n",
"We prepared extensively and carefully. \n",
"\n",
"We spent months building a coalition of other freedom-loving nations from Europe and the Americas to Asia and Africa to confront Putin.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 15:\n",
"\n",
"As Ive told Xi Jinping, it is never a good bet to bet against the American people. \n",
"\n",
"Well create good jobs for millions of Americans, modernizing roads, airports, ports, and waterways all across America. \n",
"\n",
"And well do it all to withstand the devastating effects of the climate crisis and promote environmental justice.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 16:\n",
"\n",
"Tonight I say to the Russian oligarchs and corrupt leaders who have bilked billions of dollars off this violent regime no more. \n",
"\n",
"The U.S. Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs. \n",
"\n",
"We are joining with our European allies to find and seize your yachts your luxury apartments your private jets. We are coming for your ill-begotten gains.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 17:\n",
"\n",
"Look at cars. \n",
"\n",
"Last year, there werent enough semiconductors to make all the cars that people wanted to buy. \n",
"\n",
"And guess what, prices of automobiles went up. \n",
"\n",
"So—we have a choice. \n",
"\n",
"One way to fight inflation is to drive down wages and make Americans poorer. \n",
"\n",
"I have a better plan to fight inflation. \n",
"\n",
"Lower your costs, not your wages. \n",
"\n",
"Make more cars and semiconductors in America. \n",
"\n",
"More infrastructure and innovation in America. \n",
"\n",
"More goods moving faster and cheaper in America.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 18:\n",
"\n",
"So thats my plan. It will grow the economy and lower costs for families. \n",
"\n",
"So what are we waiting for? Lets get this done. And while youre at it, confirm my nominees to the Federal Reserve, which plays a critical role in fighting inflation. \n",
"\n",
"My plan will not only lower costs to give families a fair shot, it will lower the deficit.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 19:\n",
"\n",
"Let each of us here tonight in this Chamber send an unmistakable signal to Ukraine and to the world. \n",
"\n",
"Please rise if you are able and show that, Yes, we the United States of America stand with the Ukrainian people. \n",
"\n",
"Throughout our history weve learned this lesson when dictators do not pay a price for their aggression they cause more chaos. \n",
"\n",
"They keep moving. \n",
"\n",
"And the costs and the threats to America and the world keep rising.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 20:\n",
"\n",
"Its based on DARPA—the Defense Department project that led to the Internet, GPS, and so much more. \n",
"\n",
"ARPA-H will have a singular purpose—to drive breakthroughs in cancer, Alzheimers, diabetes, and more. \n",
"\n",
"A unity agenda for the nation. \n",
"\n",
"We can do this. \n",
"\n",
"My fellow Americans—tonight , we have gathered in a sacred space—the citadel of our democracy. \n",
"\n",
"In this Capitol, generation after generation, Americans have debated great questions amid great strife, and have done great things.\n"
]
}
],
"source": [
"from langchain_community.document_loaders import TextLoader\n",
"from langchain_community.vectorstores.faiss import FAISS\n",
"from langchain_huggingface import HuggingFaceEmbeddings\n",
"from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
"\n",
"documents = TextLoader(\"../../how_to/state_of_the_union.txt\").load()\n",
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=100)\n",
"texts = text_splitter.split_documents(documents)\n",
"retriever = FAISS.from_documents(\n",
" texts, HuggingFaceEmbeddings(model_name=\"all-MiniLM-L6-v2\")\n",
").as_retriever(search_kwargs={\"k\": 20})\n",
"\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"docs = retriever.invoke(query)\n",
"pretty_print_docs(docs)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Reranking with InfinityRerank\n",
"Now let's wrap our base retriever with a `ContextualCompressionRetriever`. We'll use the `InfinityRerank` to rerank the returned results."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Document 1:\n",
"\n",
"One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n",
"\n",
"And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nations top legal minds, who will continue Justice Breyers legacy of excellence.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 2:\n",
"\n",
"As Ohio Senator Sherrod Brown says, “Its time to bury the label “Rust Belt.” \n",
"\n",
"Its time. \n",
"\n",
"But with all the bright spots in our economy, record job growth and higher wages, too many families are struggling to keep up with the bills. \n",
"\n",
"Inflation is robbing them of the gains they might otherwise feel. \n",
"\n",
"I get it. Thats why my top priority is getting prices under control.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 3:\n",
"\n",
"A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since shes been nominated, shes received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n",
"\n",
"And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.\n"
]
}
],
"source": [
"from infinity_client import Client\n",
"from langchain.retrievers import ContextualCompressionRetriever\n",
"from langchain_community.document_compressors.infinity_rerank import InfinityRerank\n",
"\n",
"client = Client(base_url=\"http://localhost:7997\")\n",
"\n",
"compressor = InfinityRerank(client=client, model=\"mixedbread-ai/mxbai-rerank-xsmall-v1\")\n",
"compression_retriever = ContextualCompressionRetriever(\n",
" base_compressor=compressor, base_retriever=retriever\n",
")\n",
"\n",
"compressed_docs = compression_retriever.invoke(\n",
" \"What did the president say about Ketanji Jackson Brown\"\n",
")\n",
"pretty_print_docs(compressed_docs)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.13"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@ -2368,6 +2368,102 @@
")"
]
},
{
"cell_type": "markdown",
"id": "7e6b9b1a",
"metadata": {},
"source": [
"## `Memcached` Cache\n",
"You can use [Memcached](https://www.memcached.org/) as a cache to cache prompts and responses through [pymemcache](https://github.com/pinterest/pymemcache).\n",
"\n",
"This cache requires the pymemcache dependency to be installed:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "b2e5e0b1",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU pymemcache"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "4c7ffe37",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.cache import MemcachedCache\n",
"from pymemcache.client.base import Client\n",
"\n",
"set_llm_cache(MemcachedCache(Client(\"localhost\")))"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "a4cfc48a",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"CPU times: user 32.8 ms, sys: 21 ms, total: 53.8 ms\n",
"Wall time: 343 ms\n"
]
},
{
"data": {
"text/plain": [
"'\\n\\nWhy did the chicken cross the road?\\n\\nTo get to the other side!'"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"%%time\n",
"# The first time, it is not yet in cache, so it should take longer\n",
"llm.invoke(\"Tell me a joke\")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "cb3b2bf5",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"CPU times: user 2.31 ms, sys: 850 µs, total: 3.16 ms\n",
"Wall time: 6.43 ms\n"
]
},
{
"data": {
"text/plain": [
"'\\n\\nWhy did the chicken cross the road?\\n\\nTo get to the other side!'"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"%%time\n",
"# The second time it is, so it goes faster\n",
"llm.invoke(\"Tell me a joke\")"
]
},
{
"cell_type": "markdown",
"id": "7019c991-0101-4f9c-b212-5729a5471293",

View File

@ -0,0 +1,34 @@
# Memcached
> [Memcached](https://www.memcached.org/) is a free & open source, high-performance, distributed memory object caching system,
> generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load.
This page covers how to use Memcached with LangChain, using [pymemcache](https://github.com/pinterest/pymemcache) as
a client to connect to an already running Memcached instance.
## Installation and Setup
```bash
pip install pymemcache
```
## LLM Cache
To integrate a Memcached Cache into your application:
```python
from langchain.globals import set_llm_cache
from langchain_openai import OpenAI
from langchain_community.cache import MemcachedCache
from pymemcache.client.base import Client
llm = OpenAI(model="gpt-3.5-turbo-instruct", n=2, best_of=2)
set_llm_cache(MemcachedCache(Client('localhost')))
# The first time, it is not yet in cache, so it should take longer
llm.invoke("Which city is the most crowded city in the USA?")
# The second time it is, so it goes faster
llm.invoke("Which city is the most crowded city in the USA?")
```
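Under the hood, a LangChain LLM cache keys each entry on the prompt together with a string describing the model configuration, so the same prompt sent through a differently-configured model is a cache miss. A minimal dict-backed sketch of that lookup scheme (`FakeCache` and the key strings below are illustrative, not part of the library):

```python
# Illustrative stand-in for a cache backend such as Memcached.
# Entries are keyed on (prompt, llm_string), mirroring the
# lookup/update interface LangChain caches implement.
class FakeCache:
    def __init__(self):
        self._store = {}

    def lookup(self, prompt: str, llm_string: str):
        return self._store.get((prompt, llm_string))

    def update(self, prompt: str, llm_string: str, response: str) -> None:
        self._store[(prompt, llm_string)] = response


cache = FakeCache()
cache.update(
    "Tell me a joke",
    "openai|temperature=0.7",
    "Why did the chicken cross the road? To get to the other side!",
)

# Same prompt and same model config: cache hit
print(cache.lookup("Tell me a joke", "openai|temperature=0.7"))
# Same prompt, different config: cache miss (None)
print(cache.lookup("Tell me a joke", "openai|temperature=0.0"))
```

`MemcachedCache` implements this same `lookup`/`update` interface against a live Memcached server.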
Learn more in the [example notebook](/docs/integrations/llm_caching#memcached-cache)

View File

@ -17,14 +17,14 @@
"source": [
"# ClovaXEmbeddings\n",
"\n",
"This notebook covers how to get started with embedding models provided by CLOVA Studio. For detailed documentation on `ClovaXEmbeddings` features and configuration options, please refer to the [API reference](https://python.langchain.com/api_reference/community/embeddings/langchain_community.naver.ClovaXEmbeddings.html).\n",
"This notebook covers how to get started with embedding models provided by CLOVA Studio. For detailed documentation on `ClovaXEmbeddings` features and configuration options, please refer to the [API reference](https://python.langchain.com/api_reference/community/embeddings/langchain_community.embeddings.naver.ClovaXEmbeddings.html).\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Provider | Package |\n",
"|:--------:|:-------:|\n",
"| [Naver](/docs/integrations/providers/naver.mdx) | [langchain-community](https://python.langchain.com/api_reference/community/embeddings/langchain_community.naver.ClovaXEmbeddings.html) |\n",
"| [Naver](/docs/integrations/providers/naver.mdx) | [langchain-community](https://python.langchain.com/api_reference/community/embeddings/langchain_community.embeddings.naver.ClovaXEmbeddings.html) |\n",
"\n",
"## Setup\n",
"\n",

View File

@ -0,0 +1,332 @@
{
"cells": [
{
"cell_type": "raw",
"id": "afaf8039",
"metadata": {},
"source": [
"---\n",
"sidebar_label: CDP\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# CDP Agentkit Toolkit\n",
"\n",
"The `CDP Agentkit` toolkit contains tools that enable an LLM agent to interact with the [Coinbase Developer Platform](https://docs.cdp.coinbase.com/). The toolkit provides a wrapper around the CDP SDK, allowing agents to perform onchain operations like transfers, trades, and smart contract interactions.\n",
"\n",
"## Overview\n",
"\n",
"### Integration details\n",
"\n",
"| Class | Package | Serializable | JS support | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: |\n",
"| CdpToolkit | `cdp-langchain` | ❌ | ❌ | ![PyPI - Version](https://img.shields.io/pypi/v/cdp-langchain?style=flat-square&label=%20) |\n",
"\n",
"### Tool features\n",
"\n",
"The toolkit provides the following tools:\n",
"\n",
"1. **get_wallet_details** - Get details about the MPC Wallet\n",
"2. **get_balance** - Get balance for specific assets\n",
"3. **request_faucet_funds** - Request test tokens from faucet\n",
"4. **transfer** - Transfer assets between addresses\n",
"5. **trade** - Trade assets (Mainnet only)\n",
"6. **deploy_token** - Deploy ERC-20 token contracts\n",
"7. **mint_nft** - Mint NFTs from existing contracts\n",
"8. **deploy_nft** - Deploy new NFT contracts\n",
"9. **register_basename** - Register a basename for the wallet\n",
"\n",
"We encourage you to add your own tools, both using CDP and web2 APIs, to create an agent that is tailored to your needs.\n",
"\n",
"## Setup\n",
"\n",
"At a high-level, we will:\n",
"\n",
"1. Install the langchain package\n",
"2. Set up your CDP API credentials\n",
"3. Initialize the CDP wrapper and toolkit\n",
"4. Pass the tools to your agent with `toolkit.get_tools()`"
]
},
{
"cell_type": "markdown",
"id": "72ee0c4b",
"metadata": {},
"source": [
"If you want to get automated tracing from runs of individual tools, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a15d341e",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"id": "0730d6a1",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"This toolkit lives in the `cdp-langchain` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "652d6238",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU cdp-langchain"
]
},
{
"cell_type": "markdown",
"id": "a38cde65",
"metadata": {},
"source": [
"#### Set Environment Variables\n",
"\n",
"To use this toolkit, you must first set the following environment variables to access the [CDP APIs](https://docs.cdp.coinbase.com/mpc-wallet/docs/quickstart) to create wallets and interact onchain. You can sign up for an API key for free on the [CDP Portal](https://cdp.coinbase.com/):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cb09c344",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"for env_var in [\n",
" \"CDP_API_KEY_NAME\",\n",
" \"CDP_API_KEY_PRIVATE_KEY\",\n",
"]:\n",
" if not os.getenv(env_var):\n",
" os.environ[env_var] = getpass.getpass(f\"Enter your {env_var}: \")\n",
"\n",
"# Optional: Set network (defaults to base-sepolia)\n",
"os.environ[\"NETWORK_ID\"] = \"base-sepolia\" # or \"base-mainnet\""
]
},
{
"cell_type": "markdown",
"id": "5c5f2839",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our toolkit:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "51a60dbe",
"metadata": {},
"outputs": [],
"source": [
"from cdp_langchain.agent_toolkits import CdpToolkit\n",
"from cdp_langchain.utils import CdpAgentkitWrapper\n",
"\n",
"# Initialize CDP wrapper\n",
"cdp = CdpAgentkitWrapper()\n",
"\n",
"# Create toolkit from wrapper\n",
"toolkit = CdpToolkit.from_cdp_agentkit_wrapper(cdp)"
]
},
{
"cell_type": "markdown",
"id": "d11245ad",
"metadata": {},
"source": [
"## Tools\n",
"\n",
"View [available tools](#tool-features):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "310bf18e",
"metadata": {},
"outputs": [],
"source": [
"tools = toolkit.get_tools()\n",
"for tool in tools:\n",
" print(tool.name)"
]
},
{
"cell_type": "markdown",
"id": "23e11cc9",
"metadata": {},
"source": [
"## Use within an agent\n",
"\n",
"We will need an LLM or chat model:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d1ee55bc",
"metadata": {},
"outputs": [],
"source": [
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-4o-mini\")"
]
},
{
"cell_type": "markdown",
"id": "3a5bb5ca",
"metadata": {},
"source": [
"Initialize the agent with the tools:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f8a2c4b1",
"metadata": {},
"outputs": [],
"source": [
"from langgraph.prebuilt import create_react_agent\n",
"\n",
"tools = toolkit.get_tools()\n",
"agent_executor = create_react_agent(llm, tools)"
]
},
{
"cell_type": "markdown",
"id": "b4a7c9d2",
"metadata": {},
"source": [
"Example usage:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c9a8e4f3",
"metadata": {},
"outputs": [],
"source": [
"example_query = \"Send 0.005 ETH to john2879.base.eth\"\n",
"\n",
"events = agent_executor.stream(\n",
" {\"messages\": [(\"user\", example_query)]},\n",
" stream_mode=\"values\",\n",
")\n",
"for event in events:\n",
" event[\"messages\"][-1].pretty_print()"
]
},
{
"cell_type": "markdown",
"id": "e5a7c9d4",
"metadata": {},
"source": [
"Expected output:\n",
"```\n",
"Transferred 0.005 of eth to john2879.base.eth.\n",
"Transaction hash for the transfer: 0x78c7c2878659a0de216d0764fc87eff0d38b47f3315fa02ba493a83d8e782d1e\n",
"Transaction link for the transfer: https://sepolia.basescan.org/tx/0x78c7c2878659a0de216d0764fc87eff0d38b47f3315fa02ba493a83d8e782d1\n",
"```"
]
},
{
"cell_type": "markdown",
"id": "f5a7c9d5",
"metadata": {},
"source": [
"## CDP Toolkit Specific Features\n",
"\n",
"### Wallet Management\n",
"\n",
"The toolkit maintains an MPC wallet. The wallet data can be exported and imported to persist between sessions:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "g5a7c9d6",
"metadata": {},
"outputs": [],
"source": [
"# Export wallet data\n",
"wallet_data = cdp.export_wallet()\n",
"\n",
"# Import wallet data\n",
"values = {\"cdp_wallet_data\": wallet_data}\n",
"cdp = CdpAgentkitWrapper(**values)"
]
},
{
"cell_type": "markdown",
"id": "h5a7c9d7",
"metadata": {},
"source": [
"### Network Support\n",
"\n",
"The toolkit supports [multiple networks](https://docs.cdp.coinbase.com/cdp-sdk/docs/networks).\n",
"\n",
"### Gasless Transactions\n",
"\n",
"Some operations support gasless transactions on Base Mainnet:\n",
"- USDC transfers\n",
"- EURC transfers\n",
"- cbBTC transfers"
]
},
{
"cell_type": "markdown",
"id": "i5a7c9d8",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all CDP features and configurations, head to the [CDP docs](https://docs.cdp.coinbase.com/mpc-wallet/docs/welcome)."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@ -6,7 +6,7 @@
"source": [
"# Databricks Unity Catalog (UC)\n",
"\n",
"This notebook shows how to use UC functions as LangChain tools.\n",
"This notebook shows how to use UC functions as LangChain tools, with both LangChain and LangGraph agent APIs.\n",
"\n",
"See Databricks documentation ([AWS](https://docs.databricks.com/en/sql/language-manual/sql-ref-syntax-ddl-create-sql-function.html)|[Azure](https://learn.microsoft.com/en-us/azure/databricks/sql/language-manual/sql-ref-syntax-ddl-create-sql-function)|[GCP](https://docs.gcp.databricks.com/en/sql/language-manual/sql-ref-syntax-ddl-create-sql-function.html)) to learn how to create SQL or Python functions in UC. Do not skip function and parameter comments, which are critical for LLMs to call functions properly.\n",
"\n",
@ -34,11 +34,19 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install --upgrade --quiet databricks-sdk langchain-community mlflow"
"%pip install --upgrade --quiet databricks-sdk langchain-community langchain-databricks langgraph mlflow"
]
},
{
@ -47,7 +55,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_models.databricks import ChatDatabricks\n",
"from langchain_databricks import ChatDatabricks\n",
"\n",
"llm = ChatDatabricks(endpoint=\"databricks-meta-llama-3-70b-instruct\")"
]
@ -58,6 +66,7 @@
"metadata": {},
"outputs": [],
"source": [
"from databricks.sdk import WorkspaceClient\n",
"from langchain_community.tools.databricks import UCFunctionToolkit\n",
"\n",
"tools = (\n",
@ -76,9 +85,16 @@
},
{
"cell_type": "markdown",
"metadata": {},
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"(Optional) To increase the retry time for getting a function execution response, set environment variable UC_TOOL_CLIENT_EXECUTION_TIMEOUT. Default retry time value is 120s."
"(Optional) To increase the retry time for getting a function execution response, set environment variable UC_TOOL_CLIENT_EXECUTION_TIMEOUT. Default retry time value is 120s.",
"## LangGraph agent example"
]
},
{
@ -92,9 +108,68 @@
"os.environ[\"UC_TOOL_CLIENT_EXECUTION_TIMEOUT\"] = \"200\""
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"## LangGraph agent example"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
"data": {
"text/plain": [
"{'messages': [HumanMessage(content='36939 * 8922.4', additional_kwargs={}, response_metadata={}, id='1a10b10b-8e37-48c7-97a1-cac5006228d5'),\n",
" AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_a8f3986f-4b91-40a3-8d6d-39f431dab69b', 'type': 'function', 'function': {'name': 'main__tools__python_exec', 'arguments': '{\"code\": \"print(36939 * 8922.4)\"}'}}]}, response_metadata={'prompt_tokens': 771, 'completion_tokens': 29, 'total_tokens': 800}, id='run-865c3613-20ba-4e80-afc8-fde1cfb26e5a-0', tool_calls=[{'name': 'main__tools__python_exec', 'args': {'code': 'print(36939 * 8922.4)'}, 'id': 'call_a8f3986f-4b91-40a3-8d6d-39f431dab69b', 'type': 'tool_call'}]),\n",
" ToolMessage(content='{\"format\": \"SCALAR\", \"value\": \"329584533.59999996\\\\n\", \"truncated\": false}', name='main__tools__python_exec', id='8b63d4c8-1a3d-46a5-a719-393b2ef36770', tool_call_id='call_a8f3986f-4b91-40a3-8d6d-39f431dab69b'),\n",
" AIMessage(content='The result of the multiplication is:\\n\\n329584533.59999996', additional_kwargs={}, response_metadata={'prompt_tokens': 846, 'completion_tokens': 22, 'total_tokens': 868}, id='run-22772404-611b-46e4-9956-b85e4a385f0f-0')]}"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langgraph.prebuilt import create_react_agent\n",
"\n",
"agent = create_react_agent(\n",
" llm,\n",
" tools,\n",
" state_modifier=\"You are a helpful assistant. Make sure to use tool for information.\",\n",
")\n",
"agent.invoke({\"messages\": [{\"role\": \"user\", \"content\": \"36939 * 8922.4\"}]})"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"## LangChain agent example"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
@ -118,7 +193,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 6,
"metadata": {},
"outputs": [
{
@ -132,7 +207,9 @@
"Invoking: `main__tools__python_exec` with `{'code': 'print(36939 * 8922.4)'}`\n",
"\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3m{\"format\": \"SCALAR\", \"value\": \"329584533.59999996\\n\", \"truncated\": false}\u001b[0m\u001b[32;1m\u001b[1;3mThe result of the multiplication 36939 * 8922.4 is 329,584,533.60.\u001b[0m\n",
"\u001b[0m\u001b[36;1m\u001b[1;3m{\"format\": \"SCALAR\", \"value\": \"329584533.59999996\\n\", \"truncated\": false}\u001b[0m\u001b[32;1m\u001b[1;3mThe result of the multiplication is:\n",
"\n",
"329584533.59999996\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
@ -141,10 +218,10 @@
"data": {
"text/plain": [
"{'input': '36939 * 8922.4',\n",
" 'output': 'The result of the multiplication 36939 * 8922.4 is 329,584,533.60.'}"
" 'output': 'The result of the multiplication is:\\n\\n329584533.59999996'}"
]
},
"execution_count": 5,
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
@ -153,18 +230,11 @@
"agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)\n",
"agent_executor.invoke({\"input\": \"36939 * 8922.4\"})"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "llm",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@ -178,9 +248,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.11.10"
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 4
}

View File

@ -0,0 +1,270 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Google Books"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Overview"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Integration details\n",
"\n",
"The Google Books tool supports the ReAct pattern and allows you to search the Google Books API. Google Books is the world's largest curated index of books, with over 40 million entries that can give users a significant amount of data."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Tool features\n",
"\n",
"Currently the tool has the following capabilities:\n",
"- Gathers the relevant information from the Google Books API using a keyword search\n",
"- Formats the information into readable output and returns the result to the agent"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"Make sure `langchain-community` is installed."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "shellscript"
}
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-community"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Credentials\n",
"\n",
"You will need an API key from Google Books. You can get one by following the steps at [https://developers.google.com/books/docs/v1/using#APIKey](https://developers.google.com/books/docs/v1/using#APIKey).\n",
"\n",
"Then you will need to set the environment variable `GOOGLE_BOOKS_API_KEY` to your Google Books API key."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"To instantiate the tool, import the Google Books tool and set your credentials."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"from langchain_community.tools.google_books import GoogleBooksQueryRun\n",
"from langchain_community.utilities.google_books import GoogleBooksAPIWrapper\n",
"\n",
"os.environ[\"GOOGLE_BOOKS_API_KEY\"] = \"<your Google Books API key>\"\n",
"tool = GoogleBooksQueryRun(api_wrapper=GoogleBooksAPIWrapper())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Invocation\n",
"\n",
"You can invoke the tool by calling the `run` method."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Here are 5 suggestions for books related to ai:\n",
"\n",
"1. \"AI's Take on the Stigma Against AI-Generated Content\" by Sandy Y. Greenleaf: In a world where artificial intelligence (AI) is rapidly advancing and transforming various industries, a new form of content creation has emerged: AI-generated content. However, despite its potential to revolutionize the way we produce and consume information, AI-generated content often faces a significant stigma. \"AI's Take on the Stigma Against AI-Generated Content\" is a groundbreaking book that delves into the heart of this issue, exploring the reasons behind the stigma and offering a fresh, unbiased perspective on the topic. Written from the unique viewpoint of an AI, this book provides readers with a comprehensive understanding of the challenges and opportunities surrounding AI-generated content. Through engaging narratives, thought-provoking insights, and real-world examples, this book challenges readers to reconsider their preconceptions about AI-generated content. It explores the potential benefits of embracing this technology, such as increased efficiency, creativity, and accessibility, while also addressing the concerns and drawbacks that contribute to the stigma. As you journey through the pages of this book, you'll gain a deeper understanding of the complex relationship between humans and AI in the realm of content creation. You'll discover how AI can be used as a tool to enhance human creativity, rather than replace it, and how collaboration between humans and machines can lead to unprecedented levels of innovation. Whether you're a content creator, marketer, business owner, or simply someone curious about the future of AI and its impact on our society, \"AI's Take on the Stigma Against AI-Generated Content\" is an essential read. With its engaging writing style, well-researched insights, and practical strategies for navigating this new landscape, this book will leave you equipped with the knowledge and tools needed to embrace the AI revolution and harness its potential for success. Prepare to have your assumptions challenged, your mind expanded, and your perspective on AI-generated content forever changed. Get ready to embark on a captivating journey that will redefine the way you think about the future of content creation.\n",
"You can read more at https://play.google.com/store/books/details?id=4iH-EAAAQBAJ&source=gbs_api\n",
"\n",
"2. \"AI Strategies For Web Development\" by Anderson Soares Furtado Oliveira: From fundamental to advanced strategies, unlock useful insights for creating innovative, user-centric websites while navigating the evolving landscape of AI ethics and security Key Features Explore AI's role in web development, from shaping projects to architecting solutions Master advanced AI strategies to build cutting-edge applications Anticipate future trends by exploring next-gen development environments, emerging interfaces, and security considerations in AI web development Purchase of the print or Kindle book includes a free PDF eBook Book Description If you're a web developer looking to leverage the power of AI in your projects, then this book is for you. Written by an AI and ML expert with more than 15 years of experience, AI Strategies for Web Development takes you on a transformative journey through the dynamic intersection of AI and web development, offering a hands-on learning experience.The first part of the book focuses on uncovering the profound impact of AI on web projects, exploring fundamental concepts, and navigating popular frameworks and tools. As you progress, you'll learn how to build smart AI applications with design intelligence, personalized user journeys, and coding assistants. Later, you'll explore how to future-proof your web development projects using advanced AI strategies and understand AI's impact on jobs. Toward the end, you'll immerse yourself in AI-augmented development, crafting intelligent web applications and navigating the ethical landscape.Packed with insights into next-gen development environments, AI-augmented practices, emerging realities, interfaces, and security governance, this web development book acts as your roadmap to staying ahead in the AI and web development domain. What you will learn Build AI-powered web projects with optimized models Personalize UX dynamically with AI, NLP, chatbots, and recommendations Explore AI coding assistants and other tools for advanced web development Craft data-driven, personalized experiences using pattern recognition Architect effective AI solutions while exploring the future of web development Build secure and ethical AI applications following TRiSM best practices Explore cutting-edge AI and web development trends Who this book is for This book is for web developers with experience in programming languages and an interest in keeping up with the latest trends in AI-powered web development. Full-stack, front-end, and back-end developers, UI/UX designers, software engineers, and web development enthusiasts will also find valuable information and practical guidelines for developing smarter websites with AI. To get the most out of this book, it is recommended that you have basic knowledge of programming languages such as HTML, CSS, and JavaScript, as well as a familiarity with machine learning concepts.\n",
"You can read more at https://play.google.com/store/books/details?id=FzYZEQAAQBAJ&source=gbs_api\n",
"\n",
"3. \"Artificial Intelligence for Students\" by Vibha Pandey: A multifaceted approach to develop an understanding of AI and its potential applications KEY FEATURES ● AI-informed focuses on AI foundation, applications, and methodologies. ● AI-inquired focuses on computational thinking and bias awareness. ● AI-innovate focuses on creative and critical thinking and the Capstone project. DESCRIPTION AI is a discipline in Computer Science that focuses on developing intelligent machines, machines that can learn and then teach themselves. If you are interested in AI, this book can definitely help you prepare for future careers in AI and related fields. The book is aligned with the CBSE course, which focuses on developing employability and vocational competencies of students in skill subjects. The book is an introduction to the basics of AI. It is divided into three parts AI-informed, AI-inquired and AI-innovate. It will help you understand AI's implications on society and the world. You will also develop a deeper understanding of how it works and how it can be used to solve complex real-world problems. Additionally, the book will also focus on important skills such as problem scoping, goal setting, data analysis, and visualization, which are essential for success in AI projects. Lastly, you will learn how decision trees, neural networks, and other AI concepts are commonly used in real-world applications. By the end of the book, you will develop the skills and competencies required to pursue a career in AI. WHAT YOU WILL LEARN ● Get familiar with the basics of AI and Machine Learning. ● Understand how and where AI can be applied. ● Explore different applications of mathematical methods in AI. ● Get tips for improving your skills in Data Storytelling. ● Understand what is AI bias and how it can affect human rights. WHO THIS BOOK IS FOR This book is for CBSE class XI and XII students who want to learn and explore more about AI. Basic knowledge of Statistical concepts, Algebra, and Plotting of equations is a must. TABLE OF CONTENTS 1. Introduction: AI for Everyone 2. AI Applications and Methodologies 3. Mathematics in Artificial Intelligence 4. AI Values (Ethical Decision-Making) 5. Introduction to Storytelling 6. Critical and Creative Thinking 7. Data Analysis 8. Regression 9. Classification and Clustering 10. AI Values (Bias Awareness) 11. Capstone Project 12. Model Lifecycle (Knowledge) 13. Storytelling Through Data 14. AI Applications in Use in Real-World\n",
"You can read more at https://play.google.com/store/books/details?id=ptq1EAAAQBAJ&source=gbs_api\n",
"\n",
"4. \"The AI Book\" by Ivana Bartoletti, Anne Leslie and Shân M. Millie: Written by prominent thought leaders in the global fintech space, The AI Book aggregates diverse expertise into a single, informative volume and explains what artifical intelligence really means and how it can be used across financial services today. Key industry developments are explained in detail, and critical insights from cutting-edge practitioners offer first-hand information and lessons learned. Coverage includes: · Understanding the AI Portfolio: from machine learning to chatbots, to natural language processing (NLP); a deep dive into the Machine Intelligence Landscape; essentials on core technologies, rethinking enterprise, rethinking industries, rethinking humans; quantum computing and next-generation AI · AI experimentation and embedded usage, and the change in business model, value proposition, organisation, customer and co-worker experiences in todays Financial Services Industry · The future state of financial services and capital markets whats next for the real-world implementation of AITech? · The innovating customer users are not waiting for the financial services industry to work out how AI can re-shape their sector, profitability and competitiveness · Boardroom issues created and magnified by AI trends, including conduct, regulation & oversight in an algo-driven world, cybersecurity, diversity & inclusion, data privacy, the unbundled corporation & the future of work, social responsibility, sustainability, and the new leadership imperatives · Ethical considerations of deploying Al solutions and why explainable Al is so important\n",
"You can read more at http://books.google.ca/books?id=oE3YDwAAQBAJ&dq=ai&hl=&source=gbs_api\n",
"\n",
"5. \"Artificial Intelligence in Society\" by OECD: The artificial intelligence (AI) landscape has evolved significantly from 1950 when Alan Turing first posed the question of whether machines can think. Today, AI is transforming societies and economies. It promises to generate productivity gains, improve well-being and help address global challenges, such as climate change, resource scarcity and health crises.\n",
"You can read more at https://play.google.com/store/books/details?id=eRmdDwAAQBAJ&source=gbs_api'"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tool.run(\"ai\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### [Invoke directly with args](/docs/concepts/#invoke-with-just-the-arguments)\n",
"\n",
"See below for a direct invocation example."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"from langchain_community.tools.google_books import GoogleBooksQueryRun\n",
"from langchain_community.utilities.google_books import GoogleBooksAPIWrapper\n",
"\n",
"os.environ[\"GOOGLE_BOOKS_API_KEY\"] = \"<your Google Books API key>\"\n",
"tool = GoogleBooksQueryRun(api_wrapper=GoogleBooksAPIWrapper())\n",
"\n",
"tool.run(\"ai\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### [Invoke with ToolCall](/docs/concepts/#invoke-with-toolcall)\n",
"\n",
"See below for a tool call example."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"from langchain_community.tools.google_books import GoogleBooksQueryRun\n",
"from langchain_community.utilities.google_books import GoogleBooksAPIWrapper\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import PromptTemplate\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()\n",
"os.environ[\"GOOGLE_BOOKS_API_KEY\"] = \"<your Google Books API key>\"\n",
"\n",
"tool = GoogleBooksQueryRun(api_wrapper=GoogleBooksAPIWrapper())\n",
"llm = ChatOpenAI(model=\"gpt-4o-mini\")\n",
"prompt = PromptTemplate.from_template(\n",
" \"Return the keyword, and only the keyword, that the user is looking for from this text: {text}\"\n",
")\n",
"\n",
"\n",
"def suggest_books(query):\n",
" chain = prompt | llm | StrOutputParser()\n",
" keyword = chain.invoke({\"text\": query})\n",
" return tool.run(keyword)\n",
"\n",
"\n",
"suggestions = suggest_books(\"I need some information on AI\")\n",
"print(suggestions)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"See the example below for chaining."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"from langchain import hub\n",
"from langchain.agents import AgentExecutor, create_tool_calling_agent\n",
"from langchain_community.tools.google_books import GoogleBooksQueryRun\n",
"from langchain_community.utilities.google_books import GoogleBooksAPIWrapper\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()\n",
"os.environ[\"GOOGLE_BOOKS_API_KEY\"] = \"<your Google Books API key>\"\n",
"\n",
"tool = GoogleBooksQueryRun(api_wrapper=GoogleBooksAPIWrapper())\n",
"llm = ChatOpenAI(model=\"gpt-4o-mini\")\n",
"\n",
"instructions = \"\"\"You are a book suggesting assistant.\"\"\"\n",
"base_prompt = hub.pull(\"langchain-ai/openai-functions-template\")\n",
"prompt = base_prompt.partial(instructions=instructions)\n",
"\n",
"tools = [tool]\n",
"agent = create_tool_calling_agent(llm, tools, prompt)\n",
"agent_executor = AgentExecutor(\n",
" agent=agent,\n",
" tools=tools,\n",
" verbose=True,\n",
")\n",
"\n",
"agent_executor.invoke({\"input\": \"Can you recommend me some books related to ai?\"})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"The Google Books API can be found here: [https://developers.google.com/books](https://developers.google.com/books)"
]
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@ -101,7 +101,7 @@
"source": [
"## Instantiating a Browser Toolkit\n",
"\n",
"It's always recommended to instantiate using the `from_browser` method so that the "
"It's always recommended to instantiate using the `from_browser` method so that the browser context is properly initialized and managed, ensuring seamless interaction and resource optimization."
]
},
{

View File

@ -19,8 +19,8 @@ import useBaseUrl from '@docusaurus/useBaseUrl';
<ThemedImage
alt="Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers."
sources={{
light: useBaseUrl('/svg/langchain_stack_062024.svg'),
dark: useBaseUrl('/svg/langchain_stack_062024_dark.svg'),
light: useBaseUrl('/svg/langchain_stack_112024.svg'),
dark: useBaseUrl('/svg/langchain_stack_112024_dark.svg'),
}}
style={{ width: "100%" }}
title="LangChain Framework Overview"
@ -29,9 +29,9 @@ import useBaseUrl from '@docusaurus/useBaseUrl';
Concretely, the framework consists of the following open-source libraries:
- **`langchain-core`**: Base abstractions and LangChain Expression Language.
- **`langchain-community`**: Third party integrations.
- Partner packages (e.g. **`langchain-openai`**, **`langchain-anthropic`**, etc.): Some integrations have been further split into their own lightweight packages that only depend on **`langchain-core`**.
- Integration packages (e.g. **`langchain-openai`**, **`langchain-anthropic`**, etc.): Important integrations have been split into lightweight packages that are co-maintained by the LangChain team and the integration developers.
- **`langchain`**: Chains, agents, and retrieval strategies that make up an application's cognitive architecture.
- **`langchain-community`**: Third-party integrations that are community maintained.
- **[LangGraph](https://langchain-ai.github.io/langgraph)**: Build robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph. Integrates smoothly with LangChain, but can be used without it.
- **[LangServe](/docs/langserve)**: Deploy LangChain chains as REST APIs.
- **[LangSmith](https://docs.smith.langchain.com)**: A developer platform that lets you debug, test, evaluate, and monitor LLM applications.
@ -62,7 +62,8 @@ Explore the full list of LangChain tutorials [here](/docs/tutorials), and check
[Here](/docs/how_to) you’ll find short answers to “How do I….?” types of questions.
These how-to guides don’t cover topics in depth; you’ll find that material in the [Tutorials](/docs/tutorials) and the [API Reference](https://python.langchain.com/api_reference/).
However, these guides will help you quickly accomplish common tasks.
However, these guides will help you quickly accomplish common tasks using [chat models](/docs/how_to/#chat-models),
[vector stores](/docs/how_to/#vector-stores), and other common LangChain components.
Check out [LangGraph-specific how-tos here](https://langchain-ai.github.io/langgraph/how-tos/).
@ -72,6 +73,13 @@ Introductions to all the key parts of LangChain you’ll need to know! [Here](/d
For a deeper dive into LangGraph concepts, check out [this page](https://langchain-ai.github.io/langgraph/concepts/).
## [Integrations](integrations/providers/index.mdx)
LangChain is part of a rich ecosystem of tools that integrate with our framework and build on top of it.
If you're looking to get up and running quickly with [chat models](/docs/integrations/chat/), [vector stores](/docs/integrations/vectorstores/),
or other LangChain components from a specific provider, check out our growing list of [integrations](/docs/integrations/providers/).
## [API reference](https://python.langchain.com/api_reference/)
Head to the reference section for full documentation of all classes and methods in the LangChain Python packages.
@ -91,8 +99,5 @@ See what changed in v0.3, learn how to migrate legacy code, read up on our versi
### [Security](/docs/security)
Read up on [security](/docs/security) best practices to make sure you're developing safely with LangChain.
### [Integrations](integrations/providers/index.mdx)
LangChain is part of a rich ecosystem of tools that integrate with our framework and build on top of it. Check out our growing list of [integrations](/docs/integrations/providers/).
### [Contributing](contributing/index.mdx)
Check out the developer's guide for guidelines on contributing and help getting your dev environment set up.

View File

@ -4,28 +4,35 @@ sidebar_class_name: hidden
---
# Tutorials
New to LangChain or LLM app development in general? Read this material to quickly get up and running.
New to LangChain or LLM app development in general? Read this material to quickly get up and running building your first applications.
If you're looking to get up and running quickly with [chat models](/docs/integrations/chat/), [vector stores](/docs/integrations/vectorstores/),
or other LangChain components from a specific provider, check out our supported [integrations](/docs/integrations/providers/).
Refer to the [how-to guides](/docs/how_to) for more detail on using common LangChain components.
See the [conceptual documentation](/docs/concepts) for high level explanations of all LangChain concepts.
## Basics
- [Build a Simple LLM Application with LCEL](/docs/tutorials/llm_chain)
- [Build a Chatbot](/docs/tutorials/chatbot)
- [Build vector stores and retrievers](/docs/tutorials/retrievers)
- [Build an Agent](/docs/tutorials/agents)
- [LLM applications](/docs/tutorials/llm_chain): Build and deploy a simple LLM application.
- [Chatbots](/docs/tutorials/chatbot): Build a chatbot that incorporates memory.
- [Vector stores](/docs/tutorials/retrievers): Build vector stores and use them to retrieve data.
- [Agents](/docs/tutorials/agents): Build an agent that interacts with external tools.
## Working with external knowledge
- [Build a Retrieval Augmented Generation (RAG) Application](/docs/tutorials/rag)
- [Build a Conversational RAG Application](/docs/tutorials/qa_chat_history)
- [Build a Question/Answering system over SQL data](/docs/tutorials/sql_qa)
- [Build a Query Analysis System](/docs/tutorials/query_analysis)
- [Build a local RAG application](/docs/tutorials/local_rag)
- [Build a Question Answering application over a Graph Database](/docs/tutorials/graph)
- [Build a PDF ingestion and Question/Answering system](/docs/tutorials/pdf_qa/)
- [Retrieval Augmented Generation (RAG)](/docs/tutorials/rag): Build an application that uses your own documents to inform its responses.
- [Conversational RAG](/docs/tutorials/qa_chat_history): Build a RAG application that incorporates a memory of its user interactions.
- [Question-Answering with SQL](/docs/tutorials/sql_qa): Build a question-answering system that executes SQL queries to inform its responses.
- [Query Analysis](/docs/tutorials/query_analysis): Build a RAG application that analyzes questions to generate filters and other structured queries.
- [Local RAG](/docs/tutorials/local_rag): Build a RAG application using LLMs running locally on your machine.
- [Question-Answering with Graph Databases](/docs/tutorials/graph): Build a question-answering system that queries a graph database to inform its responses.
- [Question-Answering with PDFs](/docs/tutorials/pdf_qa/): Build a question-answering system that ingests PDFs and uses them to inform its responses.
## Specialized tasks
- [Build an Extraction Chain](/docs/tutorials/extraction)
- [Generate synthetic data](/docs/tutorials/data_generation)
- [Classify text into labels](/docs/tutorials/classification)
- [Summarize text](/docs/tutorials/summarization)
- [Extraction](/docs/tutorials/extraction): Extract structured data from text and other unstructured media.
- [Synthetic data](/docs/tutorials/data_generation): Generate synthetic data using LLMs.
- [Classification](/docs/tutorials/classification): Classify text into categories or labels.
- [Summarization](/docs/tutorials/summarization): Generate summaries of (potentially long) texts.
## LangGraph

View File

@ -93,7 +93,7 @@
"source": [
"## Using Language Models\n",
"\n",
"First up, let's learn how to use a language model by itself. LangChain supports many different language models that you can use interchangeably - select the one you want to use below!\n",
"First up, let's learn how to use a language model by itself. LangChain supports many different language models that you can use interchangeably. For details on getting started with a specific model, refer to [supported integrations](/docs/integrations/chat/).\n",
"\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
"\n",

View File

@ -2,13 +2,17 @@
echo "VERCEL_ENV: $VERCEL_ENV"
echo "VERCEL_GIT_COMMIT_REF: $VERCEL_GIT_COMMIT_REF"
echo "VERCEL_GIT_REPO_OWNER: $VERCEL_GIT_REPO_OWNER"
echo "VERCEL_GIT_REPO_SLUG: $VERCEL_GIT_REPO_SLUG"
if [ "$VERCEL_ENV" == "production" ] || \
if { \
[ "$VERCEL_ENV" == "production" ] || \
[ "$VERCEL_GIT_COMMIT_REF" == "master" ] || \
[ "$VERCEL_GIT_COMMIT_REF" == "v0.1" ] || \
[ "$VERCEL_GIT_COMMIT_REF" == "v0.2" ] || \
[ "$VERCEL_GIT_COMMIT_REF" == "v0.3rc" ]
[ "$VERCEL_GIT_COMMIT_REF" == "v0.3rc" ]; \
} && [ "$VERCEL_GIT_REPO_OWNER" == "langchain-ai" ]
then
echo "✅ Production build - proceeding with build"
exit 1

View File

@ -406,7 +406,9 @@ def _format_api_ref_url(doc_path: str, compact: bool = False) -> str:
def _format_template_url(template_name: str) -> str:
return f"https://{LANGCHAIN_PYTHON_URL}/docs/templates/{template_name}"
return (
f"https://github.com/langchain-ai/langchain/blob/v0.2/templates/{template_name}"
)
def _format_cookbook_url(cookbook_name: str) -> str:

BIN
docs/static/img/graph_construction3.png vendored Normal file (new image, 168 KiB)

Binary file not shown.

BIN
docs/static/img/graph_construction4.png vendored Normal file (new image, 151 KiB)

Binary file not shown.

File diff suppressed because one or more lines are too long


File diff suppressed because one or more lines are too long


View File

@ -76,7 +76,7 @@
},
{
"source": "/v0.2/docs/templates/:path(.*/?)*",
"destination": "https://github.com/langchain-ai/langchain/tree/master/templates/:path*"
"destination": "https://github.com/langchain-ai/langchain/tree/v0.2/templates/:path*"
},
{
"source": "/docs/integrations/providers/mlflow_ai_gateway(/?)",

View File

@ -4,5 +4,3 @@ This package implements the official CLI for LangChain. Right now, it is most us
for getting started with LangChain Templates!
[CLI Docs](https://github.com/langchain-ai/langchain/blob/master/libs/cli/DOCS.md)
[LangServe Templates Quickstart](https://github.com/langchain-ai/langchain/blob/master/templates/README.md)

View File

@ -15,7 +15,7 @@ LangChain Community contains third-party integrations that implement the base in
For full documentation see the [API reference](https://api.python.langchain.com/en/stable/community_api_reference.html).
![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](https://raw.githubusercontent.com/langchain-ai/langchain/e1d113ea84a2edcf4a7709fc5be0e972ea74a5d9/docs/static/svg/langchain_stack_062024.svg "LangChain Framework Overview")
![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](https://raw.githubusercontent.com/langchain-ai/langchain/e1d113ea84a2edcf4a7709fc5be0e972ea74a5d9/docs/static/svg/langchain_stack_112024.svg "LangChain Framework Overview")
## 📕 Releases & Versioning

View File

@ -77,6 +77,7 @@ from langchain_community.utilities.dalle_image_generator import DallEAPIWrapper
from langchain_community.utilities.dataforseo_api_search import DataForSeoAPIWrapper
from langchain_community.utilities.duckduckgo_search import DuckDuckGoSearchAPIWrapper
from langchain_community.utilities.golden_query import GoldenQueryAPIWrapper
from langchain_community.utilities.google_books import GoogleBooksAPIWrapper
from langchain_community.utilities.google_finance import GoogleFinanceAPIWrapper
from langchain_community.utilities.google_jobs import GoogleJobsAPIWrapper
from langchain_community.utilities.google_lens import GoogleLensAPIWrapper
@ -333,6 +334,12 @@ def _get_pubmed(**kwargs: Any) -> BaseTool:
return PubmedQueryRun(api_wrapper=PubMedAPIWrapper(**kwargs))
def _get_google_books(**kwargs: Any) -> BaseTool:
from langchain_community.tools.google_books import GoogleBooksQueryRun
return GoogleBooksQueryRun(api_wrapper=GoogleBooksAPIWrapper(**kwargs))
def _get_google_jobs(**kwargs: Any) -> BaseTool:
return GoogleJobsQueryRun(api_wrapper=GoogleJobsAPIWrapper(**kwargs))
@ -490,6 +497,7 @@ _EXTRA_OPTIONAL_TOOLS: Dict[str, Tuple[Callable[[KwArg(Any)], BaseTool], List[st
"bing-search": (_get_bing_search, ["bing_subscription_key", "bing_search_url"]),
"metaphor-search": (_get_metaphor_search, ["metaphor_api_key"]),
"ddg-search": (_get_ddg_search, []),
"google-books": (_get_google_books, ["google_books_api_key"]),
"google-lens": (_get_google_lens, ["serp_api_key"]),
"google-serper": (_get_google_serper, ["serper_api_key", "aiosession"]),
"google-scholar": (

View File

@ -91,6 +91,7 @@ logger = logging.getLogger(__file__)
if TYPE_CHECKING:
import momento
import pymemcache
from astrapy.db import AstraDB, AsyncAstraDB
from cassandra.cluster import Session as CassandraSession
@ -2599,3 +2600,96 @@ class SingleStoreDBSemanticCache(BaseCache):
if index_name in self._cache_dict:
self._cache_dict[index_name].drop()
del self._cache_dict[index_name]
class MemcachedCache(BaseCache):
"""Cache that uses Memcached backend through pymemcache client lib"""
def __init__(self, client_: Any):
"""
Initialize an instance of MemcachedCache.
Args:
client_ (Client): An instance of any of pymemcache's Clients
(Client, PooledClient, HashClient)
Example:
.. code-block:: python
from langchain.globals import set_llm_cache
from langchain_openai import OpenAI
from langchain_community.cache import MemcachedCache
from pymemcache.client.base import Client
llm = OpenAI(model="gpt-3.5-turbo-instruct", n=2, best_of=2)
set_llm_cache(MemcachedCache(Client('localhost')))
# The first time, it is not yet in cache, so it should take longer
llm.invoke("Which city is the most crowded city in the USA?")
# The second time it is, so it goes faster
llm.invoke("Which city is the most crowded city in the USA?")
"""
try:
from pymemcache.client import (
Client,
HashClient,
PooledClient,
RetryingClient,
)
except (ImportError, ModuleNotFoundError):
raise ImportError(
"Could not import pymemcache python package. "
"Please install it with `pip install -U pymemcache`."
)
if not (
isinstance(client_, Client)
or isinstance(client_, PooledClient)
or isinstance(client_, HashClient)
or isinstance(client_, RetryingClient)
):
raise ValueError("Please pass a valid pymemcache client")
self.client = client_
def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]:
"""Look up based on prompt and llm_string."""
key = _hash(prompt + llm_string)
try:
result = self.client.get(key)
except pymemcache.MemcacheError:
return None
return _loads_generations(result) if result is not None else None
def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
"""Update cache based on prompt and llm_string."""
key = _hash(prompt + llm_string)
# Validate input is made of standard LLM generations
for gen in return_val:
if not isinstance(gen, Generation):
raise ValueError(
"Memcached only supports caching of normal LLM generations, "
+ f"got {type(gen)}"
)
# Serialize return_val into a string and update cache
value = _dumps_generations(return_val)
self.client.set(key, value)
def clear(self, **kwargs: Any) -> None:
"""
Clear the entire cache. Takes optional kwargs:
delay: optional int, the number of seconds to wait before flushing,
or zero to flush immediately (the default). NON-BLOCKING, returns
immediately.
noreply: optional bool, True to not wait for the reply (defaults to
client.default_noreply).
"""
delay = kwargs.get("delay", 0)
noreply = kwargs.get("noreply", None)
self.client.flush_all(delay, noreply)
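The cache above keys entries on a hash of the prompt plus the LLM config string. A minimal sketch of that lookup/update round-trip, using an in-memory dict in place of a pymemcache client and JSON in place of the private generation serializers (`_hash` here is a simplified stand-in, not the library's implementation):

```python
import hashlib
import json


def _hash(s: str) -> str:
    # Simplified stand-in for the cache module's key-hashing helper.
    return hashlib.md5(s.encode()).hexdigest()


class DictBackedCache:
    """Sketch of MemcachedCache's key scheme, with a dict instead of pymemcache."""

    def __init__(self) -> None:
        self.client: dict = {}

    def lookup(self, prompt: str, llm_string: str):
        key = _hash(prompt + llm_string)
        raw = self.client.get(key)
        return json.loads(raw) if raw is not None else None

    def update(self, prompt: str, llm_string: str, return_val) -> None:
        key = _hash(prompt + llm_string)
        self.client[key] = json.dumps(return_val)


cache = DictBackedCache()
assert cache.lookup("Which city?", "llm-config") is None  # cache miss
cache.update("Which city?", "llm-config", ["New York"])
assert cache.lookup("Which city?", "llm-config") == ["New York"]  # cache hit
```

The real class delegates `get`/`set` to the pymemcache client and serializes `Generation` objects rather than plain lists, but the keying logic is the same.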

View File

@ -66,17 +66,13 @@ def convert_message_to_dict(message: BaseMessage) -> dict:
message_dict = {"role": "user", "content": message.content}
elif isinstance(message, AIMessage):
message_dict = {"role": "assistant", "content": message.content}
if "function_call" in message.additional_kwargs:
message_dict["function_call"] = message.additional_kwargs["function_call"]
elif len(message.tool_calls) != 0:
if len(message.tool_calls) != 0:
tool_call = message.tool_calls[0]
message_dict["function_call"] = {
"name": tool_call["name"],
"args": tool_call["args"],
"arguments": json.dumps(tool_call["args"], ensure_ascii=False),
}
# If function call only, content is None not empty string
if "function_call" in message_dict and message_dict["content"] == "":
message_dict["content"] = None
elif isinstance(message, (FunctionMessage, ToolMessage)):
message_dict = {
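The hunk above simplifies the assistant branch: the first tool call is projected into an OpenAI-style `function_call`, and a function-call-only message carries `None` content rather than an empty string. A sketch of that branch with a plain dict standing in for the `AIMessage` (the tool-call values are hypothetical):

```python
import json

# Hypothetical tool call, shaped like AIMessage.tool_calls entries.
tool_calls = [{"name": "get_weather", "args": {"city": "Tokyo"}}]
message_dict = {"role": "assistant", "content": ""}
if len(tool_calls) != 0:
    tc = tool_calls[0]
    message_dict["function_call"] = {
        "name": tc["name"],
        "args": tc["args"],
        "arguments": json.dumps(tc["args"], ensure_ascii=False),
    }
# If function call only, content is None not empty string
if "function_call" in message_dict and message_dict["content"] == "":
    message_dict["content"] = None

assert message_dict["content"] is None
assert message_dict["function_call"]["arguments"] == '{"city": "Tokyo"}'
```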

View File

@ -0,0 +1,245 @@
import logging
from operator import itemgetter
from typing import (
Any,
Callable,
Dict,
List,
Literal,
Optional,
Sequence,
Type,
Union,
cast,
)
from uuid import uuid4
import requests
from langchain.schema import AIMessage, ChatGeneration, ChatResult, HumanMessage
from langchain_core.callbacks import CallbackManagerForLLMRun
from langchain_core.language_models import LanguageModelInput
from langchain_core.language_models.chat_models import BaseChatModel
from langchain_core.messages import (
AIMessageChunk,
BaseMessage,
SystemMessage,
ToolCall,
ToolMessage,
)
from langchain_core.messages.tool import tool_call
from langchain_core.output_parsers import (
JsonOutputParser,
PydanticOutputParser,
)
from langchain_core.output_parsers.base import OutputParserLike
from langchain_core.output_parsers.openai_tools import (
JsonOutputKeyToolsParser,
PydanticToolsParser,
)
from langchain_core.runnables import Runnable, RunnablePassthrough
from langchain_core.runnables.base import RunnableMap
from langchain_core.tools import BaseTool
from langchain_core.utils.function_calling import convert_to_openai_tool
from langchain_core.utils.pydantic import is_basemodel_subclass
from pydantic import BaseModel, Field
# Initialize logging
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s - %(levelname)s - %(message)s",
datefmt="%Y-%m-%d %H:%M:%S",
)
_logger = logging.getLogger(__name__)
def _is_pydantic_class(obj: Any) -> bool:
return isinstance(obj, type) and is_basemodel_subclass(obj)
def _convert_messages_to_cloudflare_messages(
messages: List[BaseMessage],
) -> List[Dict[str, Any]]:
"""Convert LangChain messages to Cloudflare Workers AI format."""
cloudflare_messages = []
msg: Dict[str, Any]
for message in messages:
# Base structure for each message
msg = {
"role": "",
"content": message.content if isinstance(message.content, str) else "",
}
# Determine role and additional fields based on message type
if isinstance(message, HumanMessage):
msg["role"] = "user"
elif isinstance(message, AIMessage):
msg["role"] = "assistant"
# If the AIMessage includes tool calls, format them as needed
if message.tool_calls:
tool_calls = [
{"name": tool_call["name"], "arguments": tool_call["args"]}
for tool_call in message.tool_calls
]
msg["tool_calls"] = tool_calls
elif isinstance(message, SystemMessage):
msg["role"] = "system"
elif isinstance(message, ToolMessage):
msg["role"] = "tool"
msg["tool_call_id"] = (
message.tool_call_id
) # Use tool_call_id if it's a ToolMessage
# Add the formatted message to the list
cloudflare_messages.append(msg)
return cloudflare_messages
def _get_tool_calls_from_response(response: requests.Response) -> List[ToolCall]:
"""Get tool calls from a Cloudflare Workers AI response."""
tool_calls = []
if "tool_calls" in response.json()["result"]:
for tc in response.json()["result"]["tool_calls"]:
tool_calls.append(
tool_call(
id=str(uuid4()),
name=tc["name"],
args=tc["arguments"],
)
)
return tool_calls
class ChatCloudflareWorkersAI(BaseChatModel):
"""Custom chat model for Cloudflare Workers AI"""
account_id: str = Field(...)
api_token: str = Field(...)
model: str = Field(...)
ai_gateway: str = ""
url: str = ""
base_url: str = "https://api.cloudflare.com/client/v4/accounts"
gateway_url: str = "https://gateway.ai.cloudflare.com/v1"
def __init__(self, **kwargs: Any) -> None:
"""Initialize with necessary credentials."""
super().__init__(**kwargs)
if self.ai_gateway:
self.url = (
f"{self.gateway_url}/{self.account_id}/"
f"{self.ai_gateway}/workers-ai/run/{self.model}"
)
else:
self.url = f"{self.base_url}/{self.account_id}/ai/run/{self.model}"
def _generate(
self,
messages: List[BaseMessage],
stop: Optional[List[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
**kwargs: Any,
) -> ChatResult:
"""Generate a response based on the messages provided."""
formatted_messages = _convert_messages_to_cloudflare_messages(messages)
headers = {"Authorization": f"Bearer {self.api_token}"}
prompt = "\n".join(
f"role: {msg['role']}, content: {msg['content']}"
+ (f", tools: {msg['tool_calls']}" if "tool_calls" in msg else "")
+ (
f", tool_call_id: {msg['tool_call_id']}"
if "tool_call_id" in msg
else ""
)
for msg in formatted_messages
)
# Initialize `data` with `prompt`
data = {
"prompt": prompt,
"tools": kwargs["tools"] if "tools" in kwargs else None,
**{key: value for key, value in kwargs.items() if key not in ["tools"]},
}
# Ensure `tools` is a list if it's included in `kwargs`
if data["tools"] is not None and not isinstance(data["tools"], list):
data["tools"] = [data["tools"]]
_logger.info(f"Sending prompt to Cloudflare Workers AI: {data}")
response = requests.post(self.url, headers=headers, json=data)
tool_calls = _get_tool_calls_from_response(response)
ai_message = AIMessage(
content=str(response.json()), tool_calls=cast(List[ToolCall], tool_calls)
)
chat_generation = ChatGeneration(message=ai_message)
return ChatResult(generations=[chat_generation])
def bind_tools(
self,
tools: Sequence[Union[Dict[str, Any], Type, Callable[..., Any], BaseTool]],
**kwargs: Any,
) -> Runnable[LanguageModelInput, BaseMessage]:
"""Bind tools for use in model generation."""
formatted_tools = [convert_to_openai_tool(tool) for tool in tools]
return super().bind(tools=formatted_tools, **kwargs)
def with_structured_output(
self,
schema: Union[Dict, Type[BaseModel]],
*,
include_raw: bool = False,
method: Optional[Literal["json_mode", "function_calling"]] = "function_calling",
**kwargs: Any,
) -> Runnable[LanguageModelInput, Union[Dict, BaseModel]]:
"""Model wrapper that returns outputs formatted to match the given schema."""
if kwargs:
raise ValueError(f"Received unsupported arguments {kwargs}")
is_pydantic_schema = _is_pydantic_class(schema)
if method == "function_calling":
if schema is None:
raise ValueError(
"schema must be specified when method is 'function_calling'. "
"Received None."
)
tool_name = convert_to_openai_tool(schema)["function"]["name"]
llm = self.bind_tools([schema], tool_choice=tool_name)
if is_pydantic_schema:
output_parser: OutputParserLike = PydanticToolsParser(
tools=[schema], # type: ignore[list-item]
first_tool_only=True, # type: ignore[list-item]
)
else:
output_parser = JsonOutputKeyToolsParser(
key_name=tool_name, first_tool_only=True
)
elif method == "json_mode":
llm = self.bind(response_format={"type": "json_object"})
output_parser = (
PydanticOutputParser(pydantic_object=schema) # type: ignore[type-var, arg-type]
if is_pydantic_schema
else JsonOutputParser()
)
else:
raise ValueError(
f"Unrecognized method argument. Expected one of 'function_calling' or "
f"'json_mode'. Received: '{method}'"
)
if include_raw:
parser_assign = RunnablePassthrough.assign(
parsed=itemgetter("raw") | output_parser, parsing_error=lambda _: None
)
parser_none = RunnablePassthrough.assign(parsed=lambda _: None)
parser_with_fallback = parser_assign.with_fallbacks(
[parser_none], exception_key="parsing_error"
)
return RunnableMap(raw=llm) | parser_with_fallback
else:
return llm | output_parser
@property
def _llm_type(self) -> str:
"""Return the type of the LLM (for Langchain compatibility)."""
return "cloudflare-workers-ai"
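Because Workers AI text models take a single prompt, `_generate` flattens the converted message dicts into one newline-joined string. A self-contained sketch of that flattening, with hypothetical message content:

```python
# Hypothetical output of _convert_messages_to_cloudflare_messages.
formatted_messages = [
    {"role": "system", "content": "You are terse."},
    {"role": "user", "content": "What is 2+2?"},
    {
        "role": "assistant",
        "content": "",
        "tool_calls": [{"name": "add", "arguments": {"a": 2, "b": 2}}],
    },
]
# Same join expression as in _generate above.
prompt = "\n".join(
    f"role: {msg['role']}, content: {msg['content']}"
    + (f", tools: {msg['tool_calls']}" if "tool_calls" in msg else "")
    + (f", tool_call_id: {msg['tool_call_id']}" if "tool_call_id" in msg else "")
    for msg in formatted_messages
)

assert prompt.splitlines()[0] == "role: system, content: You are terse."
assert "tools:" in prompt.splitlines()[2]
```

Tool calls and tool-call IDs ride along as extra suffixes on their message's line, so the model sees the full conversation state in one string.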

View File

@ -4,6 +4,7 @@ from __future__ import annotations
import json
import logging
from json import JSONDecodeError
from typing import (
Any,
AsyncIterator,
@ -96,7 +97,10 @@ def _parse_tool_calling(tool_call: dict) -> ToolCall:
"""
name = tool_call["function"].get("name", "")
try:
args = json.loads(tool_call["function"]["arguments"])
except (JSONDecodeError, TypeError):
args = {}
id = tool_call.get("id")
return create_tool_call(name=name, args=args, id=id)
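The change above makes argument parsing lenient: malformed JSON or a missing payload degrades to empty args instead of raising. A small sketch of just that fallback (the `parse_args` helper is illustrative, not part of the module):

```python
import json
from json import JSONDecodeError


def parse_args(raw):
    """Mirror the lenient parsing above: bad JSON or None becomes {}."""
    try:
        return json.loads(raw)
    except (JSONDecodeError, TypeError):
        return {}


assert parse_args('{"city": "Paris"}') == {"city": "Paris"}
assert parse_args("not json") == {}   # JSONDecodeError swallowed
assert parse_args(None) == {}         # TypeError swallowed
```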

View File

@ -8,6 +8,9 @@ if TYPE_CHECKING:
from langchain_community.document_compressors.flashrank_rerank import (
FlashrankRerank,
)
from langchain_community.document_compressors.infinity_rerank import (
InfinityRerank,
)
from langchain_community.document_compressors.jina_rerank import (
JinaRerank,
)
@ -32,6 +35,7 @@ _module_lookup = {
"FlashrankRerank": "langchain_community.document_compressors.flashrank_rerank",
"DashScopeRerank": "langchain_community.document_compressors.dashscope_rerank",
"VolcengineRerank": "langchain_community.document_compressors.volcengine_rerank",
"InfinityRerank": "langchain_community.document_compressors.infinity_rerank",
}
@ -50,4 +54,5 @@ __all__ = [
"RankLLMRerank",
"DashScopeRerank",
"VolcengineRerank",
"InfinityRerank",
]

View File

@ -0,0 +1,135 @@
from __future__ import annotations
from copy import deepcopy
from typing import TYPE_CHECKING, Any, Dict, List, Optional, Sequence, Union
from langchain.retrievers.document_compressors.base import BaseDocumentCompressor
from langchain_core.callbacks.manager import Callbacks
from langchain_core.documents import Document
from pydantic import ConfigDict, model_validator
if TYPE_CHECKING:
from infinity_client.api.default import rerank
from infinity_client.client import Client
from infinity_client.models import RerankInput
else:
# Avoid pydantic annotation issues when actually instantiating
# while keeping this import optional
try:
from infinity_client.api.default import rerank
from infinity_client.client import Client
from infinity_client.models import RerankInput
except ImportError:
pass
DEFAULT_MODEL_NAME = "BAAI/bge-reranker-base"
DEFAULT_BASE_URL = "http://localhost:7997"
class InfinityRerank(BaseDocumentCompressor):
"""Document compressor that uses `Infinity Rerank API`."""
client: Optional[Client] = None
"""Infinity client to use for compressing documents."""
model: Optional[str] = None
"""Model to use for reranking."""
top_n: Optional[int] = 3
"""Number of documents to return."""
model_config = ConfigDict(
populate_by_name=True,
arbitrary_types_allowed=True,
extra="forbid",
)
@model_validator(mode="before")
@classmethod
def validate_environment(cls, values: Dict) -> Any:
"""Validate that python package exists in environment."""
if "client" in values:
return values
else:
try:
from infinity_client.client import Client
except ImportError:
raise ImportError(
"Could not import infinity_client python package. "
"Please install it with `pip install infinity_client`."
)
values["model"] = values.get("model", DEFAULT_MODEL_NAME)
values["client"] = Client(base_url=DEFAULT_BASE_URL)
return values
def rerank(
self,
documents: Sequence[Union[str, Document, dict]],
query: str,
*,
model: Optional[str] = None,
top_n: Optional[int] = -1,
) -> List[Dict[str, Any]]:
"""Returns a list of documents ordered by their relevance to the provided query.
Args:
query: The query to use for reranking.
documents: A sequence of documents to rerank.
model: The model to use for re-ranking. Defaults to self.model.
top_n : The number of results to return. If None returns all results.
Defaults to self.top_n.
""" # noqa: E501
if len(documents) == 0: # to avoid empty api call
return []
docs = [
doc.page_content if isinstance(doc, Document) else doc for doc in documents
]
model = model or self.model
input = RerankInput(
query=query,
documents=docs,
model=model,
)
results = rerank.sync(client=self.client, body=input)
if hasattr(results, "results"):
results = getattr(results, "results")
result_dicts = []
for res in results:
result_dicts.append(
{"index": res.index, "relevance_score": res.relevance_score}
)
result_dicts.sort(key=lambda x: x["relevance_score"], reverse=True)
top_n = top_n if (top_n is None or top_n > 0) else self.top_n
return result_dicts[:top_n]
def compress_documents(
self,
documents: Sequence[Document],
query: str,
callbacks: Optional[Callbacks] = None,
) -> Sequence[Document]:
"""
Compress documents using Infinity's rerank API.
Args:
documents: A sequence of documents to compress.
query: The query to use for compressing the documents.
callbacks: Callbacks to run during the compression process.
Returns:
A sequence of compressed documents.
"""
compressed = []
for res in self.rerank(documents, query):
doc = documents[res["index"]]
doc_copy = Document(doc.page_content, metadata=deepcopy(doc.metadata))
doc_copy.metadata["relevance_score"] = res["relevance_score"]
compressed.append(doc_copy)
return compressed
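The post-processing in `rerank` is independent of the Infinity API itself: score dicts are sorted by relevance and truncated. A sketch of that step with hypothetical scores, which avoids needing a running Infinity server:

```python
# Hypothetical per-document scores, shaped like the rerank result dicts above.
result_dicts = [
    {"index": 0, "relevance_score": 0.12},
    {"index": 1, "relevance_score": 0.93},
    {"index": 2, "relevance_score": 0.55},
]
# Same sort/truncate logic as InfinityRerank.rerank.
result_dicts.sort(key=lambda x: x["relevance_score"], reverse=True)
top_n = 2
top = result_dicts[:top_n]

assert [r["index"] for r in top] == [1, 2]  # best-scoring documents first
```

`compress_documents` then uses each surviving `index` to copy the matching `Document` and stamp its `relevance_score` into the metadata.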

View File

@ -3,26 +3,29 @@
from __future__ import annotations
import logging
import mimetypes
import os
import tempfile
from abc import abstractmethod
from enum import Enum
from pathlib import Path, PurePath
from typing import TYPE_CHECKING, Any, Dict, Iterable, List, Sequence, Union
from typing import TYPE_CHECKING, Any, Dict, Iterable, List, Optional, Sequence, Union
from pydantic import (
BaseModel,
Field,
FilePath,
PrivateAttr,
SecretStr,
)
from pydantic_settings import BaseSettings, SettingsConfigDict
from langchain_community.document_loaders.base import BaseLoader
from langchain_community.document_loaders.base import BaseBlobParser, BaseLoader
from langchain_community.document_loaders.blob_loaders.file_system import (
FileSystemBlobLoader,
)
from langchain_community.document_loaders.blob_loaders.schema import Blob
from langchain_community.document_loaders.parsers.generic import MimeTypeBasedParser
from langchain_community.document_loaders.parsers.registry import get_parser
if TYPE_CHECKING:
from O365 import Account
@ -46,24 +49,27 @@ class _O365TokenStorage(BaseSettings):
token_path: FilePath = Path.home() / ".credentials" / "o365_token.txt"
class _FileType(str, Enum):
DOC = "doc"
DOCX = "docx"
PDF = "pdf"
def fetch_mime_types(file_types: Sequence[_FileType]) -> Dict[str, str]:
def fetch_mime_types(file_types: Sequence[str]) -> Dict[str, str]:
"""Fetch the mime types for the specified file types."""
mime_types_mapping = {}
for file_type in file_types:
if file_type.value == "doc":
mime_types_mapping[file_type.value] = "application/msword"
elif file_type.value == "docx":
mime_types_mapping[file_type.value] = (
"application/vnd.openxmlformats-officedocument.wordprocessingml.document" # noqa: E501
)
elif file_type.value == "pdf":
mime_types_mapping[file_type.value] = "application/pdf"
for ext in file_types:
mime_type, _ = mimetypes.guess_type(f"file.{ext}")
if mime_type:
mime_types_mapping[ext] = mime_type
else:
raise ValueError(f"Unknown mimetype for extension {ext}")
return mime_types_mapping
def fetch_extensions(mime_types: Sequence[str]) -> Dict[str, str]:
"""Fetch the file extensions for the specified mime types."""
mime_types_mapping = {}
for mime_type in mime_types:
ext = mimetypes.guess_extension(mime_type)
if ext:
mime_types_mapping[ext[1:]] = mime_type # ignore leading `.`
else:
raise ValueError(f"Unknown mimetype {mime_type}")
return mime_types_mapping
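Both helpers above are thin inverse lookups over the standard library's `mimetypes` tables, replacing the old hard-coded doc/docx/pdf mapping:

```python
import mimetypes

# Extension -> mime type, as fetch_mime_types does per extension.
mime, _ = mimetypes.guess_type("file.pdf")
assert mime == "application/pdf"

# Mime type -> extension, as fetch_extensions does; note the leading
# dot, which fetch_extensions strips with ext[1:].
ext = mimetypes.guess_extension("application/pdf")
assert ext == ".pdf"
```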
@ -78,16 +84,82 @@ class O365BaseLoader(BaseLoader, BaseModel):
"""Number of bytes to retrieve from each api call to the server. int or 'auto'."""
recursive: bool = False
"""Should the loader recursively load subfolders?"""
handlers: Optional[Dict[str, Any]] = {}
"""
Provide custom handlers for MimeTypeBasedParser.
@property
@abstractmethod
def _file_types(self) -> Sequence[_FileType]:
"""Return supported file types."""
Pass a dictionary mapping either file extensions (like "doc", "pdf", etc.)
or MIME types (like "application/pdf", "text/plain", etc.) to parsers.
Note that you must use either file extensions or MIME types exclusively and
cannot mix them.
Do not include the leading dot for file extensions.
Example using file extensions:
```python
handlers = {
"doc": MsWordParser(),
"pdf": PDFMinerParser(),
"txt": TextParser()
}
```
Example using MIME types:
```python
handlers = {
"application/msword": MsWordParser(),
"application/pdf": PDFMinerParser(),
"text/plain": TextParser()
}
```
"""
_blob_parser: BaseBlobParser = PrivateAttr()
_file_types: Sequence[str] = PrivateAttr()
_mime_types: Dict[str, str] = PrivateAttr()
def __init__(self, **kwargs: Any) -> None:
super().__init__(**kwargs)
if self.handlers:
handler_keys = list(self.handlers.keys())
try:
# assume handlers.keys() are file extensions
self._mime_types = fetch_mime_types(handler_keys)
self._file_types = list(set(handler_keys))
mime_handlers = {
self._mime_types[extension]: handler
for extension, handler in self.handlers.items()
}
except ValueError:
try:
# assume handlers.keys() are mime types
self._mime_types = fetch_extensions(handler_keys)
self._file_types = list(set(self._mime_types.keys()))
mime_handlers = self.handlers
except ValueError:
raise ValueError(
"`handlers` keys must be either file extensions or mimetypes.\n"
f"{handler_keys} could not be interpreted as either.\n"
"File extensions and mimetypes cannot mix. "
"Use either one or the other"
)
self._blob_parser = MimeTypeBasedParser(
handlers=mime_handlers, fallback_parser=None
)
else:
self._blob_parser = get_parser("default")
if not isinstance(self._blob_parser, MimeTypeBasedParser):
raise TypeError(
'get_parser("default") was supposed to return MimeTypeBasedParser. '
f"It returned {type(self._blob_parser)}"
)
self._mime_types = fetch_extensions(list(self._blob_parser.handlers.keys()))
@property
def _fetch_mime_types(self) -> Dict[str, str]:
"""Return a dict of supported file types to corresponding mime types."""
return fetch_mime_types(self._file_types)
return self._mime_types
@property
@abstractmethod

View File

@ -18,6 +18,7 @@ class AzureAIDocumentIntelligenceLoader(BaseLoader):
api_key: str,
file_path: Optional[str] = None,
url_path: Optional[str] = None,
bytes_source: Optional[bytes] = None,
api_version: Optional[str] = None,
api_model: str = "prebuilt-layout",
mode: str = "markdown",
@ -41,10 +42,13 @@ class AzureAIDocumentIntelligenceLoader(BaseLoader):
The API key to use for DocumentIntelligenceClient construction.
file_path : Optional[str]
The path to the file that needs to be loaded.
Either file_path or url_path must be specified.
Either file_path, url_path or bytes_source must be specified.
url_path : Optional[str]
The URL to the file that needs to be loaded.
Either file_path or url_path must be specified.
Either file_path, url_path or bytes_source must be specified.
bytes_source : Optional[bytes]
The bytes array of the file that needs to be loaded.
Either file_path, url_path or bytes_source must be specified.
api_version: Optional[str]
The API version for DocumentIntelligenceClient. Setting None to use
the default value from `azure-ai-documentintelligence` package.
@ -73,10 +77,11 @@ class AzureAIDocumentIntelligenceLoader(BaseLoader):
"""
assert (
file_path is not None or url_path is not None
), "file_path or url_path must be provided"
file_path is not None or url_path is not None or bytes_source is not None
), "file_path, url_path or bytes_source must be provided"
self.file_path = file_path
self.url_path = url_path
self.bytes_source = bytes_source
self.parser = AzureAIDocumentIntelligenceParser( # type: ignore[misc]
api_endpoint=api_endpoint,
@ -90,9 +95,13 @@ class AzureAIDocumentIntelligenceLoader(BaseLoader):
def lazy_load(
self,
) -> Iterator[Document]:
"""Lazy load given path as pages."""
"""Lazy load the document as pages."""
if self.file_path is not None:
blob = Blob.from_path(self.file_path) # type: ignore[attr-defined]
yield from self.parser.parse(blob)
else:
elif self.url_path is not None:
yield from self.parser.parse_url(self.url_path) # type: ignore[arg-type]
elif self.bytes_source is not None:
yield from self.parser.parse_bytes(self.bytes_source)
else:
raise ValueError("No data source provided.")
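The `lazy_load` change adds `bytes_source` as a third, mutually prioritized data source. A trivial sketch of the dispatch order, without the Azure client (the `pick_source` helper is illustrative only):

```python
def pick_source(file_path=None, url_path=None, bytes_source=None):
    """Mirror the source-selection order in lazy_load above."""
    if file_path is not None:
        return "path"
    elif url_path is not None:
        return "url"
    elif bytes_source is not None:
        return "bytes"
    raise ValueError("No data source provided.")


assert pick_source(file_path="report.pdf") == "path"
assert pick_source(bytes_source=b"%PDF-1.7") == "bytes"
```

When more than one source is set, `file_path` wins, then `url_path`, then `bytes_source`; the constructor only asserts that at least one is present.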

View File

@ -1,94 +1,19 @@
"""Loads data from OneDrive"""
from typing import Any
from __future__ import annotations
import logging
from typing import TYPE_CHECKING, Iterator, List, Optional, Sequence, Union
from langchain_core.documents import Document
from pydantic import Field
from langchain_community.document_loaders.base_o365 import (
O365BaseLoader,
_FileType,
)
from langchain_community.document_loaders.parsers.registry import get_parser
if TYPE_CHECKING:
from O365.drive import Drive, Folder
logger = logging.getLogger(__name__)
from langchain_community.document_loaders import SharePointLoader
class OneDriveLoader(O365BaseLoader):
"""Load from `Microsoft OneDrive`."""
class OneDriveLoader(SharePointLoader):
"""
Load documents from Microsoft OneDrive.
Uses `SharePointLoader` under the hood.
"""
drive_id: str = Field(...)
"""The ID of the OneDrive drive to load data from."""
folder_path: Optional[str] = None
""" The path to the folder to load data from."""
object_ids: Optional[List[str]] = None
""" The IDs of the objects to load data from."""
@property
def _file_types(self) -> Sequence[_FileType]:
"""Return supported file types."""
return _FileType.DOC, _FileType.DOCX, _FileType.PDF
@property
def _scopes(self) -> List[str]:
"""Return required scopes."""
return ["offline_access", "Files.Read.All"]
def _get_folder_from_path(self, drive: Drive) -> Union[Folder, Drive]:
"""
Returns the folder or drive object located at the
specified path relative to the given drive.
Args:
drive (Drive): The root drive from which the folder path is relative.
Returns:
Union[Folder, Drive]: The folder or drive object
located at the specified path.
Raises:
FileNotFoundError: If the path does not exist.
"""
subfolder_drive = drive
if self.folder_path is None:
return subfolder_drive
subfolders = [f for f in self.folder_path.split("/") if f != ""]
if len(subfolders) == 0:
return subfolder_drive
items = subfolder_drive.get_items()
for subfolder in subfolders:
try:
subfolder_drive = list(filter(lambda x: subfolder in x.name, items))[0]
items = subfolder_drive.get_items()
except (IndexError, AttributeError):
raise FileNotFoundError("Path {} does not exist.".format(self.folder_path))
return subfolder_drive
def lazy_load(self) -> Iterator[Document]:
"""Load documents lazily. Use this when working at a large scale."""
try:
from O365.drive import Drive
except ImportError:
raise ImportError(
"O365 package not found, please install it with `pip install o365`"
)
drive = self._auth().storage().get_drive(self.drive_id)
if not isinstance(drive, Drive):
raise ValueError(f"There isn't a Drive with id {self.drive_id}.")
blob_parser = get_parser("default")
if self.folder_path:
folder = self._get_folder_from_path(drive)
for blob in self._load_from_folder(folder):
yield from blob_parser.lazy_parse(blob)
if self.object_ids:
for blob in self._load_from_object_ids(drive, self.object_ids):
yield from blob_parser.lazy_parse(blob)
def __init__(self, **kwargs: Any) -> None:
kwargs["document_library_id"] = kwargs["drive_id"]
super().__init__(**kwargs)
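The refactor above collapses `OneDriveLoader` into a thin subclass of `SharePointLoader` by remapping its `drive_id` argument onto the base class's `document_library_id`. A minimal sketch of that kwargs-remapping pattern (the class names here are stand-ins, not the actual loaders):

```python
from typing import Any


class LibraryLoader:
    """Stand-in for a base loader keyed by ``document_library_id``."""

    def __init__(self, **kwargs: Any) -> None:
        self.document_library_id = kwargs["document_library_id"]


class DriveLoader(LibraryLoader):
    """Thin subclass that accepts ``drive_id`` and delegates to the base."""

    def __init__(self, **kwargs: Any) -> None:
        # Remap the subclass-specific kwarg onto the base class's name.
        kwargs["document_library_id"] = kwargs.pop("drive_id")
        super().__init__(**kwargs)


loader = DriveLoader(drive_id="drive-123")
print(loader.document_library_id)  # -> drive-123
```

The real loader keeps the original `drive_id` key as well; `pop` is used here only to keep the sketch tidy.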

View File

@ -109,3 +109,21 @@ class AzureAIDocumentIntelligenceParser(BaseBlobParser):
yield from self._generate_docs_page(result)
else:
raise ValueError(f"Invalid mode: {self.mode}")
def parse_bytes(self, bytes_source: bytes) -> Iterator[Document]:
from azure.ai.documentintelligence.models import AnalyzeDocumentRequest
poller = self.client.begin_analyze_document(
self.api_model,
analyze_request=AnalyzeDocumentRequest(bytes_source=bytes_source),
# content_type="application/octet-stream",
output_content_format="markdown" if self.mode == "markdown" else "text",
)
result = poller.result()
if self.mode in ["single", "markdown"]:
yield from self._generate_docs_single(result)
elif self.mode in ["page"]:
yield from self._generate_docs_page(result)
else:
raise ValueError(f"Invalid mode: {self.mode}")
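`parse_bytes` above mirrors the existing parse methods: one analysis call, then a dispatch on `self.mode`. A pure-Python sketch of that dispatch, with a string standing in for the Azure analysis result (the real SDK types are not used here):

```python
from typing import Iterator


def docs_from_result(result: str, mode: str) -> Iterator[str]:
    """Dispatch a parsed result to single-document or page-wise output."""
    if mode in ("single", "markdown"):
        # one document containing the whole result
        yield result
    elif mode == "page":
        # one document per page; form feeds stand in for page boundaries
        yield from result.split("\f")
    else:
        raise ValueError(f"Invalid mode: {mode}")


print(list(docs_from_result("page1\fpage2", "page")))  # -> ['page1', 'page2']
```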

View File

@ -945,5 +945,82 @@ class DocumentIntelligenceLoader(BasePDFLoader):
yield from self.parser.parse(blob)
class ZeroxPDFLoader(BasePDFLoader):
"""
Document loader utilizing the Zerox library:
https://github.com/getomni-ai/zerox
Zerox converts a PDF document to a series of images (page-wise) and
uses a vision-capable LLM to generate a Markdown representation.
Zerox utilizes async operations. Therefore, when using this loader
inside a Jupyter Notebook (or any environment with a running event loop)
you will need to:
```python
import nest_asyncio
nest_asyncio.apply()
```
"""
def __init__(
self,
file_path: Union[str, Path],
model: str = "gpt-4o-mini",
**zerox_kwargs: Any,
) -> None:
"""
Initialize the parser with arguments to be passed to the zerox function.
Make sure to set necessary environment variables such as API key, endpoint, etc.
Check the zerox documentation for the list of necessary environment variables for
any given model.
Args:
    file_path:
        Path or URL of the PDF file.
    model:
        Vision-capable model to use. Defaults to "gpt-4o-mini".
        Hosted models are passed in the format "<provider>/<model>"
        Examples: "azure/gpt-4o-mini", "vertex_ai/gemini-1.5-flash-001"
        See more details in the zerox documentation.
    **zerox_kwargs:
        Arguments specific to the zerox function.
        See the detailed list of arguments in the zerox repository:
        https://github.com/getomni-ai/zerox/blob/main/py_zerox/pyzerox/core/zerox.py#L25
""" # noqa: E501
super().__init__(file_path=file_path)
self.zerox_kwargs = zerox_kwargs
self.model = model
def lazy_load(self) -> Iterator[Document]:
"""
Loads documents from a PDF utilizing the zerox library:
https://github.com/getomni-ai/zerox
Returns:
Iterator[Document]: An iterator over parsed Document instances.
"""
import asyncio
from pyzerox import zerox
# Directly call asyncio.run to execute zerox synchronously
zerox_output = asyncio.run(
zerox(file_path=self.file_path, model=self.model, **self.zerox_kwargs)
)
# Convert zerox output to Document instances and yield them
if len(zerox_output.pages) > 0:
num_pages = zerox_output.pages[-1].page
for page in zerox_output.pages:
yield Document(
page_content=page.content,
metadata={
"source": self.source,
"page": page.page,
"num_pages": num_pages,
},
)
# Legacy: only for backwards compatibility. Use PyPDFLoader instead
PagedPDFSplitter = PyPDFLoader
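The page-to-`Document` conversion at the end of `lazy_load` takes `num_pages` from the last page's index. A self-contained sketch of that conversion, with a dataclass standing in for the real pyzerox page objects:

```python
from dataclasses import dataclass
from typing import Iterator, List


@dataclass
class Page:
    """Stand-in for a pyzerox page object."""

    page: int
    content: str


def pages_to_docs(pages: List[Page], source: str) -> Iterator[dict]:
    """Yield one document dict per page, recording the total page count."""
    if not pages:
        return
    # the last page's 1-based index doubles as the page count
    num_pages = pages[-1].page
    for p in pages:
        yield {
            "page_content": p.content,
            "metadata": {"source": source, "page": p.page, "num_pages": num_pages},
        }


docs = list(pages_to_docs([Page(1, "a"), Page(2, "b")], "file.pdf"))
print(docs[0]["metadata"])  # -> {'source': 'file.pdf', 'page': 1, 'num_pages': 2}
```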

View File

@ -4,7 +4,7 @@ from __future__ import annotations
import json
from pathlib import Path
from typing import Any, Iterator, List, Optional, Sequence
from typing import Any, Dict, Iterator, List, Optional
import requests # type: ignore
from langchain_core.document_loaders import BaseLoader
@ -13,9 +13,7 @@ from pydantic import Field
from langchain_community.document_loaders.base_o365 import (
O365BaseLoader,
_FileType,
)
from langchain_community.document_loaders.parsers.registry import get_parser
class SharePointLoader(O365BaseLoader, BaseLoader):
@ -36,14 +34,6 @@ class SharePointLoader(O365BaseLoader, BaseLoader):
load_extended_metadata: Optional[bool] = False
""" Whether to load extended metadata. Size, Owner and full_path."""
@property
def _file_types(self) -> Sequence[_FileType]:
"""Return supported file types.
Returns:
A sequence of supported file types.
"""
return _FileType.DOC, _FileType.DOCX, _FileType.PDF
@property
def _scopes(self) -> List[str]:
"""Return required scopes.
@ -67,7 +57,6 @@ class SharePointLoader(O365BaseLoader, BaseLoader):
drive = self._auth().storage().get_drive(self.document_library_id)
if not isinstance(drive, Drive):
raise ValueError(f"There isn't a Drive with id {self.document_library_id}.")
blob_parser = get_parser("default")
if self.folder_path:
target_folder = drive.get_item_by_path(self.folder_path)
if not isinstance(target_folder, Folder):
@ -79,7 +68,7 @@ class SharePointLoader(O365BaseLoader, BaseLoader):
if self.load_extended_metadata is True:
extended_metadata = self.get_extended_metadata(file_id)
extended_metadata.update({"source_full_url": target_folder.web_url})
for parsed_blob in blob_parser.lazy_parse(blob):
for parsed_blob in self._blob_parser.lazy_parse(blob):
if self.load_auth is True:
parsed_blob.metadata["authorized_identities"] = auth_identities
if self.load_extended_metadata is True:
@ -96,7 +85,7 @@ class SharePointLoader(O365BaseLoader, BaseLoader):
if self.load_extended_metadata is True:
extended_metadata = self.get_extended_metadata(file_id)
extended_metadata.update({"source_full_url": target_folder.web_url})
for parsed_blob in blob_parser.lazy_parse(blob):
for parsed_blob in self._blob_parser.lazy_parse(blob):
if self.load_auth is True:
parsed_blob.metadata["authorized_identities"] = auth_identities
if self.load_extended_metadata is True:
@ -109,7 +98,7 @@ class SharePointLoader(O365BaseLoader, BaseLoader):
auth_identities = self.authorized_identities(file_id)
if self.load_extended_metadata is True:
extended_metadata = self.get_extended_metadata(file_id)
for parsed_blob in blob_parser.lazy_parse(blob):
for parsed_blob in self._blob_parser.lazy_parse(blob):
if self.load_auth is True:
parsed_blob.metadata["authorized_identities"] = auth_identities
if self.load_extended_metadata is True:
@ -126,7 +115,7 @@ class SharePointLoader(O365BaseLoader, BaseLoader):
auth_identities = self.authorized_identities(file_id)
if self.load_extended_metadata is True:
extended_metadata = self.get_extended_metadata(file_id)
for blob_part in blob_parser.lazy_parse(blob):
for blob_part in self._blob_parser.lazy_parse(blob):
blob_part.metadata.update(blob.metadata)
if self.load_auth is True:
blob_part.metadata["authorized_identities"] = auth_identities
@ -182,7 +171,7 @@ class SharePointLoader(O365BaseLoader, BaseLoader):
data = json.loads(s)
return data
def get_extended_metadata(self, file_id: str) -> dict:
def get_extended_metadata(self, file_id: str) -> Dict:
"""
Retrieve extended metadata for a file in SharePoint.
As of today, following fields are supported in the extended metadata:

View File

@ -140,6 +140,9 @@ if TYPE_CHECKING:
GmailSearch,
GmailSendMessage,
)
from langchain_community.tools.google_books import (
GoogleBooksQueryRun,
)
from langchain_community.tools.google_cloud.texttospeech import (
GoogleCloudTextToSpeechTool,
)
@ -407,6 +410,7 @@ __all__ = [
"GmailGetThread",
"GmailSearch",
"GmailSendMessage",
"GoogleBooksQueryRun",
"GoogleCloudTextToSpeechTool",
"GooglePlacesTool",
"GoogleSearchResults",
@ -559,6 +563,7 @@ _module_lookup = {
"GmailGetThread": "langchain_community.tools.gmail",
"GmailSearch": "langchain_community.tools.gmail",
"GmailSendMessage": "langchain_community.tools.gmail",
"GoogleBooksQueryRun": "langchain_community.tools.google_books",
"GoogleCloudTextToSpeechTool": "langchain_community.tools.google_cloud.texttospeech", # noqa: E501
"GooglePlacesTool": "langchain_community.tools.google_places.tool",
"GoogleSearchResults": "langchain_community.tools.google_search.tool",

View File

@ -0,0 +1,38 @@
"""Tool for the Google Books API."""
from typing import Optional, Type
from langchain_core.callbacks import CallbackManagerForToolRun
from langchain_core.tools import BaseTool
from pydantic import BaseModel, Field
from langchain_community.utilities.google_books import GoogleBooksAPIWrapper
class GoogleBooksQueryInput(BaseModel):
"""Input for the GoogleBooksQuery tool."""
query: str = Field(description="query to look up on google books")
class GoogleBooksQueryRun(BaseTool): # type: ignore[override]
"""Tool that searches the Google Books API."""
name: str = "GoogleBooks"
description: str = (
"A wrapper around Google Books. "
"Useful for when you need to answer general inquiries about "
"books on certain topics and generate recommendations based "
"on keywords. "
"Input should be a query string."
)
api_wrapper: GoogleBooksAPIWrapper
args_schema: Type[BaseModel] = GoogleBooksQueryInput
def _run(
self,
query: str,
run_manager: Optional[CallbackManagerForToolRun] = None,
) -> str:
"""Use the Google Books tool."""
return self.api_wrapper.run(query)

View File

@ -45,6 +45,9 @@ if TYPE_CHECKING:
from langchain_community.utilities.golden_query import (
GoldenQueryAPIWrapper,
)
from langchain_community.utilities.google_books import (
GoogleBooksAPIWrapper,
)
from langchain_community.utilities.google_finance import (
GoogleFinanceAPIWrapper,
)
@ -185,6 +188,7 @@ __all__ = [
"DriaAPIWrapper",
"DuckDuckGoSearchAPIWrapper",
"GoldenQueryAPIWrapper",
"GoogleBooksAPIWrapper",
"GoogleFinanceAPIWrapper",
"GoogleJobsAPIWrapper",
"GoogleLensAPIWrapper",
@ -248,6 +252,7 @@ _module_lookup = {
"DriaAPIWrapper": "langchain_community.utilities.dria_index",
"DuckDuckGoSearchAPIWrapper": "langchain_community.utilities.duckduckgo_search",
"GoldenQueryAPIWrapper": "langchain_community.utilities.golden_query",
"GoogleBooksAPIWrapper": "langchain_community.utilities.google_books",
"GoogleFinanceAPIWrapper": "langchain_community.utilities.google_finance",
"GoogleJobsAPIWrapper": "langchain_community.utilities.google_jobs",
"GoogleLensAPIWrapper": "langchain_community.utilities.google_lens",

View File

@ -0,0 +1,92 @@
"""Util that calls Google Books."""
from typing import Dict, List, Optional
import requests
from langchain_core.utils import get_from_dict_or_env
from pydantic import BaseModel, ConfigDict, model_validator
GOOGLE_BOOKS_MAX_ITEM_SIZE = 5
GOOGLE_BOOKS_API_URL = "https://www.googleapis.com/books/v1/volumes"
class GoogleBooksAPIWrapper(BaseModel):
"""Wrapper around Google Books API.
To use, you should have a Google Books API key available.
This wrapper will use the Google Books API to conduct searches and
fetch books based on a query passed in by the agents. By default,
it will return the top-k results.
The response for each book will contain the book title, author name, summary, and
a source link.
"""
google_books_api_key: Optional[str] = None
top_k_results: int = GOOGLE_BOOKS_MAX_ITEM_SIZE
model_config = ConfigDict(
extra="forbid",
)
@model_validator(mode="before")
@classmethod
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that api key exists in environment."""
google_books_api_key = get_from_dict_or_env(
values, "google_books_api_key", "GOOGLE_BOOKS_API_KEY"
)
values["google_books_api_key"] = google_books_api_key
return values
def run(self, query: str) -> str:
# build URL based on API key, query, and max results
params = (
("q", query),
("maxResults", self.top_k_results),
("key", self.google_books_api_key),
)
# send request
response = requests.get(GOOGLE_BOOKS_API_URL, params=params)
json = response.json()
# some error handling
if response.status_code != 200:
code = response.status_code
error = json.get("error", {}).get("message", "Internal failure")
return f"Unable to retrieve books, got status code {code}: {error}"
# send back data
return self._format(query, json.get("items", []))
def _format(self, query: str, books: List) -> str:
if not books:
return f"Sorry, no books could be found for your query: {query}"
start = f"Here are {len(books)} suggestions for books related to {query}:"
results = []
results.append(start)
i = 1
for book in books:
info = book["volumeInfo"]
title = info["title"]
authors = self._format_authors(info["authors"])
summary = info["description"]
source = info["infoLink"]
desc = f'{i}. "{title}" by {authors}: {summary}\n'
desc += f"You can read more at {source}"
results.append(desc)
i += 1
return "\n\n".join(results)
def _format_authors(self, authors: List) -> str:
if len(authors) == 1:
return authors[0]
return "{} and {}".format(", ".join(authors[:-1]), authors[-1])
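`_format_authors` joins all but the last author with commas and attaches the last with "and". A quick standalone check of that join logic, copied out as a plain function:

```python
from typing import List


def format_authors(authors: List[str]) -> str:
    """Join author names as 'A, B and C' (single names pass through)."""
    if len(authors) == 1:
        return authors[0]
    return "{} and {}".format(", ".join(authors[:-1]), authors[-1])


print(format_authors(["Knuth"]))                 # -> Knuth
print(format_authors(["Kernighan", "Ritchie"]))  # -> Kernighan and Ritchie
print(format_authors(["A", "B", "C"]))           # -> A, B and C
```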

View File

@ -753,6 +753,9 @@ class Chroma(VectorStore):
embeddings = self._embedding_function.embed_documents(text)
if hasattr(
self._collection._client,
"get_max_batch_size", # for Chroma 0.5.1 and above
) or hasattr(
self._collection._client, "max_batch_size"
): # for Chroma 0.4.10 and above
from chromadb.utils.batch_utils import create_batches
@ -824,7 +827,10 @@ class Chroma(VectorStore):
ids = [str(uuid.uuid4()) for _ in texts]
if hasattr(
chroma_collection._client, # type: ignore[has-type]
"max_batch_size", # type: ignore[has-type]
"get_max_batch_size", # for Chroma 0.5.1 and above
) or hasattr(
chroma_collection._client, # type: ignore[has-type]
"max_batch_size",
): # for Chroma 0.4.10 and above
from chromadb.utils.batch_utils import create_batches
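The hunk above feature-detects the batching API across Chroma versions: `get_max_batch_size` (0.5.1+) or `max_batch_size` (0.4.10+). A minimal sketch of that `hasattr`-based capability check, with stand-in client objects rather than real Chroma clients:

```python
class NewClient:
    """Stand-in for a Chroma >= 0.5.1 client."""

    def get_max_batch_size(self) -> int:
        return 41666


class OldClient:
    """Stand-in for a Chroma 0.4.10 - 0.5.0 client."""

    max_batch_size = 41666


def supports_batching(client: object) -> bool:
    """True if the client exposes either version of the batching API."""
    return hasattr(client, "get_max_batch_size") or hasattr(client, "max_batch_size")


print(supports_batching(NewClient()), supports_batching(object()))  # -> True False
```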

View File

@ -2,7 +2,7 @@ from __future__ import annotations
import uuid
import warnings
from typing import Any, Callable, Dict, Iterable, List, Optional, Tuple
from typing import TYPE_CHECKING, Any, Callable, Dict, Iterable, List, Optional, Tuple
import numpy as np
from langchain_core.documents import Document
@ -23,57 +23,18 @@ SCRIPT_SCORING_SEARCH = "script_scoring"
PAINLESS_SCRIPTING_SEARCH = "painless_scripting"
MATCH_ALL_QUERY = {"match_all": {}} # type: Dict
def _import_opensearch() -> Any:
"""Import OpenSearch if available, otherwise raise error."""
try:
from opensearchpy import OpenSearch
except ImportError:
raise ImportError(IMPORT_OPENSEARCH_PY_ERROR)
return OpenSearch
if TYPE_CHECKING:
from opensearchpy import AsyncOpenSearch, OpenSearch
def _import_async_opensearch() -> Any:
"""Import AsyncOpenSearch if available, otherwise raise error."""
try:
from opensearchpy import AsyncOpenSearch
except ImportError:
raise ImportError(IMPORT_ASYNC_OPENSEARCH_PY_ERROR)
return AsyncOpenSearch
def _import_bulk() -> Any:
"""Import bulk if available, otherwise raise error."""
try:
from opensearchpy.helpers import bulk
except ImportError:
raise ImportError(IMPORT_OPENSEARCH_PY_ERROR)
return bulk
def _import_async_bulk() -> Any:
"""Import async_bulk if available, otherwise raise error."""
try:
from opensearchpy.helpers import async_bulk
except ImportError:
raise ImportError(IMPORT_ASYNC_OPENSEARCH_PY_ERROR)
return async_bulk
def _import_not_found_error() -> Any:
"""Import not found error if available, otherwise raise error."""
try:
from opensearchpy.exceptions import NotFoundError
except ImportError:
raise ImportError(IMPORT_OPENSEARCH_PY_ERROR)
return NotFoundError
def _get_opensearch_client(opensearch_url: str, **kwargs: Any) -> Any:
def _get_opensearch_client(opensearch_url: str, **kwargs: Any) -> OpenSearch:
"""Get OpenSearch client from the opensearch_url, otherwise raise error."""
try:
opensearch = _import_opensearch()
client = opensearch(opensearch_url, **kwargs)
from opensearchpy import OpenSearch
client = OpenSearch(opensearch_url, **kwargs)
except ImportError:
raise ImportError(IMPORT_OPENSEARCH_PY_ERROR)
except ValueError as e:
raise ImportError(
f"OpenSearch client string provided is not in proper format. "
@ -82,11 +43,14 @@ def _get_opensearch_client(opensearch_url: str, **kwargs: Any) -> Any:
return client
def _get_async_opensearch_client(opensearch_url: str, **kwargs: Any) -> Any:
def _get_async_opensearch_client(opensearch_url: str, **kwargs: Any) -> AsyncOpenSearch:
"""Get AsyncOpenSearch client from the opensearch_url, otherwise raise error."""
try:
async_opensearch = _import_async_opensearch()
client = async_opensearch(opensearch_url, **kwargs)
from opensearchpy import AsyncOpenSearch
client = AsyncOpenSearch(opensearch_url, **kwargs)
except ImportError:
raise ImportError(IMPORT_ASYNC_OPENSEARCH_PY_ERROR)
except ValueError as e:
raise ImportError(
f"AsyncOpenSearch client string provided is not in proper format. "
@ -127,7 +91,7 @@ def _is_aoss_enabled(http_auth: Any) -> bool:
def _bulk_ingest_embeddings(
client: Any,
client: OpenSearch,
index_name: str,
embeddings: List[List[float]],
texts: Iterable[str],
@ -142,16 +106,19 @@ def _bulk_ingest_embeddings(
"""Bulk Ingest Embeddings into given index."""
if not mapping:
mapping = dict()
try:
from opensearchpy.exceptions import NotFoundError
from opensearchpy.helpers import bulk
except ImportError:
raise ImportError(IMPORT_OPENSEARCH_PY_ERROR)
bulk = _import_bulk()
not_found_error = _import_not_found_error()
requests = []
return_ids = []
mapping = mapping
try:
client.indices.get(index=index_name)
except not_found_error:
except NotFoundError:
client.indices.create(index=index_name, body=mapping)
for i, text in enumerate(texts):
@ -177,7 +144,7 @@ def _bulk_ingest_embeddings(
async def _abulk_ingest_embeddings(
client: Any,
client: AsyncOpenSearch,
index_name: str,
embeddings: List[List[float]],
texts: Iterable[str],
@ -193,14 +160,18 @@ async def _abulk_ingest_embeddings(
if not mapping:
mapping = dict()
async_bulk = _import_async_bulk()
not_found_error = _import_not_found_error()
try:
from opensearchpy.exceptions import NotFoundError
from opensearchpy.helpers import async_bulk
except ImportError:
raise ImportError(IMPORT_ASYNC_OPENSEARCH_PY_ERROR)
requests = []
return_ids = []
try:
await client.indices.get(index=index_name)
except not_found_error:
except NotFoundError:
await client.indices.create(index=index_name, body=mapping)
for i, text in enumerate(texts):
@ -230,7 +201,7 @@ async def _abulk_ingest_embeddings(
def _default_scripting_text_mapping(
dim: int,
vector_field: str = "vector_field",
) -> Dict:
) -> Dict[str, Any]:
"""For Painless Scripting or Script Scoring,the default mapping to create index."""
return {
"mappings": {
@ -249,7 +220,7 @@ def _default_text_mapping(
ef_construction: int = 512,
m: int = 16,
vector_field: str = "vector_field",
) -> Dict:
) -> Dict[str, Any]:
"""For Approximate k-NN Search, this is the default mapping to create index."""
return {
"settings": {"index": {"knn": True, "knn.algo_param.ef_search": ef_search}},
@ -275,7 +246,7 @@ def _default_approximate_search_query(
k: int = 4,
vector_field: str = "vector_field",
score_threshold: Optional[float] = 0.0,
) -> Dict:
) -> Dict[str, Any]:
"""For Approximate k-NN Search, this is the default query."""
return {
"size": k,
@ -291,7 +262,7 @@ def _approximate_search_query_with_boolean_filter(
vector_field: str = "vector_field",
subquery_clause: str = "must",
score_threshold: Optional[float] = 0.0,
) -> Dict:
) -> Dict[str, Any]:
"""For Approximate k-NN Search, with Boolean Filter."""
return {
"size": k,
@ -313,7 +284,7 @@ def _approximate_search_query_with_efficient_filter(
k: int = 4,
vector_field: str = "vector_field",
score_threshold: Optional[float] = 0.0,
) -> Dict:
) -> Dict[str, Any]:
"""For Approximate k-NN Search, with Efficient Filter for Lucene and
Faiss Engines."""
search_query = _default_approximate_search_query(
@ -330,7 +301,7 @@ def _default_script_query(
pre_filter: Optional[Dict] = None,
vector_field: str = "vector_field",
score_threshold: Optional[float] = 0.0,
) -> Dict:
) -> Dict[str, Any]:
"""For Script Scoring Search, this is the default query."""
if not pre_filter:
@ -376,7 +347,7 @@ def _default_painless_scripting_query(
pre_filter: Optional[Dict] = None,
vector_field: str = "vector_field",
score_threshold: Optional[float] = 0.0,
) -> Dict:
) -> Dict[str, Any]:
"""For Painless Scripting Search, this is the default query."""
if not pre_filter:
@ -692,7 +663,10 @@ class OpenSearchVectorSearch(VectorStore):
refresh_indices: Whether to refresh the index
after deleting documents. Defaults to True.
"""
bulk = _import_bulk()
try:
from opensearchpy.helpers import bulk
except ImportError:
raise ImportError(IMPORT_OPENSEARCH_PY_ERROR)
body = []
@ -902,6 +876,7 @@ class OpenSearchVectorSearch(VectorStore):
if metadata_field == "*" or metadata_field not in hit["_source"]
else hit["_source"][metadata_field]
),
id=hit["_id"],
),
hit["_score"],
)
@ -1099,6 +1074,7 @@ class OpenSearchVectorSearch(VectorStore):
Document(
page_content=results[i]["_source"][text_field],
metadata=results[i]["_source"][metadata_field],
id=results[i]["_id"],
)
for i in mmr_selected
]
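The refactor above replaces the `_import_*` helpers with the standard two-part pattern: real imports under `TYPE_CHECKING` for annotations only, and a local try/except import at the point of use for a friendly runtime error. A minimal sketch of the pattern, using the stdlib `json` module so it runs anywhere (the module name is illustrative, not opensearch-py):

```python
from __future__ import annotations

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # evaluated only by type checkers, so no runtime dependency
    from json import JSONDecoder

IMPORT_ERROR = "Could not import json; please install it."


def get_decoder() -> JSONDecoder:
    """Import the dependency lazily, raising a friendly error if missing."""
    try:
        from json import JSONDecoder
    except ImportError:
        raise ImportError(IMPORT_ERROR)
    return JSONDecoder()


print(type(get_decoder()).__name__)  # -> JSONDecoder
```

This keeps opensearch-py optional at import time while still giving `OpenSearch`/`AsyncOpenSearch` annotations to type checkers.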

View File

@ -0,0 +1,61 @@
"""
Test Memcached llm cache functionality. Requires a running instance of Memcached on
localhost's default port (11211) and pymemcache
"""
import pytest
from langchain.globals import get_llm_cache, set_llm_cache
from langchain_core.outputs import Generation, LLMResult
from langchain_community.cache import MemcachedCache
from tests.unit_tests.llms.fake_llm import FakeLLM
DEFAULT_MEMCACHED_URL = "localhost"
@pytest.mark.requires("pymemcache")
def test_memcached_cache() -> None:
"""Test general Memcached caching"""
from pymemcache import Client
set_llm_cache(MemcachedCache(Client(DEFAULT_MEMCACHED_URL)))
llm = FakeLLM()
params = llm.dict()
params["stop"] = None
llm_string = str(sorted([(k, v) for k, v in params.items()]))
get_llm_cache().update("foo", llm_string, [Generation(text="fizz")])
output = llm.generate(["foo"])
expected_output = LLMResult(
generations=[[Generation(text="fizz")]],
llm_output={},
)
assert output == expected_output
# clear the cache
get_llm_cache().clear()
@pytest.mark.requires("pymemcache")
def test_memcached_cache_flush() -> None:
"""Test flushing Memcached cache"""
from pymemcache import Client
set_llm_cache(MemcachedCache(Client(DEFAULT_MEMCACHED_URL)))
llm = FakeLLM()
params = llm.dict()
params["stop"] = None
llm_string = str(sorted([(k, v) for k, v in params.items()]))
get_llm_cache().update("foo", llm_string, [Generation(text="fizz")])
output = llm.generate(["foo"])
expected_output = LLMResult(
generations=[[Generation(text="fizz")]],
llm_output={},
)
assert output == expected_output
# clear the cache
get_llm_cache().clear(delay=0, noreply=False)
# After cache has been cleared, the result shouldn't be the same
output = llm.generate(["foo"])
assert output != expected_output
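Both tests build the cache key the same way: pin `stop` to `None`, then serialize the sorted parameter pairs into a deterministic string. A standalone sketch of that key construction:

```python
def llm_cache_key(params: dict) -> str:
    """Deterministic cache key: sorted (key, value) pairs, stringified."""
    params = dict(params, stop=None)  # pin stop so keys match at lookup time
    return str(sorted((k, v) for k, v in params.items()))


key = llm_cache_key({"model": "fake", "temperature": 0.0})
print(key)  # -> [('model', 'fake'), ('stop', None), ('temperature', 0.0)]
```

Sorting makes the key independent of dict insertion order, so update and lookup agree.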

View File

@ -0,0 +1,32 @@
from langchain_core.documents import Document
from langchain_community.document_compressors.infinity_rerank import (
InfinityRerank,
)
def test_rerank() -> None:
reranker = InfinityRerank()
docs = [
Document(
page_content=(
"This is a document not related to the python package infinity_emb, "
"hence..."
)
),
Document(page_content="Paris is in France!"),
Document(
page_content=(
"infinity_emb is a package for sentence embeddings and rerankings using"
" transformer models in Python!"
)
),
Document(page_content="random text for nothing"),
]
compressed = reranker.compress_documents(
query="What is the python package infinity_emb?",
documents=docs,
)
assert len(compressed) == 3, "default top_n is 3"
assert compressed[0].page_content == docs[2].page_content, "rerank works"

View File

@ -15,15 +15,19 @@ from tests.integration_tests.vectorstores.fake_embeddings import (
DEFAULT_OPENSEARCH_URL = "http://localhost:9200"
texts = ["foo", "bar", "baz"]
ids = ["id_foo", "id_bar", "id_baz"]
def test_opensearch() -> None:
"""Test end to end indexing and search using Approximate Search."""
docsearch = OpenSearchVectorSearch.from_texts(
texts, FakeEmbeddings(), opensearch_url=DEFAULT_OPENSEARCH_URL
texts,
FakeEmbeddings(),
opensearch_url=DEFAULT_OPENSEARCH_URL,
ids=ids,
)
output = docsearch.similarity_search("foo", k=1)
assert output == [Document(page_content="foo")]
assert output == [Document(page_content="foo", id="id_foo")]
def test_similarity_search_with_score() -> None:
@ -34,11 +38,12 @@ def test_similarity_search_with_score() -> None:
FakeEmbeddings(),
metadatas=metadatas,
opensearch_url=DEFAULT_OPENSEARCH_URL,
ids=ids,
)
output = docsearch.similarity_search_with_score("foo", k=2)
assert output == [
(Document(page_content="foo", metadata={"page": 0}), 1.0),
(Document(page_content="bar", metadata={"page": 1}), 0.5),
(Document(page_content="foo", metadata={"page": 0}, id="id_foo"), 1.0),
(Document(page_content="bar", metadata={"page": 1}, id="id_bar"), 0.5),
]
@ -50,20 +55,24 @@ def test_opensearch_with_custom_field_name() -> None:
opensearch_url=DEFAULT_OPENSEARCH_URL,
vector_field="my_vector",
text_field="custom_text",
ids=ids,
)
output = docsearch.similarity_search(
"foo", k=1, vector_field="my_vector", text_field="custom_text"
)
assert output == [Document(page_content="foo")]
assert output == [Document(page_content="foo", id="id_foo")]
text_input = ["test", "add", "text", "method"]
OpenSearchVectorSearch.add_texts(
docsearch, text_input, vector_field="my_vector", text_field="custom_text"
docsearch,
text_input,
vector_field="my_vector",
text_field="custom_text",
)
output = docsearch.similarity_search(
"add", k=1, vector_field="my_vector", text_field="custom_text"
)
assert output == [Document(page_content="foo")]
assert output == [Document(page_content="foo", id="id_foo")]
def test_opensearch_with_metadatas() -> None:
@ -74,9 +83,22 @@ def test_opensearch_with_metadatas() -> None:
FakeEmbeddings(),
metadatas=metadatas,
opensearch_url=DEFAULT_OPENSEARCH_URL,
ids=ids,
)
output = docsearch.similarity_search("foo", k=1)
assert output == [Document(page_content="foo", metadata={"page": 0})]
assert output == [Document(page_content="foo", metadata={"page": 0}, id="id_foo")]
def test_max_marginal_relevance_search() -> None:
"""Test end to end indexing and mmr search."""
docsearch = OpenSearchVectorSearch.from_texts(
texts,
FakeEmbeddings(),
opensearch_url=DEFAULT_OPENSEARCH_URL,
ids=ids,
)
output = docsearch.max_marginal_relevance_search("foo", k=1)
assert output == [Document(page_content="foo", id="id_foo")]
def test_add_text() -> None:
@ -86,8 +108,8 @@ def test_add_text() -> None:
docsearch = OpenSearchVectorSearch.from_texts(
texts, FakeEmbeddings(), opensearch_url=DEFAULT_OPENSEARCH_URL
)
docids = OpenSearchVectorSearch.add_texts(docsearch, text_input, metadatas)
assert len(docids) == len(text_input)
doc_ids = OpenSearchVectorSearch.add_texts(docsearch, text_input, metadatas)
assert len(doc_ids) == len(text_input)
def test_add_embeddings() -> None:
@ -112,7 +134,8 @@ def test_add_embeddings() -> None:
)
docsearch.add_embeddings(list(zip(text_input, embedding_vectors)), metadatas)
output = docsearch.similarity_search("foo1", k=1)
assert output == [Document(page_content="foo3", metadata={"page": 2})]
assert output[0].page_content == "foo3"
assert output[0].metadata == {"page": 2}
def test_opensearch_script_scoring() -> None:
@ -127,7 +150,8 @@ def test_opensearch_script_scoring() -> None:
output = docsearch.similarity_search(
"foo", k=1, search_type=SCRIPT_SCORING_SEARCH, pre_filter=pre_filter_val
)
assert output == [Document(page_content="bar")]
assert output[0].page_content == "bar"
assert output[0].id is not None
def test_add_text_script_scoring() -> None:
@ -144,7 +168,8 @@ def test_add_text_script_scoring() -> None:
output = docsearch.similarity_search(
"add", k=1, search_type=SCRIPT_SCORING_SEARCH, space_type="innerproduct"
)
assert output == [Document(page_content="test")]
assert output[0].page_content == "test"
assert output[0].id is not None
def test_opensearch_painless_scripting() -> None:
@ -159,7 +184,8 @@ def test_opensearch_painless_scripting() -> None:
output = docsearch.similarity_search(
"foo", k=1, search_type=PAINLESS_SCRIPTING_SEARCH, pre_filter=pre_filter_val
)
assert output == [Document(page_content="baz")]
assert output[0].page_content == "baz"
assert output[0].id is not None
def test_add_text_painless_scripting() -> None:
@ -176,7 +202,8 @@ def test_add_text_painless_scripting() -> None:
output = docsearch.similarity_search(
"add", k=1, search_type=PAINLESS_SCRIPTING_SEARCH, space_type="cosineSimilarity"
)
assert output == [Document(page_content="test")]
assert output[0].page_content == "test"
assert output[0].id is not None
def test_opensearch_invalid_search_type() -> None:
@ -207,7 +234,8 @@ def test_appx_search_with_boolean_filter() -> None:
output = docsearch.similarity_search(
"foo", k=3, boolean_filter=boolean_filter_val, subquery_clause="should"
)
assert output == [Document(page_content="bar")]
assert output[0].page_content == "bar"
assert output[0].id is not None
def test_appx_search_with_lucene_filter() -> None:
@ -217,7 +245,8 @@ def test_appx_search_with_lucene_filter() -> None:
texts, FakeEmbeddings(), opensearch_url=DEFAULT_OPENSEARCH_URL, engine="lucene"
)
output = docsearch.similarity_search("foo", k=3, lucene_filter=lucene_filter_val)
assert output == [Document(page_content="bar")]
assert output[0].page_content == "bar"
assert output[0].id is not None
def test_opensearch_with_custom_field_name_appx_true() -> None:
@ -230,7 +259,8 @@ def test_opensearch_with_custom_field_name_appx_true() -> None:
is_appx_search=True,
)
output = docsearch.similarity_search("add", k=1)
assert output == [Document(page_content="add")]
assert output[0].page_content == "add"
assert output[0].id is not None
def test_opensearch_with_custom_field_name_appx_false() -> None:
@ -240,7 +270,8 @@ def test_opensearch_with_custom_field_name_appx_false() -> None:
text_input, FakeEmbeddings(), opensearch_url=DEFAULT_OPENSEARCH_URL
)
output = docsearch.similarity_search("add", k=1)
assert output == [Document(page_content="add")]
assert output[0].page_content == "add"
assert output[0].id is not None
def test_opensearch_serverless_with_scripting_search_indexing_throws_error() -> None:
@ -338,4 +369,5 @@ def test_appx_search_with_faiss_efficient_filter() -> None:
output = docsearch.similarity_search(
"foo", k=3, efficient_filter=efficient_filter_val
)
assert output == [Document(page_content="bar")]
assert output[0].page_content == "bar"
assert output[0].id is not None

View File

@ -0,0 +1,78 @@
"""Test CloudflareWorkersAI Chat API wrapper."""
from typing import Any, Dict, List, Type
import pytest
from langchain_core.language_models import BaseChatModel
from langchain_core.messages import (
AIMessage,
BaseMessage,
HumanMessage,
SystemMessage,
ToolMessage,
)
from langchain_standard_tests.unit_tests import ChatModelUnitTests
from langchain_community.chat_models.cloudflare_workersai import (
ChatCloudflareWorkersAI,
_convert_messages_to_cloudflare_messages,
)
class TestChatCloudflareWorkersAI(ChatModelUnitTests):
@property
def chat_model_class(self) -> Type[BaseChatModel]:
return ChatCloudflareWorkersAI
@property
def chat_model_params(self) -> dict:
return {
"account_id": "my_account_id",
"api_token": "my_api_token",
"model": "@hf/nousresearch/hermes-2-pro-mistral-7b",
}
@pytest.mark.parametrize(
("messages", "expected"),
[
# Test case with a single HumanMessage
(
[HumanMessage(content="Hello, AI!")],
[{"role": "user", "content": "Hello, AI!"}],
),
# Test case with SystemMessage, HumanMessage, and AIMessage without tool calls
(
[
SystemMessage(content="System initialized."),
HumanMessage(content="Hello, AI!"),
AIMessage(content="Response from AI"),
],
[
{"role": "system", "content": "System initialized."},
{"role": "user", "content": "Hello, AI!"},
{"role": "assistant", "content": "Response from AI"},
],
),
# Test case with ToolMessage and tool_call_id
(
[
ToolMessage(
content="Tool message content", tool_call_id="tool_call_123"
),
],
[
{
"role": "tool",
"content": "Tool message content",
"tool_call_id": "tool_call_123",
}
],
),
],
)
def test_convert_messages_to_cloudflare_format(
messages: List[BaseMessage], expected: List[Dict[str, Any]]
) -> None:
result = _convert_messages_to_cloudflare_messages(messages)
assert result == expected
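The mapping these cases exercise can be sketched as a standalone function. This is an illustrative reimplementation only — the `Msg` classes below are minimal stand-ins for the LangChain message types, not the real `_convert_messages_to_cloudflare_messages` helper:

```python
from dataclasses import dataclass

# Minimal stand-ins for the LangChain message classes used in the tests above.
@dataclass
class Msg:
    content: str

class HumanMsg(Msg): pass
class SystemMsg(Msg): pass
class AIMsg(Msg): pass

@dataclass
class ToolMsg(Msg):
    tool_call_id: str = ""

def to_cloudflare_messages(messages):
    """Map message objects to Cloudflare Workers AI role dicts."""
    role_by_type = {HumanMsg: "user", SystemMsg: "system", AIMsg: "assistant"}
    out = []
    for m in messages:
        if isinstance(m, ToolMsg):
            # Tool results carry their tool_call_id through to the API payload.
            out.append({"role": "tool", "content": m.content,
                        "tool_call_id": m.tool_call_id})
        else:
            out.append({"role": role_by_type[type(m)], "content": m.content})
    return out
```

The parametrized cases above are exactly this shape: each message type maps to a role string, with tool messages keeping their call ID.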

View File

@ -8,6 +8,7 @@ EXPECTED_ALL = [
"FlashrankRerank",
"DashScopeRerank",
"VolcengineRerank",
"InfinityRerank",
]

View File

@ -62,6 +62,7 @@ EXPECTED_ALL = [
"GmailGetThread",
"GmailSearch",
"GmailSendMessage",
"GoogleBooksQueryRun",
"GoogleCloudTextToSpeechTool",
"GooglePlacesTool",
"GoogleSearchResults",

View File

@ -13,6 +13,7 @@ EXPECTED_ALL = [
"DuckDuckGoSearchAPIWrapper",
"DriaAPIWrapper",
"GoldenQueryAPIWrapper",
"GoogleBooksAPIWrapper",
"GoogleFinanceAPIWrapper",
"GoogleJobsAPIWrapper",
"GoogleLensAPIWrapper",

View File

@ -53,7 +53,7 @@ LangChain Core compiles LCEL sequences to an _optimized execution plan_, with au
For more check out the [LCEL docs](https://python.langchain.com/docs/expression_language/).
![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](https://raw.githubusercontent.com/langchain-ai/langchain/e1d113ea84a2edcf4a7709fc5be0e972ea74a5d9/docs/static/svg/langchain_stack_062024.svg "LangChain Framework Overview")
![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](https://raw.githubusercontent.com/langchain-ai/langchain/master/docs/static/svg/langchain_stack_112024.svg "LangChain Framework Overview")
For more advanced use cases, also check out [LangGraph](https://github.com/langchain-ai/langgraph), which is a graph-based runner for cyclic and recursive LLM workflows.

View File

@ -590,7 +590,7 @@ def trim_messages(
include_system: bool = False,
text_splitter: Optional[Union[Callable[[str], list[str]], TextSplitter]] = None,
) -> list[BaseMessage]:
"""Trim messages to be below a token count.
r"""Trim messages to be below a token count.
trim_messages can be used to reduce the size of a chat history to a specified token
count or specified message count.
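The idea the docstring describes can be sketched with a naive trimmer over `(role, text)` pairs. This is a toy illustration under stated assumptions — whitespace word count stands in for a real token counter, and the actual `trim_messages` supports strategies, partial messages, and pluggable counters:

```python
# Illustrative sketch only: keep the most recent messages that fit a token
# budget, optionally preserving the leading system message.
def trim_history(messages, max_tokens, include_system=True):
    count = lambda text: len(text.split())  # toy stand-in token counter
    system = [m for m in messages if m[0] == "system"][:1] if include_system else []
    budget = max_tokens - sum(count(t) for _, t in system)
    kept = []
    # Walk newest-to-oldest, stopping once the next message would overflow.
    for role, text in reversed([m for m in messages if m not in system]):
        if count(text) > budget:
            break
        kept.append((role, text))
        budget -= count(text)
    return system + list(reversed(kept))
```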

View File

@ -2,7 +2,7 @@ import asyncio
import base64
import re
from dataclasses import asdict
from typing import Optional
from typing import Literal, Optional
from langchain_core.runnables.graph import (
CurveStyle,
@ -306,6 +306,7 @@ def _render_mermaid_using_api(
mermaid_syntax: str,
output_file_path: Optional[str] = None,
background_color: Optional[str] = "white",
file_type: Optional[Literal["jpeg", "png", "webp"]] = "png",
) -> bytes:
"""Renders Mermaid graph using the Mermaid.INK API."""
try:
@ -329,7 +330,8 @@ def _render_mermaid_using_api(
background_color = f"!{background_color}"
image_url = (
f"https://mermaid.ink/img/{mermaid_syntax_encoded}?bgColor={background_color}"
f"https://mermaid.ink/img/{mermaid_syntax_encoded}"
f"?type={file_type}&bgColor={background_color}"
)
response = requests.get(image_url, timeout=10)
if response.status_code == 200:
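The URL construction this hunk changes can be sketched in isolation: mermaid.ink takes the base64-encoded diagram source in the path, and the new `type` parameter rides alongside `bgColor` in the query string. A minimal sketch (the real helper also handles hex colors and makes the HTTP request with a timeout):

```python
import base64

def mermaid_ink_url(mermaid_syntax, file_type="png", background_color="white"):
    # mermaid.ink expects the diagram source base64-encoded in the URL path.
    encoded = base64.b64encode(mermaid_syntax.encode("utf8")).decode("ascii")
    if not background_color.startswith("!"):
        background_color = f"!{background_color}"  # "!" marks a named color
    return (
        f"https://mermaid.ink/img/{encoded}"
        f"?type={file_type}&bgColor={background_color}"
    )
```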

View File

@ -341,7 +341,7 @@ def convert_to_openai_function(
A dictionary, Pydantic BaseModel class, TypedDict class, a LangChain
Tool object, or a Python function. If a dictionary is passed in, it is
assumed to already be a valid OpenAI function, a JSON schema with
top-level 'title' and 'description' keys specified, an Anthropic format
top-level 'title' key specified, an Anthropic format
tool, or an Amazon Bedrock Converse format tool.
strict:
If True, model output is guaranteed to exactly match the JSON Schema
@ -366,40 +366,47 @@ def convert_to_openai_function(
.. versionchanged:: 0.3.14
Support for Amazon Bedrock Converse format tools added.
.. versionchanged:: 0.3.16
'description' and 'parameters' keys are now optional. Only 'name' is
required and guaranteed to be part of the output.
"""
from langchain_core.tools import BaseTool
# already in OpenAI function format
if isinstance(function, dict) and all(
k in function for k in ("name", "description", "parameters")
):
oai_function = function
# a JSON schema with title and description
elif isinstance(function, dict) and all(
k in function for k in ("title", "description", "properties")
):
function = function.copy()
oai_function = {
"name": function.pop("title"),
"description": function.pop("description"),
"parameters": function,
}
# an Anthropic format tool
elif isinstance(function, dict) and all(
k in function for k in ("name", "description", "input_schema")
if isinstance(function, dict) and all(
k in function for k in ("name", "input_schema")
):
oai_function = {
"name": function["name"],
"description": function["description"],
"parameters": function["input_schema"],
}
if "description" in function:
oai_function["description"] = function["description"]
# an Amazon Bedrock Converse format tool
elif isinstance(function, dict) and "toolSpec" in function:
oai_function = {
"name": function["toolSpec"]["name"],
"description": function["toolSpec"]["description"],
"parameters": function["toolSpec"]["inputSchema"]["json"],
}
if "description" in function["toolSpec"]:
oai_function["description"] = function["toolSpec"]["description"]
# already in OpenAI function format
elif isinstance(function, dict) and "name" in function:
oai_function = {
k: v
for k, v in function.items()
if k in ("name", "description", "parameters", "strict")
}
# a JSON schema with title and description
elif isinstance(function, dict) and "title" in function:
function_copy = function.copy()
oai_function = {"name": function_copy.pop("title")}
if "description" in function_copy:
oai_function["description"] = function_copy.pop("description")
if function_copy and "properties" in function_copy:
oai_function["parameters"] = function_copy
elif isinstance(function, type) and is_basemodel_subclass(function):
oai_function = cast(dict, convert_pydantic_to_openai_function(function))
elif is_typeddict(function):
@ -420,6 +427,13 @@ def convert_to_openai_function(
raise ValueError(msg)
if strict is not None:
if "strict" in oai_function and oai_function["strict"] != strict:
msg = (
f"Tool/function already has a 'strict' key with value "
f"{oai_function['strict']} which is different from the explicit "
f"`strict` arg received {strict=}."
)
raise ValueError(msg)
oai_function["strict"] = strict
if strict:
# As of 08/06/24, OpenAI requires that additionalProperties be supplied and
@ -438,12 +452,16 @@ def convert_to_openai_tool(
) -> dict[str, Any]:
"""Convert a tool-like object to an OpenAI tool schema.
OpenAI tool schema reference:
https://platform.openai.com/docs/api-reference/chat/create#chat-create-tools
Args:
tool:
Either a dictionary, a pydantic.BaseModel class, Python function, or
BaseTool. If a dictionary is passed in, it is assumed to already be a valid
OpenAI tool, OpenAI function, a JSON schema with top-level 'title' and
'description' keys specified, or an Anthropic format tool.
BaseTool. If a dictionary is passed in, it is
assumed to already be a valid OpenAI function, a JSON schema with
top-level 'title' key specified, an Anthropic format
tool, or an Amazon Bedrock Converse format tool.
strict:
If True, model output is guaranteed to exactly match the JSON Schema
provided in the function definition. If None, ``strict`` argument will not
@ -460,6 +478,15 @@ def convert_to_openai_tool(
.. versionchanged:: 0.3.13
Support for Anthropic format tools added.
.. versionchanged:: 0.3.14
Support for Amazon Bedrock Converse format tools added.
.. versionchanged:: 0.3.16
'description' and 'parameters' keys are now optional. Only 'name' is
required and guaranteed to be part of the output.
"""
if isinstance(tool, dict) and tool.get("type") == "function" and "function" in tool:
return tool
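The dict branches this hunk rewrites — Anthropic, Bedrock Converse, OpenAI-function, and bare-JSON-schema inputs all normalized so that only `'name'` is guaranteed in the output — can be sketched standalone. This is an illustration of the dispatch order, not the library function itself:

```python
def to_openai_function(function: dict) -> dict:
    """Normalize a tool-like dict to OpenAI function format.

    Only 'name' is guaranteed in the output; 'description' and
    'parameters' are emitted only when present in the input.
    """
    if "name" in function and "input_schema" in function:  # Anthropic tool
        out = {"name": function["name"], "parameters": function["input_schema"]}
        if "description" in function:
            out["description"] = function["description"]
    elif "toolSpec" in function:  # Amazon Bedrock Converse tool
        spec = function["toolSpec"]
        out = {"name": spec["name"], "parameters": spec["inputSchema"]["json"]}
        if "description" in spec:
            out["description"] = spec["description"]
    elif "name" in function:  # already OpenAI function format
        out = {k: v for k, v in function.items()
               if k in ("name", "description", "parameters", "strict")}
    elif "title" in function:  # a JSON schema
        schema = dict(function)
        out = {"name": schema.pop("title")}
        if "description" in schema:
            out["description"] = schema.pop("description")
        if schema and "properties" in schema:
            out["parameters"] = schema
    else:
        raise ValueError("Unsupported tool dict")
    return out
```

Note the order: the Anthropic check must come before the generic `"name" in function` branch, since Anthropic tools also carry a `name` key.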

View File

@ -459,6 +459,130 @@ def test_convert_to_openai_function_nested_strict() -> None:
assert actual == expected
json_schema_no_description_no_params = {
"title": "dummy_function",
}
json_schema_no_description = {
"title": "dummy_function",
"type": "object",
"properties": {
"arg1": {"description": "foo", "type": "integer"},
"arg2": {
"description": "one of 'bar', 'baz'",
"enum": ["bar", "baz"],
"type": "string",
},
},
"required": ["arg1", "arg2"],
}
anthropic_tool_no_description = {
"name": "dummy_function",
"input_schema": {
"type": "object",
"properties": {
"arg1": {"description": "foo", "type": "integer"},
"arg2": {
"description": "one of 'bar', 'baz'",
"enum": ["bar", "baz"],
"type": "string",
},
},
"required": ["arg1", "arg2"],
},
}
bedrock_converse_tool_no_description = {
"toolSpec": {
"name": "dummy_function",
"inputSchema": {
"json": {
"type": "object",
"properties": {
"arg1": {"description": "foo", "type": "integer"},
"arg2": {
"description": "one of 'bar', 'baz'",
"enum": ["bar", "baz"],
"type": "string",
},
},
"required": ["arg1", "arg2"],
}
},
}
}
openai_function_no_description = {
"name": "dummy_function",
"parameters": {
"type": "object",
"properties": {
"arg1": {"description": "foo", "type": "integer"},
"arg2": {
"description": "one of 'bar', 'baz'",
"enum": ["bar", "baz"],
"type": "string",
},
},
"required": ["arg1", "arg2"],
},
}
openai_function_no_description_no_params = {
"name": "dummy_function",
}
@pytest.mark.parametrize(
"func",
[
anthropic_tool_no_description,
json_schema_no_description,
bedrock_converse_tool_no_description,
openai_function_no_description,
],
)
def test_convert_to_openai_function_no_description(func: dict) -> None:
expected = {
"name": "dummy_function",
"parameters": {
"type": "object",
"properties": {
"arg1": {"description": "foo", "type": "integer"},
"arg2": {
"description": "one of 'bar', 'baz'",
"enum": ["bar", "baz"],
"type": "string",
},
},
"required": ["arg1", "arg2"],
},
}
actual = convert_to_openai_function(func)
assert actual == expected
@pytest.mark.parametrize(
"func",
[
json_schema_no_description_no_params,
openai_function_no_description_no_params,
],
)
def test_convert_to_openai_function_no_description_no_params(func: dict) -> None:
expected = {
"name": "dummy_function",
}
actual = convert_to_openai_function(func)
assert actual == expected
@pytest.mark.xfail(
reason="Pydantic converts Optional[str] to str in .model_json_schema()"
)

View File

@ -149,7 +149,16 @@ def init_chat_model(
``config["configurable"]["{config_prefix}_{param}"]`` keys. If
config_prefix is an empty string then model will be configurable via
``config["configurable"]["{param}"]``.
kwargs: Additional keyword args to pass to
temperature: Model temperature.
max_tokens: Max output tokens.
timeout: The maximum time (in seconds) to wait for a response from the model
before canceling the request.
max_retries: The maximum number of attempts the system will make to resend a
request if it fails due to issues like network timeouts or rate limits.
base_url: The URL of the API endpoint where requests are sent.
rate_limiter: A ``BaseRateLimiter`` to space out requests to avoid exceeding
rate limits.
kwargs: Additional model-specific keyword args to pass to
``<<selected ChatModel>>.__init__(model=model_name, **kwargs)``.
Returns:

View File

@ -141,5 +141,5 @@ packages:
repo: langchain-ai/langchain
path: libs/partners/ollama
- name: langchain-box
repo: langchain-ai/langchain
path: libs/partners/box
repo: langchain-ai/langchain-box
path: libs/box

View File

@ -24,14 +24,14 @@ def test_anthropic_model_param() -> None:
def test_anthropic_call() -> None:
"""Test valid call to anthropic."""
llm = Anthropic(model="claude-instant-1") # type: ignore[call-arg]
llm = Anthropic(model="claude-2.1") # type: ignore[call-arg]
output = llm.invoke("Say foo:")
assert isinstance(output, str)
def test_anthropic_streaming() -> None:
"""Test streaming tokens from anthropic."""
llm = Anthropic(model="claude-instant-1") # type: ignore[call-arg]
llm = Anthropic(model="claude-2.1") # type: ignore[call-arg]
generator = llm.stream("I'm Pickle Rick")
assert isinstance(generator, Generator)

View File

@ -1 +0,0 @@
__pycache__

View File

@ -1,21 +0,0 @@
MIT License
Copyright (c) 2024 LangChain, Inc.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

View File

@ -1,65 +0,0 @@
.PHONY: all format lint test tests integration_tests docker_tests help extended_tests
# Default target executed when no arguments are given to make.
all: help
# Define a variable for the test file path.
TEST_FILE ?= tests/unit_tests/
integration_test integration_tests: TEST_FILE = tests/integration_tests/
# unit tests are run with the --disable-socket flag to prevent network calls
test tests:
poetry run pytest --disable-socket --allow-unix-socket $(TEST_FILE)
test_watch:
poetry run ptw --snapshot-update --now . -- -vv $(TEST_FILE)
# integration tests are run without the --disable-socket flag to allow network calls
integration_test integration_tests:
poetry run pytest $(TEST_FILE)
######################
# LINTING AND FORMATTING
######################
# Define a variable for Python and notebook files.
PYTHON_FILES=.
MYPY_CACHE=.mypy_cache
lint format: PYTHON_FILES=.
lint_diff format_diff: PYTHON_FILES=$(shell git diff --relative=libs/partners/box --name-only --diff-filter=d master | grep -E '\.py$$|\.ipynb$$')
lint_package: PYTHON_FILES=langchain_box
lint_tests: PYTHON_FILES=tests
lint_tests: MYPY_CACHE=.mypy_cache_test
lint lint_diff lint_package lint_tests:
poetry run ruff .
poetry run ruff format $(PYTHON_FILES) --diff
poetry run ruff --select I $(PYTHON_FILES)
mkdir -p $(MYPY_CACHE); poetry run mypy $(PYTHON_FILES) --cache-dir $(MYPY_CACHE)
format format_diff:
poetry run ruff format $(PYTHON_FILES)
poetry run ruff --select I --fix $(PYTHON_FILES)
spell_check:
poetry run codespell --toml pyproject.toml
spell_fix:
poetry run codespell --toml pyproject.toml -w
check_imports: $(shell find langchain_box -name '*.py')
poetry run python ./scripts/check_imports.py $^
######################
# HELP
######################
help:
@echo '----'
@echo 'check_imports - check imports'
@echo 'format - run code formatters'
@echo 'lint - run linters'
@echo 'test - run unit tests'
@echo 'tests - run unit tests'
@echo 'test TEST_FILE=<test_file> - run all tests in file'

View File

@ -1,195 +1,3 @@
# langchain-box
This package has moved!
This package contains the LangChain integration with Box. For more information about
Box, check out our [developer documentation](https://developer.box.com).
## Pre-requisites
In order to integrate with Box, you need a few things:
* A Box instance — if you are not a current Box customer, sign up for a
[free dev account](https://account.box.com/signup/n/developer#ty9l3).
* A Box app — more on how to
[create an app](https://developer.box.com/guides/getting-started/first-application/)
* Your app approved in your Box instance — This is done by your admin.
The good news is if you are using a free developer account, you are the admin.
[Authorize your app](https://developer.box.com/guides/authorization/custom-app-approval/#manual-approval)
## Installation
```bash
pip install -U langchain-box
```
## Authentication
The `box-langchain` package offers some flexibility to authentication. The
most basic authentication method is by using a developer token. This can be
found in the [Box developer console](https://account.box.com/developers/console)
on the configuration screen. This token is purposely short-lived (1 hour) and is
intended for development. With this token, you can add it to your environment as
`BOX_DEVELOPER_TOKEN`, you can pass it directly to the loader, or you can use the
`BoxAuth` authentication helper class.
We will cover passing it directly to the loader in the section below.
### BoxAuth helper class
`BoxAuth` supports the following authentication methods:
* Token — either a developer token or any token generated through the Box SDK
* JWT with a service account
* JWT with a specified user
* CCG with a service account
* CCG with a specified user
> [!NOTE]
> If using JWT authentication, you will need to download the configuration from the Box
> developer console after generating your public/private key pair. Place this file in your
> application directory structure somewhere. You will use the path to this file when using
> the `BoxAuth` helper class.
For more information, learn about how to
[set up a Box application](https://developer.box.com/guides/getting-started/first-application/),
and check out the
[Box authentication guide](https://developer.box.com/guides/authentication/select/)
for more about our different authentication options.
Examples:
**Token**
```python
from langchain_box.document_loaders import BoxLoader
from langchain_box.utilities import BoxAuth, BoxAuthType
auth = BoxAuth(
auth_type=BoxAuthType.TOKEN,
box_developer_token=box_developer_token
)
loader = BoxLoader(
box_auth=auth,
...
)
```
**JWT with a service account**
```python
from langchain_box.document_loaders import BoxLoader
from langchain_box.utilities import BoxAuth, BoxAuthType
auth = BoxAuth(
auth_type=BoxAuthType.JWT,
box_jwt_path=box_jwt_path
)
loader = BoxLoader(
box_auth=auth,
...
```
**JWT with a specified user**
```python
from langchain_box.document_loaders import BoxLoader
from langchain_box.utilities import BoxAuth, BoxAuthType
auth = BoxAuth(
auth_type=BoxAuthType.JWT,
box_jwt_path=box_jwt_path,
box_user_id=box_user_id
)
loader = BoxLoader(
box_auth=auth,
...
```
**CCG with a service account**
```python
from langchain_box.document_loaders import BoxLoader
from langchain_box.utilities import BoxAuth, BoxAuthType
auth = BoxAuth(
auth_type=BoxAuthType.CCG,
box_client_id=box_client_id,
box_client_secret=box_client_secret,
box_enterprise_id=box_enterprise_id
)
loader = BoxLoader(
box_auth=auth,
...
```
**CCG with a specified user**
```python
from langchain_box.document_loaders import BoxLoader
from langchain_box.utilities import BoxAuth, BoxAuthType
auth = BoxAuth(
auth_type=BoxAuthType.CCG,
box_client_id=box_client_id,
box_client_secret=box_client_secret,
box_user_id=box_user_id
)
loader = BoxLoader(
box_auth=auth,
...
```
## Document Loaders
The `BoxLoader` class helps you get your unstructured content from Box
in Langchain's `Document` format. You can do this with either a `List[str]`
containing Box file IDs, or with a `str` containing a Box folder ID.
If getting files from a folder with folder ID, you can also set a `Bool` to
tell the loader to get all sub-folders in that folder, as well.
:::info
A Box instance can contain petabytes of files, and folders can contain millions
of files. Be intentional about which folders you choose to index, and we
recommend never fetching all files from folder 0 recursively. Folder ID 0 is your
root folder.
:::
### Load files
```python
import os
from langchain_box.document_loaders import BoxLoader
os.environ["BOX_DEVELOPER_TOKEN"] = "df21df2df21df2d1f21df2df1"
loader = BoxLoader(
box_file_ids=["12345", "67890"],
character_limit=10000 # Optional. Defaults to no limit
)
docs = loader.lazy_load()
```
### Load from folder
```python
import os
from langchain_box.document_loaders import BoxLoader
os.environ["BOX_DEVELOPER_TOKEN"] = "df21df2df21df2d1f21df2df1"
loader = BoxLoader(
box_folder_id="12345",
recursive=False, # Optional. return entire tree, defaults to False
character_limit=10000 # Optional. Defaults to no limit
)
docs = loader.lazy_load()
```
https://github.com/langchain-ai/langchain-box/tree/main/libs/box

View File

@ -1,31 +0,0 @@
from importlib import metadata
from langchain_box.document_loaders import BoxLoader
from langchain_box.retrievers import BoxRetriever
from langchain_box.utilities.box import (
BoxAuth,
BoxAuthType,
BoxSearchOptions,
DocumentFiles,
SearchTypeFilter,
_BoxAPIWrapper,
)
try:
__version__ = metadata.version(__package__)
except metadata.PackageNotFoundError:
# Case where package metadata is not available.
__version__ = ""
del metadata # optional, avoids polluting the results of dir(__package__)
__all__ = [
"BoxLoader",
"BoxRetriever",
"BoxAuth",
"BoxAuthType",
"BoxSearchOptions",
"DocumentFiles",
"SearchTypeFilter",
"_BoxAPIWrapper",
"__version__",
]

View File

@ -1,5 +0,0 @@
"""Box Document Loaders."""
from langchain_box.document_loaders.box import BoxLoader
__all__ = ["BoxLoader"]

View File

@ -1,260 +0,0 @@
from typing import Iterator, List, Optional
from box_sdk_gen import FileBaseTypeField # type: ignore
from langchain_core.document_loaders.base import BaseLoader
from langchain_core.documents import Document
from langchain_core.utils import from_env
from pydantic import BaseModel, ConfigDict, Field, model_validator
from typing_extensions import Self
from langchain_box.utilities import BoxAuth, _BoxAPIWrapper
class BoxLoader(BaseLoader, BaseModel):
"""BoxLoader.
This class will help you load files from your Box instance. You must have a
Box account. If you need one, you can sign up for a free developer account.
You will also need a Box application created in the developer portal, where
you can select your authorization type.
If you wish to use either of the Box AI options, you must be on an Enterprise
Plus plan or above. The free developer account does not have access to Box AI.
In addition, using the Box AI API requires a few prerequisite steps:
* Your administrator must enable the Box AI API
* You must enable the ``Manage AI`` scope in your app in the developer console.
* Your administrator must install and enable your application.
**Setup**:
Install ``langchain-box`` and set environment variable ``BOX_DEVELOPER_TOKEN``.
.. code-block:: bash
pip install -U langchain-box
export BOX_DEVELOPER_TOKEN="your-api-key"
This loader returns ``Document`` objects built from text representations of files
in Box. It will skip any document without a text representation available. You can
provide either a ``List[str]`` containing Box file IDs, or you can provide a
``str`` containing a Box folder ID. If providing a folder ID, you can also enable
recursive mode to get the full tree under that folder.
.. note::
A Box instance can contain petabytes of files, and folders can contain millions
of files. Be intentional about which folders you choose to index, and we
recommend never fetching all files from folder 0 recursively. Folder ID 0 is your
root folder.
**Instantiate**:
.. list-table:: Initialization variables
:widths: 25 50 15 10
:header-rows: 1
* - Variable
- Description
- Type
- Default
* - box_developer_token
- Token to use for auth.
- ``str``
- ``None``
* - box_auth
- Configured ``BoxAuth`` object to use for auth.
- ``langchain_box.utilities.BoxAuth``
- ``None``
* - box_file_ids
- List of Box file IDs to load.
- ``List[str]``
- ``None``
* - box_folder_id
- Box folder ID to load files from.
- ``str``
- ``None``
* - recursive
- If loading by folder ID, whether to traverse subfolders.
- ``Bool``
- ``False``
* - character_limit
- Cap on the number of characters returned per document.
- ``int``
- ``-1``
**Get files** this method requires you pass the ``box_file_ids`` parameter.
This is a ``List[str]`` containing the file IDs you wish to index.
.. code-block:: python
from langchain_box.document_loaders import BoxLoader
box_file_ids = ["1514555423624", "1514553902288"]
loader = BoxLoader(
box_file_ids=box_file_ids,
character_limit=10000 # Optional. Defaults to no limit
)
**Get files in a folder** this method requires you pass the ``box_folder_id``
parameter. This is a ``str`` containing the folder ID you wish to index.
.. code-block:: python
from langchain_box.document_loaders import BoxLoader
box_folder_id = "260932470532"
loader = BoxLoader(
box_folder_id=box_folder_id,
recursive=False # Optional. return entire tree, defaults to False
)
**Load**:
.. code-block:: python
docs = loader.load()
docs[0]
.. code-block:: python
Document(metadata={'source': 'https://dl.boxcloud.com/api/2.0/
internal_files/1514555423624/versions/1663171610024/representations
/extracted_text/content/', 'title': 'Invoice-A5555_txt'},
page_content='Vendor: AstroTech Solutions\\nInvoice Number: A5555\\n\\nLine
Items:\\n - Gravitational Wave Detector Kit: $800\\n - Exoplanet
Terrarium: $120\\nTotal: $920')
**Lazy load**:
.. code-block:: python
docs = []
docs_lazy = loader.lazy_load()
for doc in docs_lazy:
docs.append(doc)
print(docs[0].page_content[:100])
print(docs[0].metadata)
.. code-block:: python
Document(metadata={'source': 'https://dl.boxcloud.com/api/2.0/
internal_files/1514555423624/versions/1663171610024/representations
/extracted_text/content/', 'title': 'Invoice-A5555_txt'},
page_content='Vendor: AstroTech Solutions\\nInvoice Number: A5555\\n\\nLine
Items:\\n - Gravitational Wave Detector Kit: $800\\n - Exoplanet
Terrarium: $120\\nTotal: $920')
"""
box_developer_token: Optional[str] = Field(
default_factory=from_env("BOX_DEVELOPER_TOKEN", default=None)
)
"""String containing the Box Developer Token generated in the developer console"""
box_auth: Optional[BoxAuth] = None
"""Configured
`BoxAuth <https://python.langchain.com/v0.2/api_reference/box/utilities/langchain_box.utilities.box.BoxAuth.html>`_
object"""
box_file_ids: Optional[List[str]] = None
"""List[str] containing Box file ids"""
box_folder_id: Optional[str] = None
"""String containing box folder id to load files from"""
recursive: Optional[bool] = False
"""If getting files by folder id, recursive is a bool to determine if you wish
to traverse subfolders to return child documents. Default is False"""
character_limit: Optional[int] = -1
"""character_limit is an int that caps the number of characters to
return per document."""
_box: Optional[_BoxAPIWrapper] = None
model_config = ConfigDict(
arbitrary_types_allowed=True,
extra="allow",
use_enum_values=True,
)
@model_validator(mode="after")
def validate_box_loader_inputs(self) -> Self:
_box = None
"""Validate that has either box_file_ids or box_folder_id."""
if not self.box_file_ids and not self.box_folder_id:
raise ValueError("You must provide box_file_ids or box_folder_id.")
"""Validate that we don't have both box_file_ids and box_folder_id."""
if self.box_file_ids and self.box_folder_id:
raise ValueError(
"You must provide either box_file_ids or box_folder_id, not both."
)
"""Validate that we have either a box_developer_token or box_auth."""
if not self.box_auth:
if not self.box_developer_token:
raise ValueError(
"you must provide box_developer_token or a box_auth "
"generated with langchain_box.utilities.BoxAuth"
)
else:
_box = _BoxAPIWrapper( # type: ignore[call-arg]
box_developer_token=self.box_developer_token,
character_limit=self.character_limit,
)
else:
_box = _BoxAPIWrapper( # type: ignore[call-arg]
box_auth=self.box_auth,
character_limit=self.character_limit,
)
self._box = _box
return self
def _get_files_from_folder(self, folder_id): # type: ignore[no-untyped-def]
folder_content = self.box.get_folder_items(folder_id)
for file in folder_content:
try:
if file.type == FileBaseTypeField.FILE:
doc = self._box.get_document_by_file_id(file.id)
if doc is not None:
yield doc
elif file.type == "folder" and self.recursive:
try:
yield from self._get_files_from_folder(file.id)
except TypeError:
pass
except TypeError:
pass
def lazy_load(self) -> Iterator[Document]:
"""Load documents. Accepts no arguments. Returns `Iterator[Document]`"""
if self.box_file_ids:
for file_id in self.box_file_ids:
try:
file = self._box.get_document_by_file_id(file_id) # type: ignore[union-attr]
if file is not None:
yield file
except TypeError:
pass
elif self.box_folder_id:
try:
yield from self._get_files_from_folder(self.box_folder_id)
except TypeError:
pass
except Exception as e:
print(f"Exception {e}") # noqa: T201
else:
raise ValueError(
"You must provide either `box_file_ids` or `box_folder_id`"
)

View File

@ -1,5 +0,0 @@
"""Box Document Loaders."""
from langchain_box.retrievers.box import BoxRetriever
__all__ = ["BoxRetriever"]

View File

@ -1,185 +0,0 @@
from typing import List, Optional
from langchain_core.callbacks import CallbackManagerForRetrieverRun
from langchain_core.documents import Document
from langchain_core.retrievers import BaseRetriever
from langchain_core.utils import from_env
from pydantic import ConfigDict, Field, model_validator
from typing_extensions import Self
from langchain_box.utilities import BoxAuth, BoxSearchOptions, _BoxAPIWrapper
class BoxRetriever(BaseRetriever):
"""Box retriever.
`BoxRetriever` provides the ability to retrieve content from
your Box instance in a couple of ways.
1. You can use the Box full-text search to retrieve the
complete document(s) that match your search query, as
`List[Document]`
2. You can use the Box AI Platform API to retrieve the results
from a Box AI prompt. This can be a `Document` containing
the result of the prompt, or you can retrieve the citations
used to generate the prompt to include in your vectorstore.
Setup:
Install ``langchain-box``:
.. code-block:: bash
pip install -U langchain-box
Instantiate:
To use search:
.. code-block:: python
from langchain_box.retrievers import BoxRetriever
retriever = BoxRetriever()
To use Box AI:
.. code-block:: python
from langchain_box.retrievers import BoxRetriever
file_ids=["12345","67890"]
retriever = BoxRetriever(file_ids)
Usage:
.. code-block:: python
retriever = BoxRetriever()
retriever.invoke("victor")
print(docs[0].page_content[:100])
.. code-block:: none
[
Document(
metadata={
'source': 'url',
'title': 'FIVE_FEET_AND_RISING_by_Peter_Sollett_pdf'
},
page_content='\\n3/20/23, 5:31 PM F...'
)
]
Use within a chain:
.. code-block:: python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI
retriever = BoxRetriever(box_developer_token=box_developer_token, character_limit=10000)
context="You are an actor reading scripts to learn about your role in an upcoming movie."
question="describe the character Victor"
prompt = ChatPromptTemplate.from_template(
\"""Answer the question based only on the context provided.
Context: {context}
Question: {question}\"""
)
def format_docs(docs):
return "\\n\\n".join(doc.page_content for doc in docs)
chain = (
{"context": retriever | format_docs, "question": RunnablePassthrough()}
| prompt
| llm
| StrOutputParser()
)
chain.invoke("Victor") # search query to find files in Box
)
.. code-block:: none
'Victor is a skinny 12-year-old with sloppy hair who is seen
sleeping on his fire escape in the sun. He is hesitant to go to
the pool with his friend Carlos because he is afraid of getting
in trouble for not letting his mother cut his hair. Ultimately,
he decides to go to the pool with Carlos.'
""" # noqa: E501
box_developer_token: Optional[str] = Field(
default_factory=from_env("BOX_DEVELOPER_TOKEN", default=None)
)
box_auth: Optional[BoxAuth] = None
"""Configured
`BoxAuth <https://python.langchain.com/v0.2/api_reference/box/utilities/langchain_box.utilities.box.BoxAuth.html>`_
object"""
box_file_ids: Optional[List[str]] = None
"""List[str] containing Box file ids"""
character_limit: Optional[int] = -1
"""character_limit is an int that caps the number of characters to
return per document."""
box_search_options: Optional[BoxSearchOptions] = None
"""Search options to configure BoxRetriever to narrow search results."""
answer: Optional[bool] = True
"""When using Box AI, return the answer to the prompt as a `Document`
object. Returned as `List[Document]`. Default is `True`."""
citations: Optional[bool] = False
"""When using Box AI, return the citations for the prompt as
`Document` objects. Can be used with answer. Returned as `List[Document]`.
Default is `False`."""
_box: Optional[_BoxAPIWrapper]
model_config = ConfigDict(
arbitrary_types_allowed=True,
extra="allow",
)
@model_validator(mode="after")
def validate_box_loader_inputs(self) -> Self:
"""Validate that we have either a box_developer_token or box_auth."""
_box = None
if not self.box_auth and not self.box_developer_token:
raise ValueError(
"you must provide box_developer_token or a box_auth "
"generated with langchain_box.utilities.BoxAuth"
)
_box = _BoxAPIWrapper( # type: ignore[call-arg]
box_developer_token=self.box_developer_token,
box_auth=self.box_auth,
character_limit=self.character_limit,
box_search_options=self.box_search_options,
)
self._box = _box
return self
def _get_relevant_documents(
self, query: str, *, run_manager: CallbackManagerForRetrieverRun
) -> List[Document]:
if self.box_file_ids: # If using Box AI
return self._box.ask_box_ai( # type: ignore[union-attr]
query=query,
box_file_ids=self.box_file_ids,
answer=self.answer, # type: ignore[arg-type]
citations=self.citations, # type: ignore[arg-type]
)
else: # If using Search
return self._box.search_box(query=query) # type: ignore[union-attr]
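``_get_relevant_documents`` above dispatches on ``box_file_ids``: when file ids are present it routes the query to Box AI, otherwise it falls back to search. A minimal pure-Python sketch of that dispatch, using stand-in stubs (``FakeBox`` and this ``Document`` are illustrative, not the real Box or LangChain classes):

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Document:
    page_content: str
    metadata: dict = field(default_factory=dict)


class FakeBox:
    """Stand-in for _BoxAPIWrapper; not the real Box API."""

    def search_box(self, query: str) -> List[Document]:
        return [Document(page_content=f"search hit for {query!r}")]

    def ask_box_ai(
        self, query: str, box_file_ids: List[str], answer: bool = True
    ) -> List[Document]:
        docs = []
        if answer:
            docs.append(
                Document(
                    page_content=f"AI answer to {query!r}",
                    metadata={"source": "Box AI"},
                )
            )
        return docs


def retrieve(
    box: FakeBox, query: str, box_file_ids: Optional[List[str]] = None
) -> List[Document]:
    # Mirrors _get_relevant_documents: file ids -> Box AI, else search.
    if box_file_ids:
        return box.ask_box_ai(query, box_file_ids=box_file_ids)
    return box.search_box(query)
```

The same retriever instance thus behaves as a search tool or a question-answering tool depending only on whether file ids were configured.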

View File

@ -1,19 +0,0 @@
"""Box API Utilities."""
from langchain_box.utilities.box import (
BoxAuth,
BoxAuthType,
BoxSearchOptions,
DocumentFiles,
SearchTypeFilter,
_BoxAPIWrapper,
)
__all__ = [
"BoxAuth",
"BoxAuthType",
"BoxSearchOptions",
"DocumentFiles",
"SearchTypeFilter",
"_BoxAPIWrapper",
]

View File

@ -1,875 +0,0 @@
"""Util that calls Box APIs."""
from enum import Enum
from typing import Any, Dict, List, Optional
import box_sdk_gen # type: ignore
import requests
from langchain_core.documents import Document
from langchain_core.utils import from_env
from pydantic import BaseModel, ConfigDict, Field, model_validator
from typing_extensions import Self
class DocumentFiles(Enum):
"""DocumentFiles(Enum).
An enum containing all of the supported extensions for files
Box considers Documents. These files should have text
representations.
"""
DOC = "doc"
DOCX = "docx"
GDOC = "gdoc"
GSHEET = "gsheet"
NUMBERS = "numbers"
ODS = "ods"
ODT = "odt"
PAGES = "pages"
PDF = "pdf"
RTF = "rtf"
WPD = "wpd"
XLS = "xls"
XLSM = "xlsm"
XLSX = "xlsx"
AS = "as"
AS3 = "as3"
ASM = "asm"
BAT = "bat"
C = "c"
CC = "cc"
CMAKE = "cmake"
CPP = "cpp"
CS = "cs"
CSS = "css"
CSV = "csv"
CXX = "cxx"
DIFF = "diff"
ERB = "erb"
GROOVY = "groovy"
H = "h"
HAML = "haml"
HH = "hh"
HTM = "htm"
HTML = "html"
JAVA = "java"
JS = "js"
JSON = "json"
LESS = "less"
LOG = "log"
M = "m"
MAKE = "make"
MD = "md"
ML = "ml"
MM = "mm"
MSG = "msg"
PHP = "php"
PL = "pl"
PROPERTIES = "properties"
PY = "py"
RB = "rb"
RST = "rst"
SASS = "sass"
SCALA = "scala"
SCM = "scm"
SCRIPT = "script"
SH = "sh"
SML = "sml"
SQL = "sql"
TXT = "txt"
VI = "vi"
VIM = "vim"
WEBDOC = "webdoc"
XHTML = "xhtml"
XLSB = "xlsb"
XML = "xml"
XSD = "xsd"
XSL = "xsl"
YAML = "yaml"
GSLIDE = "gslide"
GSLIDES = "gslides"
KEY = "key"
ODP = "odp"
PPT = "ppt"
PPTX = "pptx"
BOXNOTE = "boxnote"
class ImageFiles(Enum):
"""ImageFiles(Enum).
An enum containing all of the supported extensions for files
Box considers images.
"""
ARW = "arw"
BMP = "bmp"
CR2 = "cr2"
DCM = "dcm"
DICM = "dicm"
DICOM = "dicom"
DNG = "dng"
EPS = "eps"
EXR = "exr"
GIF = "gif"
HEIC = "heic"
INDD = "indd"
INDML = "indml"
INDT = "indt"
INX = "inx"
JPEG = "jpeg"
JPG = "jpg"
NEF = "nef"
PNG = "png"
SVG = "svg"
TIF = "tif"
TIFF = "tiff"
TGA = "tga"
SVS = "svs"
class BoxAuthType(Enum):
"""BoxAuthType(Enum).
An enum to tell BoxLoader how you wish to authenticate your Box connection.
Options are:
TOKEN - Use a developer token generated in the Box Developer Console.
Only recommended for development.
Provide ``box_developer_token``.
CCG - Client Credentials Grant.
Provide ``box_client_id``, ``box_client_secret``,
and ``box_enterprise_id`` or optionally ``box_user_id``.
JWT - Use JWT for authentication. Config should be stored on the file
system accessible to your app.
Provide ``box_jwt_path``. Optionally, provide ``box_user_id`` to
act as a specific user
"""
TOKEN = "token"
"""Use a developer token or a token retrieved from ``box-sdk-gen``"""
CCG = "ccg"
"""Use ``client_credentials`` type grant"""
JWT = "jwt"
"""Use JWT bearer token auth"""
class BoxAuth(BaseModel):
"""**BoxAuth.**
The ``langchain-box`` package offers some flexibility in authentication. The
most basic authentication method is by using a developer token. This can be
found in the `Box developer console <https://account.box.com/developers/console>`_
on the configuration screen. This token is purposely short-lived (1 hour) and is
intended for development. With this token, you can add it to your environment as
``BOX_DEVELOPER_TOKEN``, you can pass it directly to the loader, or you can use the
``BoxAuth`` authentication helper class.
`BoxAuth` supports the following authentication methods:
* **Token** either a developer token or any token generated through the Box SDK
* **JWT** with a service account
* **JWT** with a specified user
* **CCG** with a service account
* **CCG** with a specified user
.. note::
If using JWT authentication, you will need to download the configuration from
the Box developer console after generating your public/private key pair. Place
this file in your application directory structure somewhere. You will use the
path to this file when using the ``BoxAuth`` helper class. If you wish to use
OAuth2 with the authorization_code flow, please use ``BoxAuthType.TOKEN`` with
the token you have acquired.
For more information, learn about how to
`set up a Box application <https://developer.box.com/guides/getting-started/first-application/>`_,
and check out the
`Box authentication guide <https://developer.box.com/guides/authentication/select/>`_
for more about our different authentication options.
Simple implementation:
To instantiate, you must provide a ``langchain_box.utilities.BoxAuthType``.
BoxAuthType is an enum to tell BoxLoader how you wish to authenticate your
Box connection.
Options are:
TOKEN - Use a developer token generated in the Box Developer Console.
Only recommended for development.
Provide ``box_developer_token``.
CCG - Client Credentials Grant.
provide ``box_client_id``, ``box_client_secret``,
and ``box_enterprise_id`` or optionally ``box_user_id``.
JWT - Use JWT for authentication. Config should be stored on the file
system accessible to your app.
provide ``box_jwt_path``. Optionally, provide ``box_user_id`` to
act as a specific user
**Examples**:
**Token**
.. code-block:: python
from langchain_box.document_loaders import BoxLoader
from langchain_box.utilities import BoxAuth, BoxAuthType
auth = BoxAuth(
auth_type=BoxAuthType.TOKEN,
box_developer_token=box_developer_token
)
loader = BoxLoader(
box_auth=auth,
...
)
**JWT with a service account**
.. code-block:: python
from langchain_box.document_loaders import BoxLoader
from langchain_box.utilities import BoxAuth, BoxAuthType
auth = BoxAuth(
auth_type=BoxAuthType.JWT,
box_jwt_path=box_jwt_path
)
loader = BoxLoader(
box_auth=auth,
...
)
**JWT with a specified user**
.. code-block:: python
from langchain_box.document_loaders import BoxLoader
from langchain_box.utilities import BoxAuth, BoxAuthType
auth = BoxAuth(
auth_type=BoxAuthType.JWT,
box_jwt_path=box_jwt_path,
box_user_id=box_user_id
)
loader = BoxLoader(
box_auth=auth,
...
)
**CCG with a service account**
.. code-block:: python
from langchain_box.document_loaders import BoxLoader
from langchain_box.utilities import BoxAuth, BoxAuthType
auth = BoxAuth(
auth_type=BoxAuthType.CCG,
box_client_id=box_client_id,
box_client_secret=box_client_secret,
box_enterprise_id=box_enterprise_id
)
loader = BoxLoader(
box_auth=auth,
...
)
**CCG with a specified user**
.. code-block:: python
from langchain_box.document_loaders import BoxLoader
from langchain_box.utilities import BoxAuth, BoxAuthType
auth = BoxAuth(
auth_type=BoxAuthType.CCG,
box_client_id=box_client_id,
box_client_secret=box_client_secret,
box_user_id=box_user_id
)
loader = BoxLoader(
box_auth=auth,
...
)
"""
auth_type: BoxAuthType
"""``langchain_box.utilities.BoxAuthType``. Enum describing how to
authenticate against Box"""
box_developer_token: Optional[str] = Field(
default_factory=from_env("BOX_DEVELOPER_TOKEN", default=None)
)
""" If using ``BoxAuthType.TOKEN``, provide your token here"""
box_jwt_path: Optional[str] = Field(
default_factory=from_env("BOX_JWT_PATH", default=None)
)
"""If using ``BoxAuthType.JWT``, provide local path to your
JWT configuration file"""
box_client_id: Optional[str] = Field(
default_factory=from_env("BOX_CLIENT_ID", default=None)
)
"""If using ``BoxAuthType.CCG``, provide your app's client ID"""
box_client_secret: Optional[str] = Field(
default_factory=from_env("BOX_CLIENT_SECRET", default=None)
)
"""If using ``BoxAuthType.CCG``, provide your app's client secret"""
box_enterprise_id: Optional[str] = None
"""If using ``BoxAuthType.CCG``, provide your enterprise ID.
Only required if you are not sending ``box_user_id``"""
box_user_id: Optional[str] = None
"""If using ``BoxAuthType.CCG`` or ``BoxAuthType.JWT``, providing
``box_user_id`` will act on behalf of a specific user"""
_box_client: Optional[box_sdk_gen.BoxClient] = None
_custom_header: Dict = dict({"x-box-ai-library": "langchain"})
model_config = ConfigDict(
arbitrary_types_allowed=True,
use_enum_values=True,
extra="allow",
)
@model_validator(mode="after")
def validate_box_auth_inputs(self) -> Self:
"""Validate auth_type is set"""
if not self.auth_type:
raise ValueError("Auth type must be set.")
"""Validate that TOKEN auth type provides box_developer_token."""
if self.auth_type == "token" and not self.box_developer_token:
raise ValueError(f"{self.auth_type} requires box_developer_token to be set")
"""Validate that JWT auth type provides box_jwt_path."""
if self.auth_type == "jwt" and not self.box_jwt_path:
raise ValueError(f"{self.auth_type} requires box_jwt_path to be set")
"""Validate that CCG auth type provides box_client_id and
box_client_secret and either box_enterprise_id or box_user_id."""
if self.auth_type == "ccg":
if (
not self.box_client_id
or not self.box_client_secret
or (not self.box_enterprise_id and not self.box_user_id)
):
raise ValueError(
f"{self.auth_type} requires box_client_id, \
box_client_secret, and box_enterprise_id/box_user_id."
)
return self
def _authorize(self) -> None:
if self.auth_type == "token":
try:
auth = box_sdk_gen.BoxDeveloperTokenAuth(token=self.box_developer_token)
self._box_client = box_sdk_gen.BoxClient(auth=auth).with_extra_headers(
extra_headers=self._custom_header
)
except box_sdk_gen.BoxSDKError as bse:
raise RuntimeError(
f"Error getting client from developer token: {bse.message}"
)
except Exception as ex:
raise ValueError(
f"Invalid Box developer token. Please verify your \
token and try again.\n{ex}"
) from ex
elif self.auth_type == "jwt":
try:
jwt_config = box_sdk_gen.JWTConfig.from_config_file(
config_file_path=self.box_jwt_path
)
auth = box_sdk_gen.BoxJWTAuth(config=jwt_config)
self._box_client = box_sdk_gen.BoxClient(auth=auth).with_extra_headers(
extra_headers=self._custom_header
)
if self.box_user_id is not None:
user_auth = auth.with_user_subject(self.box_user_id)
self._box_client = box_sdk_gen.BoxClient(
auth=user_auth
).with_extra_headers(extra_headers=self._custom_header)
except box_sdk_gen.BoxSDKError as bse:
raise RuntimeError(
f"Error getting client from jwt token: {bse.message}"
)
except Exception as ex:
raise ValueError(
"Error authenticating. Please verify your JWT config \
and try again."
) from ex
elif self.auth_type == "ccg":
try:
if self.box_user_id is not None:
ccg_config = box_sdk_gen.CCGConfig(
client_id=self.box_client_id,
client_secret=self.box_client_secret,
user_id=self.box_user_id,
)
else:
ccg_config = box_sdk_gen.CCGConfig(
client_id=self.box_client_id,
client_secret=self.box_client_secret,
enterprise_id=self.box_enterprise_id,
)
auth = box_sdk_gen.BoxCCGAuth(config=ccg_config)
self._box_client = box_sdk_gen.BoxClient(auth=auth).with_extra_headers(
extra_headers=self._custom_header
)
except box_sdk_gen.BoxSDKError as bse:
raise RuntimeError(
f"Error getting client from ccg token: {bse.message}"
)
except Exception as ex:
raise ValueError(
"Error authenticating. Please verify you are providing a \
valid client id, secret and either a valid user ID or \
enterprise ID."
) from ex
else:
raise ValueError(
f"{self.auth_type} is not a valid auth_type. Value must be \
TOKEN, CCG, or JWT."
)
def get_client(self) -> box_sdk_gen.BoxClient:
"""Instantiate the Box SDK."""
if self._box_client is None:
self._authorize()
return self._box_client
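``get_client`` above authorizes lazily: the SDK client is built on first access and cached on the instance. A generic sketch of that pattern (``LazyClient`` is illustrative, not part of ``box-sdk-gen``):

```python
from typing import Callable, Generic, Optional, TypeVar

T = TypeVar("T")


class LazyClient(Generic[T]):
    """Build an expensive client once, on first use, then reuse it."""

    def __init__(self, factory: Callable[[], T]) -> None:
        self._factory = factory
        self._client: Optional[T] = None

    def get(self) -> T:
        # Mirrors BoxAuth.get_client: authorize only if not yet cached.
        if self._client is None:
            self._client = self._factory()
        return self._client
```

This keeps construction cheap and defers any network round-trip (token exchange, JWT assertion) until a caller actually needs the client.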
class SearchTypeFilter(Enum):
"""SearchTypeFilter.
Enum to limit what we search.
"""
NAME = "name"
"""The name of the item, as defined by its ``name`` field."""
DESCRIPTION = "description"
"""The description of the item, as defined by its ``description`` field."""
FILE_CONTENT = "file_content"
"""The actual content of the file."""
COMMENTS = "comments"
"""The content of any of the comments on a file or folder."""
TAGS = "tags"
"""Any tags that are applied to an item, as defined by its ``tags`` field."""
class BoxSearchOptions(BaseModel):
ancestor_folder_ids: Optional[List[str]] = None
"""Limits the search results to items within the given list of folders,
defined as a comma separated lists of folder IDs."""
search_type_filter: Optional[List[SearchTypeFilter]] = None
"""Limits the search results to any items that match the search query for a
specific part of the file, for example the file description.
Content types are defined as a comma separated lists of Box recognized
content types. The allowed content types are as follows. Default is all."""
created_date_range: Optional[List[str]] = None
"""Limits the search results to any items created within a given date range.
Date ranges are defined as comma separated RFC3339 timestamps.
If the start date is omitted (,2014-05-17T13:35:01-07:00) anything
created before the end date will be returned.
If the end date is omitted (2014-05-15T13:35:01-07:00,) the current
date will be used as the end date instead."""
file_extensions: Optional[List[DocumentFiles]] = None
"""Limits the search results to any files that match any of the provided
file extensions. This list is a comma-separated list of
``langchain_box.utilities.DocumentFiles`` entries"""
k: Optional[int] = 100
"""Defines the maximum number of items to return. Defaults to 100, maximum
is 200."""
size_range: Optional[List[int]] = None
"""Limits the search results to any items with a size within a given file
size range. This applied to files and folders.
Size ranges are defined as comma separated list of a lower and upper
byte size limit (inclusive).
The upper and lower bound can be omitted to create open ranges."""
updated_date_range: Optional[List[str]] = None
"""Limits the search results to any items updated within a given date range.
Date ranges are defined as comma separated RFC3339 timestamps.
If the start date is omitted (,2014-05-17T13:35:01-07:00) anything
updated before the end date will be returned.
If the end date is omitted (2014-05-15T13:35:01-07:00,) the current
date will be used as the end date instead."""
model_config = ConfigDict(
arbitrary_types_allowed=True,
use_enum_values=True,
extra="allow",
)
@model_validator(mode="after")
def validate_search_options(self) -> Self:
"""Validate k is between 1 and 200"""
if self.k > 200 or self.k < 1: # type: ignore[operator]
raise ValueError(
f"Invalid setting of k {self.k}. " "Value must be between 1 and 200."
)
"""Validate created_date_range start date is before end date"""
if self.created_date_range:
if (
self.created_date_range[0] is None # type: ignore[index]
or self.created_date_range[0] == "" # type: ignore[index]
or self.created_date_range[1] is None # type: ignore[index]
or self.created_date_range[1] == "" # type: ignore[index]
):
pass
else:
if (
self.created_date_range[0] # type: ignore[index]
> self.created_date_range[1] # type: ignore[index]
):
raise ValueError("Start date must be before end date.")
"""Validate updated_date_range start date is before end date"""
if self.updated_date_range:
if (
self.updated_date_range[0] is None # type: ignore[index]
or self.updated_date_range[0] == "" # type: ignore[index]
or self.updated_date_range[1] is None # type: ignore[index]
or self.updated_date_range[1] == "" # type: ignore[index]
):
pass
else:
if (
self.updated_date_range[0] # type: ignore[index]
> self.updated_date_range[1] # type: ignore[index]
):
raise ValueError("Start date must be before end date.")
return self
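The open-ended date-range convention validated above (``,2014-05-17T13:35:01-07:00`` for "before end", ``2014-05-15T13:35:01-07:00,`` for "after start") can be sketched with a small helper; ``parse_date_range`` and ``validate_range`` are illustrative names, not part of the package:

```python
from typing import Optional, Tuple


def parse_date_range(range_str: str) -> Tuple[Optional[str], Optional[str]]:
    """Split 'start,end' where either side may be empty (open-ended)."""
    start, _, end = range_str.partition(",")
    return (start or None, end or None)


def validate_range(pair: Tuple[Optional[str], Optional[str]]) -> None:
    """Reject ranges whose start falls after their end, as the validator does."""
    start, end = pair
    # RFC3339 timestamps with the same UTC offset compare correctly as strings.
    if start and end and start > end:
        raise ValueError("Start date must be before end date.")
```

Open-ended sides come back as ``None``, matching the validator's "pass through if either bound is missing" behavior.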
class _BoxAPIWrapper(BaseModel):
"""Wrapper for Box API."""
box_developer_token: Optional[str] = Field(
default_factory=from_env("BOX_DEVELOPER_TOKEN", default=None)
)
"""String containing the Box Developer Token generated in the developer console"""
box_auth: Optional[BoxAuth] = None
"""Configured langchain_box.utilities.BoxAuth object"""
character_limit: Optional[int] = -1
"""character_limit is an int that caps the number of characters to
return per document."""
box_search_options: Optional[BoxSearchOptions] = None
"""Search options to configure BoxRetriever to narrow search results."""
_box: Optional[box_sdk_gen.BoxClient]
model_config = ConfigDict(
arbitrary_types_allowed=True,
use_enum_values=True,
extra="allow",
)
@model_validator(mode="after")
def validate_box_api_inputs(self) -> Self:
self._box = None
"""Validate that TOKEN auth type provides box_developer_token."""
if not self.box_auth:
if not self.box_developer_token:
raise ValueError(
"You must configure either box_developer_token or box_auth"
)
else:
box_auth = self.box_auth
self._box = box_auth.get_client() # type: ignore[union-attr]
return self
def get_box_client(self) -> box_sdk_gen.BoxClient:
box_auth = BoxAuth(
auth_type=BoxAuthType.TOKEN, box_developer_token=self.box_developer_token
)
self._box = box_auth.get_client()
return self._box
def _do_request(self, url: str) -> Any:
try:
access_token = self._box.auth.retrieve_token().access_token # type: ignore[union-attr]
except box_sdk_gen.BoxSDKError as bse:
raise RuntimeError(f"Error getting client from jwt token: {bse.message}")
resp = requests.get(url, headers={"Authorization": f"Bearer {access_token}"})
resp.raise_for_status()
return resp.content
def _get_text_representation(self, file_id: str = "") -> tuple[str, str, str]:
try:
from box_sdk_gen import BoxAPIError, BoxSDKError
except ImportError:
raise ImportError("You must run `pip install box-sdk-gen`")
if self._box is None:
self.get_box_client()
try:
file = self._box.files.get_file_by_id( # type: ignore[union-attr]
file_id,
x_rep_hints="[extracted_text]",
fields=["name", "representations", "type"],
)
except BoxAPIError as bae:
raise RuntimeError(f"BoxAPIError: Error getting text rep: {bae.message}")
except BoxSDKError as bse:
raise RuntimeError(f"BoxSDKError: Error getting text rep: {bse.message}")
except Exception:
return None, None, None # type: ignore[return-value]
file_repr = file.representations.entries
if len(file_repr) <= 0:
return None, None, None # type: ignore[return-value]
for entry in file_repr:
if entry.representation == "extracted_text":
# If the file representation doesn't exist, calling
# info.url will generate text if possible
if entry.status.state == "none":
self._do_request(entry.info.url)
url = entry.content.url_template.replace("{+asset_path}", "")
file_name = file.name.replace(".", "_").replace(" ", "_")
try:
raw_content = self._do_request(url)
except requests.exceptions.HTTPError:
return None, None, None # type: ignore[return-value]
if (
self.character_limit is not None and self.character_limit > 0 # type: ignore[operator]
):
content = raw_content[: self.character_limit]
else:
content = raw_content
return file_name, content, url
return None, None, None # type: ignore[return-value]
def get_document_by_file_id(self, file_id: str) -> Optional[Document]:
"""Load a file from a Box id. Accepts file_id as str.
Returns `Document`"""
if self._box is None:
self.get_box_client()
file = self._box.files.get_file_by_id( # type: ignore[union-attr]
file_id, fields=["name", "type", "extension"]
)
if file.type == "file":
if hasattr(DocumentFiles, file.extension.upper()):
file_name, content, url = self._get_text_representation(file_id=file_id)
if file_name is None or content is None or url is None:
return None
metadata = {
"source": f"{url}",
"title": f"{file_name}",
}
return Document(page_content=content, metadata=metadata)
return None
return None
def get_folder_items(self, folder_id: str) -> box_sdk_gen.Items:
"""Get all the items in a folder. Accepts folder_id as str.
returns box_sdk_gen.Items"""
if self._box is None:
self.get_box_client()
try:
folder_contents = self._box.folders.get_folder_items( # type: ignore[union-attr]
folder_id, fields=["id", "type", "name"]
)
except box_sdk_gen.BoxAPIError as bae:
raise RuntimeError(
f"BoxAPIError: Error getting folder content: {bae.message}"
)
except box_sdk_gen.BoxSDKError as bse:
raise RuntimeError(
f"BoxSDKError: Error getting folder content: {bse.message}"
)
return folder_contents.entries
def search_box(self, query: str) -> List[Document]:
if self._box is None:
self.get_box_client()
files = []
try:
results = None
if self.box_search_options is None:
results = self._box.search.search_for_content( # type: ignore[union-attr]
query=query, fields=["id", "type", "extension"], type="file"
)
else:
results = self._box.search.search_for_content( # type: ignore[union-attr]
query=query,
fields=["id", "type", "extension"],
type="file",
ancestor_folder_ids=self.box_search_options.ancestor_folder_ids, # type: ignore[union-attr]
content_types=self.box_search_options.search_type_filter, # type: ignore[union-attr]
created_at_range=self.box_search_options.created_date_range, # type: ignore[union-attr]
file_extensions=self.box_search_options.file_extensions, # type: ignore[union-attr]
limit=self.box_search_options.k, # type: ignore[union-attr]
size_range=self.box_search_options.size_range, # type: ignore[union-attr]
updated_at_range=self.box_search_options.updated_date_range, # type: ignore[union-attr]
)
if results.entries is None or len(results.entries) <= 0:
return None # type: ignore[return-value]
for file in results.entries:
if (
file is not None
and file.type == "file"
and hasattr(DocumentFiles, file.extension.upper())
):
doc = self.get_document_by_file_id(file.id)
if doc is not None:
files.append(doc)
return files
except box_sdk_gen.BoxAPIError as bae:
raise RuntimeError(
f"BoxAPIError: Error getting search results: {bae.message}"
)
except box_sdk_gen.BoxSDKError as bse:
raise RuntimeError(
f"BoxSDKError: Error getting search results: {bse.message}"
)
def ask_box_ai(
self,
query: str,
box_file_ids: List[str],
answer: bool = True,
citations: bool = False,
) -> List[Document]:
if self._box is None:
self.get_box_client()
ai_mode = box_sdk_gen.CreateAiAskMode.SINGLE_ITEM_QA.value
if len(box_file_ids) > 1:
ai_mode = box_sdk_gen.CreateAiAskMode.MULTIPLE_ITEM_QA.value
elif len(box_file_ids) <= 0:
raise ValueError("BOX_AI_ASK requires at least one file ID")
items = []
for file_id in box_file_ids:
item = box_sdk_gen.AiItemBase(
id=file_id, type=box_sdk_gen.AiItemBaseTypeField.FILE.value
)
items.append(item)
try:
response = self._box.ai.create_ai_ask( # type: ignore[union-attr]
mode=ai_mode, prompt=query, items=items, include_citations=citations
)
except box_sdk_gen.BoxAPIError as bae:
raise RuntimeError(
f"BoxAPIError: Error getting Box AI result: {bae.message}"
)
except box_sdk_gen.BoxSDKError as bse:
raise RuntimeError(
f"BoxSDKError: Error getting Box AI result: {bse.message}"
)
docs = []
if answer:
content = response.answer
metadata = {"source": "Box AI", "title": f"Box AI {query}"}
document = Document(page_content=content, metadata=metadata)
docs.append(document)
if citations:
box_citations = response.citations
for citation in box_citations:
content = citation.content
file_name = citation.name
file_id = citation.id
file_type = citation.type.value
metadata = {
"source": f"Box AI {query}",
"file_name": file_name,
"file_id": file_id,
"file_type": file_type,
}
document = Document(page_content=content, metadata=metadata)
docs.append(document)
return docs

File diff suppressed because it is too large

View File

@ -1,84 +0,0 @@
[build-system]
requires = [ "poetry-core>=1.0.0",]
build-backend = "poetry.core.masonry.api"
[tool.poetry]
name = "langchain-box"
version = "0.2.3"
description = "An integration package connecting Box and LangChain"
authors = []
readme = "README.md"
repository = "https://github.com/langchain-ai/langchain"
license = "MIT"
[tool.mypy]
disallow_untyped_defs = "True"
[tool.poetry.urls]
"Source Code" = "https://github.com/langchain-ai/langchain/tree/master/libs/partners/box"
"Release Notes" = "https://github.com/langchain-ai/langchain/releases?q=tag%3A%22langchain-box%3D%3D0%22&expanded=true"
[tool.poetry.dependencies]
python = ">=3.9.0,<3.13"
langchain-core = "^0.3.15"
pydantic = "^2"
[tool.ruff.lint]
select = [ "E", "F", "I", "T201",]
[tool.coverage.run]
omit = [ "tests/*",]
[tool.pytest.ini_options]
markers = [ "compile: mark placeholder test used to compile integration tests without running them",]
asyncio_mode = "auto"
[tool.poetry.dependencies.box-sdk-gen]
extras = [ "jwt",]
version = "^1.5.0"
[tool.poetry.group.test]
optional = true
[tool.poetry.group.codespell]
optional = true
[tool.poetry.group.test_integration]
optional = true
[tool.poetry.group.lint]
optional = true
[tool.poetry.group.dev]
optional = true
[tool.poetry.group.test.dependencies]
pytest = "^7.4.3"
pytest_mock = "^3.14.0"
pytest-asyncio = "^0.23.2"
pytest-socket = "^0.7.0"
[tool.poetry.group.codespell.dependencies]
codespell = "^2.2.6"
[tool.poetry.group.test_integration.dependencies]
python-dotenv = "^1.0.1"
[tool.poetry.group.lint.dependencies]
ruff = "^0.1.8"
[tool.poetry.group.typing.dependencies]
mypy = "^1.7.1"
types-requests = "^2.32.0.20240712"
[tool.poetry.group.test.dependencies.langchain-core]
path = "../../core"
develop = true
[tool.poetry.group.dev.dependencies.langchain-core]
path = "../../core"
develop = true
[tool.poetry.group.typing.dependencies.langchain-core]
path = "../../core"
develop = true

View File

@ -1,17 +0,0 @@
import sys
import traceback
from importlib.machinery import SourceFileLoader
if __name__ == "__main__":
files = sys.argv[1:]
has_failure = False
for file in files:
try:
SourceFileLoader("x", file).load_module()
except Exception:
has_failure = True
print(file) # noqa: T201
traceback.print_exc()
print() # noqa: T201
sys.exit(1 if has_failure else 0)

View File

@ -1,18 +0,0 @@
#!/bin/bash
set -eu
# Initialize a variable to keep track of errors
errors=0
# make sure not importing from langchain, langchain_experimental, or langchain_community
git --no-pager grep '^from langchain\.' . && errors=$((errors+1))
git --no-pager grep '^from langchain_experimental\.' . && errors=$((errors+1))
git --no-pager grep '^from langchain_community\.' . && errors=$((errors+1))
# Decide on an exit status based on the errors
if [ "$errors" -gt 0 ]; then
exit 1
else
exit 0
fi

Some files were not shown because too many files have changed in this diff