mirror of https://github.com/hwchase17/langchain.git (synced 2026-02-16 18:24:31 +00:00)

Compare commits: 74 commits
| Author | SHA1 | Date |
|---|---|---|
| | ca137bfe62 | |
| | fa487fb62d | |
| | 053fb16a05 | |
| | 3672bbc71e | |
| | a02ad3d192 | |
| | 0c4054a7fc | |
| | 75517c3ea9 | |
| | ebf2e11bcb | |
| | e41e6ec6aa | |
| | 09769373b3 | |
| | 3fc27e7a95 | |
| | 8acfd677bc | |
| | af3789b9ed | |
| | a6896794ca | |
| | d40fd5a3ce | |
| | 116b758498 | |
| | 10996a2821 | |
| | 2aed07efb6 | |
| | 64dac1faf7 | |
| | 58768d8aef | |
| | d65da13299 | |
| | c14bd1fcfe | |
| | a1ccabf85d | |
| | 2104cf0d9a | |
| | 18c64aed6d | |
| | fc802d8f9f | |
| | b4d87c709c | |
| | 383bc8f2ef | |
| | 64261449b8 | |
| | 8b8d90bea5 | |
| | 095f4a7c28 | |
| | ddaba21e83 | |
| | 8e4396bb32 | |
| | 6794422b85 | |
| | 2ef9465893 | |
| | 0355da3159 | |
| | 6c18073fe6 | |
| | 38581f31dd | |
| | 8246b5b660 | |
| | 668c084520 | |
| | cc076ed891 | |
| | 6d71bb83de | |
| | f7d1b1fbb1 | |
| | 98bfd57a76 | |
| | 427d2d6397 | |
| | 50a12a7ee5 | |
| | 72a0f425ec | |
| | 22535eb4b3 | |
| | 5da986c3f6 | |
| | 6d449df8bb | |
| | 3f4d27fe21 | |
| | 59407338dd | |
| | a1519af513 | |
| | b61ce9178c | |
| | 9165cde538 | |
| | 2c0e8dce0d | |
| | 491f63ca82 | |
| | 587c213760 | |
| | 98c3bbbaf0 | |
| | d3072e2d2e | |
| | b49372595e | |
| | 16664d3b68 | |
| | ea8f2a05ba | |
| | 2df05f6f6a | |
| | b1c7de98f5 | |
| | 96bf8262e2 | |
| | 15103b0520 | |
| | 686a6b754c | |
| | 12d370a55a | |
| | 5a4c0c0816 | |
| | e2dc36b126 | |
| | c133eff6c8 | |
| | 1892a67eef | |
| | 2ab2cab203 | |
2 .github/PULL_REQUEST_TEMPLATE.md vendored
```diff
@@ -14,7 +14,7 @@ Thank you for contributing to LangChain! Follow these steps to mark your pull re
 - [ ] **PR message**: ***Delete this entire checklist*** and replace with
     - **Description:** a description of the change. Include a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/using-issues/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword) if applicable to a relevant issue.
-    - **Issue:** the issue # it fixes, if applicable
+    - **Issue:** the issue # it fixes, if applicable (e.g. Fixes #123)
     - **Dependencies:** any dependencies required for this change
     - **Twitter handle:** if your PR gets announced, and you'd like a mention, we'll gladly shout you out!
```
151 .github/copilot-instructions.md vendored Normal file
### 1. Avoid Breaking Changes (Stable Public Interfaces)

* Carefully preserve **function signatures**, argument positions, and names for any exported/public methods.
* Be cautious when **renaming**, **removing**, or **reordering** arguments — even small changes can break downstream consumers.
* Use keyword-only arguments or clearly mark experimental features to isolate unstable APIs (see the sketch below the examples).

Bad:

```python
def get_user(id, verbose=False):  # Changed from `user_id`
    ...
```

Good:

```python
def get_user(user_id: str, verbose: bool = False):  # Maintains stable interface
    ...
```
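One way to apply the keyword-only suggestion: a bare `*` in the signature forces flags to be passed by name, so they can later be renamed or reordered without breaking positional callers. A minimal sketch (the function body and return shape are illustrative, not from this repo):

```python
def get_user(user_id: str, *, verbose: bool = False) -> dict:
    """Fetch a user record; parameters after `*` are keyword-only."""
    return {"id": user_id, "verbose": verbose}


get_user("u-123", verbose=True)  # OK
# get_user("u-123", True)  # TypeError: `verbose` cannot be passed positionally
```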
🧠 *Ask yourself:* “Would this change break someone's code if they used it last week?”

---
### 2. Simplify Code and Use Clear Variable Names

* Prefer descriptive, **self-explanatory variable names**. Avoid overly short or cryptic identifiers.
* Break up overly long or deeply nested functions for **readability and maintainability**.
* Avoid unnecessary abstraction or premature optimization.
* All generated Python code must include type hints and return types.

Bad:

```python
def p(u, d):
    return [x for x in u if x not in d]
```

Good:

```python
from typing import List, Set


def filter_unknown_users(users: List[str], known_users: Set[str]) -> List[str]:
    return [user for user in users if user not in known_users]
```

---
### 3. Ensure Unit Tests Cover New and Updated Functionality

* Every new feature or bugfix should be **covered by a unit test**.
* Test edge cases and failure conditions.
* Use `pytest`, `unittest`, or the project's existing framework consistently (a minimal sketch follows the checklist).

Checklist:

* [ ] Does the test suite fail if your new logic is broken?
* [ ] Are all expected behaviors exercised (happy path, invalid input, etc.)?
* [ ] Do tests use fixtures or mocks where needed?
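A minimal `pytest` sketch of that checklist, reusing the illustrative `filter_unknown_users` helper from section 2 (the `users` module name is an assumption, not repo code):

```python
import pytest

from users import filter_unknown_users  # hypothetical module for this sketch


def test_filters_out_known_users() -> None:
    assert filter_unknown_users(["a", "b"], {"b"}) == ["a"]


def test_empty_input_returns_empty_list() -> None:
    assert filter_unknown_users([], {"b"}) == []


def test_rejects_non_iterable_input() -> None:
    with pytest.raises(TypeError):
        filter_unknown_users(None, {"b"})  # invalid input should fail loudly
```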
---
### 4. Look for Suspicious or Risky Code

* Watch out for:
  * Use of `eval()`, `exec()`, or `pickle` on user-controlled input.
  * Silent failure modes (`except: pass`).
  * Unreachable code or commented-out blocks.
  * Race conditions or resource leaks (file handles, sockets, threads).

Bad:

```python
def load_config(path):
    with open(path) as f:
        return eval(f.read())  # ⚠️ Never eval config
```

Good:

```python
import json


def load_config(path: str) -> dict:
    with open(path) as f:
        return json.load(f)
```
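The `except: pass` bullet benefits from the same Bad/Good treatment; a minimal sketch (the function and logger names are illustrative):

```python
import logging

logger = logging.getLogger(__name__)


def parse_port_bad(raw: str) -> int:
    try:
        return int(raw)
    except Exception:  # Swallows every error and hides the bug
        pass
    return 0


def parse_port_good(raw: str) -> int:
    try:
        return int(raw)
    except ValueError:  # Catch only what you expect, and leave a trace
        logger.warning("Invalid port %r, falling back to 0", raw)
        return 0
```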
---
### 5. Use Google-Style Docstrings (with Args section)

* All public functions should include a **Google-style docstring**.
* Include an `Args:` section where relevant.
* Types should NOT be written in the docstring — use type hints instead.

Bad:

```python
def send_email(to, msg):
    """Send an email to a recipient."""
```

Good:

```python
def send_email(to: str, msg: str) -> None:
    """Sends an email to a recipient.

    Args:
        to: The email address of the recipient.
        msg: The message body.
    """
```

📌 *Tip:* Keep descriptions concise but clear. Only document return values if non-obvious.
---
### 6. Propose Better Designs When Applicable

* If there's a **cleaner**, **more scalable**, or **simpler** design, highlight it.
* Suggest improvements, even if they require some refactoring — especially if the new code would:
  * Reduce duplication
  * Make unit testing easier
  * Improve separation of concerns
  * Add clarity without adding complexity

Instead of:

```python
def save(data, db_conn):
    # manually serializes fields
    ...
```

You might suggest:

```python
# Suggest using dataclasses or Pydantic for automatic serialization and validation
```
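Concretely, the dataclass route might look like this sketch (the `Record` shape and table are illustrative assumptions, not code from this repo):

```python
import json
import sqlite3
from dataclasses import asdict, dataclass


@dataclass
class Record:
    name: str
    score: int


def save(record: Record, db_conn: sqlite3.Connection) -> None:
    # asdict() replaces the hand-written field-by-field serialization
    db_conn.execute(
        "INSERT INTO records (payload) VALUES (?)",
        (json.dumps(asdict(record)),),
    )
```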
### 7. Misc

* When suggesting package installation commands, use `uv pip install`, as this project uses `uv`.
* When creating tools for agents, use the `@tool` decorator from `langchain_core.tools`. The tool's docstring serves as its functional description for the agent (see the sketch below).
* Avoid suggesting deprecated components, such as the legacy `LLMChain`.
* We use Conventional Commits format for pull request titles. Example PR titles:
  * feat(core): add multi-tenant support
  * fix(cli): resolve flag parsing error
  * docs: update API usage examples
  * docs(openai): update API usage examples
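A minimal sketch of such a tool, following the `langchain_core` pattern (the `multiply` tool itself is illustrative):

```python
from langchain_core.tools import tool


@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers and return the product."""
    # The docstring above is what the agent sees when deciding to call this tool.
    return a * b


print(multiply.invoke({"a": 6, "b": 7}))  # 42
```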
6 .github/workflows/pr_lint.yml vendored
```diff
@@ -99,14 +99,10 @@ jobs:
           prompty
           qdrant
           xai
+          infra
         requireScope: false
         disallowScopes: |
           release
           [A-Z]+
-        subjectPattern: ^(?![A-Z]).+$
-        subjectPatternError: |
-          The subject "{subject}" found in the pull request title "{title}"
-          didn't match the configured pattern. Please ensure that the subject
-          doesn't start with an uppercase character.
         ignoreLabels: |
           ignore-lint-pr-title
```
```diff
@@ -31,15 +31,13 @@ LangChain is partnered with [huntr by Protect AI](https://huntr.com/) to provide
 a bounty program for our open source projects.
 
 Please report security vulnerabilities associated with the LangChain
-open source projects by visiting the following link:
-
-[https://huntr.com/bounties/disclose/](https://huntr.com/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Flangchain-ai%2Flangchain&validSearch=true)
+open source projects [here](https://huntr.com/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Flangchain-ai%2Flangchain&validSearch=true).
 
 Before reporting a vulnerability, please review:
 
 1) In-Scope Targets and Out-of-Scope Targets below.
 2) The [langchain-ai/langchain](https://python.langchain.com/docs/contributing/repo_structure) monorepo structure.
-3) The [Best practices](#best-practices) above to
+3) The [Best Practices](#best-practices) above to
    understand what we consider to be a security vulnerability vs. developer
    responsibility.
@@ -64,7 +62,7 @@ All out of scope targets defined by huntr as well as:
   bounties. This includes the following directories
   - libs/langchain/langchain/tools
   - libs/community/langchain_community/tools
-  - Please review the [best practices](#best-practices)
+  - Please review the [Best Practices](#best-practices)
   for more details, but generally tools interact with the real world. Developers are
   expected to understand the security implications of their code and are responsible
   for the security of their tools.
```
File diff suppressed because one or more lines are too long
```diff
@@ -20,8 +20,7 @@ LangChain is a framework that consists of a number of packages.
 
 This package contains base abstractions for different components and ways to compose them together.
 The interfaces for core components like chat models, vector stores, tools and more are defined here.
-No third-party integrations are defined here.
-The dependencies are very lightweight.
+**No third-party integrations are defined here.** The dependencies are kept purposefully very lightweight.
 
 ## langchain
```
```diff
@@ -25,7 +25,7 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "%pip install -qU langchain>=0.2.8 langchain-openai langchain-anthropic langchain-google-vertexai"
+    "%pip install -qU langchain langchain-openai langchain-anthropic langchain-google-genai"
    ]
   },
   {
@@ -38,7 +38,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 2,
+   "execution_count": 5,
    "id": "79e14913-803c-4382-9009-5c6af3d75d35",
    "metadata": {
     "execution": {
@@ -49,38 +49,15 @@
     }
    },
    "outputs": [
-    {
-     "name": "stderr",
-     "output_type": "stream",
-     "text": [
-      "/var/folders/4j/2rz3865x6qg07tx43146py8h0000gn/T/ipykernel_95293/571506279.py:4: LangChainBetaWarning: The function `init_chat_model` is in beta. It is actively being worked on, so the API may change.\n",
-      "  gpt_4o = init_chat_model(\"gpt-4o\", model_provider=\"openai\", temperature=0)\n"
-     ]
-    },
-    {
-     "name": "stdout",
-     "output_type": "stream",
-     "text": [
-      "GPT-4o: I'm an AI created by OpenAI, and I don't have a personal name. How can I assist you today?\n",
-      "\n"
-     ]
-    },
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "GPT-4o: I’m called ChatGPT. How can I assist you today?\n",
+      "\n",
+      "Claude Opus: My name is Claude. It's nice to meet you!\n",
+      "\n"
+     ]
+    },
     {
      "name": "stdout",
      "output_type": "stream",
      "text": [
-      "Gemini 1.5: I am a large language model, trained by Google. \n",
-      "\n",
-      "I don't have a name like a person does. You can call me Bard if you like! 😊 \n",
+      "Gemini 2.5: I do not have a name. I am a large language model, trained by Google.\n",
       "\n"
      ]
     }
@@ -88,6 +65,10 @@
   "source": [
    "from langchain.chat_models import init_chat_model\n",
    "\n",
+   "# Don't forget to set your environment variables for the API keys of the respective providers!\n",
+   "# For example, you can set them in your terminal or in a .env file:\n",
+   "# export OPENAI_API_KEY=\"your_openai_api_key\"\n",
+   "\n",
    "# Returns a langchain_openai.ChatOpenAI instance.\n",
    "gpt_4o = init_chat_model(\"gpt-4o\", model_provider=\"openai\", temperature=0)\n",
    "# Returns a langchain_anthropic.ChatAnthropic instance.\n",
@@ -96,13 +77,13 @@
    ")\n",
    "# Returns a langchain_google_vertexai.ChatVertexAI instance.\n",
    "gemini_15 = init_chat_model(\n",
-   "    \"gemini-1.5-pro\", model_provider=\"google_vertexai\", temperature=0\n",
+   "    \"gemini-2.5-pro\", model_provider=\"google_genai\", temperature=0\n",
    ")\n",
    "\n",
    "# Since all model integrations implement the ChatModel interface, you can use them in the same way.\n",
    "print(\"GPT-4o: \" + gpt_4o.invoke(\"what's your name\").content + \"\\n\")\n",
    "print(\"Claude Opus: \" + claude_opus.invoke(\"what's your name\").content + \"\\n\")\n",
-   "print(\"Gemini 1.5: \" + gemini_15.invoke(\"what's your name\").content + \"\\n\")"
+   "print(\"Gemini 2.5: \" + gemini_15.invoke(\"what's your name\").content + \"\\n\")"
   ]
  },
  {
@@ -117,7 +98,7 @@
  },
  {
   "cell_type": "code",
-  "execution_count": 3,
+  "execution_count": null,
   "id": "0378ccc6-95bc-4d50-be50-fccc193f0a71",
   "metadata": {
    "execution": {
@@ -131,7 +112,7 @@
   "source": [
    "gpt_4o = init_chat_model(\"gpt-4o\", temperature=0)\n",
    "claude_opus = init_chat_model(\"claude-3-opus-20240229\", temperature=0)\n",
-   "gemini_15 = init_chat_model(\"gemini-1.5-pro\", temperature=0)"
+   "gemini_15 = init_chat_model(\"gemini-2.5-pro\", temperature=0)"
   ]
  },
  {
@@ -146,7 +127,7 @@
  },
  {
   "cell_type": "code",
-  "execution_count": 4,
+  "execution_count": 7,
   "id": "6c037f27-12d7-4e83-811e-4245c0e3ba58",
   "metadata": {
    "execution": {
@@ -160,10 +141,10 @@
   {
    "data": {
     "text/plain": [
-     "AIMessage(content=\"I'm an AI created by OpenAI, and I don't have a personal name. How can I assist you today?\", additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 23, 'prompt_tokens': 11, 'total_tokens': 34}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_25624ae3a5', 'finish_reason': 'stop', 'logprobs': None}, id='run-b41df187-4627-490d-af3c-1c96282d3eb0-0', usage_metadata={'input_tokens': 11, 'output_tokens': 23, 'total_tokens': 34})"
+     "AIMessage(content='I’m called ChatGPT. How can I assist you today?', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 13, 'prompt_tokens': 11, 'total_tokens': 24, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-2024-08-06', 'system_fingerprint': 'fp_07871e2ad8', 'id': 'chatcmpl-BwCyyBpMqn96KED6zPhLm4k9SQMiQ', 'service_tier': 'default', 'finish_reason': 'stop', 'logprobs': None}, id='run--fada10c3-4128-406c-b83d-a850d16b365f-0', usage_metadata={'input_tokens': 11, 'output_tokens': 13, 'total_tokens': 24, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}})"
    ]
   },
-  "execution_count": 4,
+  "execution_count": 7,
   "metadata": {},
   "output_type": "execute_result"
  }
@@ -178,7 +159,7 @@
  },
  {
   "cell_type": "code",
-  "execution_count": 5,
+  "execution_count": 8,
   "id": "321e3036-abd2-4e1f-bcc6-606efd036954",
   "metadata": {
    "execution": {
@@ -192,10 +173,10 @@
   {
    "data": {
     "text/plain": [
-     "AIMessage(content=\"My name is Claude. It's nice to meet you!\", additional_kwargs={}, response_metadata={'id': 'msg_01Fx9P74A7syoFkwE73CdMMY', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 11, 'output_tokens': 15}}, id='run-a0fd2bbd-3b7e-46bf-8d69-a48c7e60b03c-0', usage_metadata={'input_tokens': 11, 'output_tokens': 15, 'total_tokens': 26})"
+     "AIMessage(content=\"My name is Claude. It's nice to meet you!\", additional_kwargs={}, response_metadata={'id': 'msg_01VDGrG9D6yefanbBG9zPJrc', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 11, 'output_tokens': 15, 'server_tool_use': None, 'service_tier': 'standard'}, 'model_name': 'claude-3-5-sonnet-20240620'}, id='run--f0156087-debf-4b4b-9aaa-f3328a81ef92-0', usage_metadata={'input_tokens': 11, 'output_tokens': 15, 'total_tokens': 26, 'input_token_details': {'cache_read': 0, 'cache_creation': 0}})"
    ]
   },
-  "execution_count": 5,
+  "execution_count": 8,
   "metadata": {},
   "output_type": "execute_result"
  }
@@ -394,9 +375,9 @@
  ],
  "metadata": {
   "kernelspec": {
-   "display_name": "poetry-venv-2",
+   "display_name": "langchain",
    "language": "python",
-   "name": "poetry-venv-2"
+   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
@@ -408,7 +389,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-   "version": "3.11.9"
+   "version": "3.10.16"
  }
 },
 "nbformat": 4,
```
```diff
@@ -18,7 +18,7 @@
    "\n",
    "Wrapping your LLM with the standard [`BaseChatModel`](https://python.langchain.com/api_reference/core/language_models/langchain_core.language_models.chat_models.BaseChatModel.html) interface allow you to use your LLM in existing LangChain programs with minimal code modifications!\n",
    "\n",
-   "As an bonus, your LLM will automatically become a LangChain [Runnable](/docs/concepts/runnables/) and will benefit from some optimizations out of the box (e.g., batch via a threadpool), async support, the `astream_events` API, etc.\n",
+   "As a bonus, your LLM will automatically become a LangChain [Runnable](/docs/concepts/runnables/) and will benefit from some optimizations out of the box (e.g., batch via a threadpool), async support, the `astream_events` API, etc.\n",
    "\n",
    "## Inputs and outputs\n",
    "\n",
```
```diff
@@ -34,6 +34,8 @@ These are the core building blocks you can use when building applications.
 [Chat Models](/docs/concepts/chat_models) are newer forms of language models that take messages in and output a message.
 See [supported integrations](/docs/integrations/chat/) for details on getting started with chat models from a specific provider.
 
+- [How to: init any model in one line](/docs/how_to/chat_models_universal_init/)
+- [How to: work with local models](/docs/how_to/local_llms)
 - [How to: do function/tool calling](/docs/how_to/tool_calling)
 - [How to: get models to return structured output](/docs/how_to/structured_output)
 - [How to: cache model responses](/docs/how_to/chat_model_caching)
@@ -48,8 +50,6 @@ See [supported integrations](/docs/integrations/chat/) for details on getting st
 - [How to: few shot prompt tool behavior](/docs/how_to/tools_few_shot)
 - [How to: bind model-specific formatted tools](/docs/how_to/tools_model_specific)
 - [How to: force a specific tool call](/docs/how_to/tool_choice)
-- [How to: work with local models](/docs/how_to/local_llms)
-- [How to: init any model in one line](/docs/how_to/chat_models_universal_init/)
 - [How to: pass multimodal data directly to models](/docs/how_to/multimodal_inputs/)
 
 ### Messages
```
```diff
@@ -362,14 +362,14 @@ relevant to LangChain below:
 Evaluating performance is a vital part of building LLM-powered applications.
 LangSmith helps with every step of the process from creating a dataset to defining metrics to running evaluators.
 
-To learn more, check out the [LangSmith evaluation how-to guides](https://docs.smith.langchain.com/how_to_guides#evaluation).
+To learn more, check out the [LangSmith evaluation how-to guides](https://docs.smith.langchain.com/evaluation/how_to_guides).
 
 ### Tracing
 <span data-heading-keywords="trace,tracing"></span>
 
 Tracing gives you observability inside your chains and agents, and is vital in diagnosing issues.
 
-- [How to: trace with LangChain](https://docs.smith.langchain.com/how_to_guides/tracing/trace_with_langchain)
-- [How to: add metadata and tags to traces](https://docs.smith.langchain.com/how_to_guides/tracing/trace_with_langchain#add-metadata-and-tags-to-traces)
+- [How to: trace with LangChain](https://docs.smith.langchain.com/observability/how_to_guides/trace_with_langchain)
+- [How to: add metadata and tags to traces](https://docs.smith.langchain.com/observability/how_to_guides/add_metadata_tags)
 
-You can see general tracing-related how-tos [in this section of the LangSmith docs](https://docs.smith.langchain.com/how_to_guides/tracing).
+You can see general tracing-related how-tos [in this section of the LangSmith docs](https://docs.smith.langchain.com/observability/how_to_guides#tracing-configuration).
```
@@ -13,15 +13,15 @@
|
||||
"\n",
|
||||
"This has at least two important benefits:\n",
|
||||
"\n",
|
||||
"1. `Privacy`: Your data is not sent to a third party, and it is not subject to the terms of service of a commercial service\n",
|
||||
"2. `Cost`: There is no inference fee, which is important for token-intensive applications (e.g., [long-running simulations](https://twitter.com/RLanceMartin/status/1691097659262820352?s=20), summarization)\n",
|
||||
"1. **Privacy**: Your data is not sent to a third party, and it is not subject to the terms of service of a commercial service\n",
|
||||
"2. **Cost**: There is no inference fee, which is important for token-intensive applications (e.g., [long-running simulations](https://twitter.com/RLanceMartin/status/1691097659262820352?s=20), summarization)\n",
|
||||
"\n",
|
||||
"## Overview\n",
|
||||
"\n",
|
||||
"Running an LLM locally requires a few things:\n",
|
||||
"\n",
|
||||
"1. `Open-source LLM`: An open-source LLM that can be freely modified and shared \n",
|
||||
"2. `Inference`: Ability to run this LLM on your device w/ acceptable latency\n",
|
||||
"1. **Open-source LLM**: An open-source LLM that can be freely modified and shared \n",
|
||||
"2. **Inference**: Ability to run this LLM on your device w/ acceptable latency\n",
|
||||
"\n",
|
||||
"### Open-source LLMs\n",
|
||||
"\n",
|
||||
@@ -29,8 +29,8 @@
|
||||
"\n",
|
||||
"These LLMs can be assessed across at least two dimensions (see figure):\n",
|
||||
" \n",
|
||||
"1. `Base model`: What is the base-model and how was it trained?\n",
|
||||
"2. `Fine-tuning approach`: Was the base-model fine-tuned and, if so, what [set of instructions](https://cameronrwolfe.substack.com/p/beyond-llama-the-power-of-open-llms#%C2%A7alpaca-an-instruction-following-llama-model) was used?\n",
|
||||
"1. **Base model**: What is the base-model and how was it trained?\n",
|
||||
"2. **Fine-tuning approach**: Was the base-model fine-tuned and, if so, what [set of instructions](https://cameronrwolfe.substack.com/p/beyond-llama-the-power-of-open-llms#%C2%A7alpaca-an-instruction-following-llama-model) was used?\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"\n",
|
||||
@@ -51,8 +51,8 @@
|
||||
"\n",
|
||||
"In general, these frameworks will do a few things:\n",
|
||||
"\n",
|
||||
"1. `Quantization`: Reduce the memory footprint of the raw model weights\n",
|
||||
"2. `Efficient implementation for inference`: Support inference on consumer hardware (e.g., CPU or laptop GPU)\n",
|
||||
"1. **Quantization**: Reduce the memory footprint of the raw model weights\n",
|
||||
"2. **Efficient implementation for inference**: Support inference on consumer hardware (e.g., CPU or laptop GPU)\n",
|
||||
"\n",
|
||||
"In particular, see [this excellent post](https://finbarr.ca/how-is-llama-cpp-possible/) on the importance of quantization.\n",
|
||||
"\n",
|
||||
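As a concrete illustration of such a framework used from LangChain, here is a minimal sketch of the llama.cpp wrapper loading a quantized GGUF model (the model path is a placeholder for a file you have downloaded locally):

```python
from langchain_community.llms import LlamaCpp

# Load a locally stored, quantized GGUF model; llama.cpp performs the
# memory-efficient inference on consumer hardware described above.
llm = LlamaCpp(
    model_path="/path/to/model.gguf",  # placeholder path
    n_ctx=2048,  # context window size
    n_gpu_layers=-1,  # offload all layers to a GPU if one is available
)

print(llm.invoke("Name one benefit of running an LLM locally."))
```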
@@ -679,11 +679,17 @@
|
||||
"\n",
|
||||
"In general, use cases for local LLMs can be driven by at least two factors:\n",
|
||||
"\n",
|
||||
"* `Privacy`: private data (e.g., journals, etc) that a user does not want to share \n",
|
||||
"* `Cost`: text preprocessing (extraction/tagging), summarization, and agent simulations are token-use-intensive tasks\n",
|
||||
"* **Privacy**: private data (e.g., journals, etc) that a user does not want to share \n",
|
||||
"* **Cost**: text preprocessing (extraction/tagging), summarization, and agent simulations are token-use-intensive tasks\n",
|
||||
"\n",
|
||||
"In addition, [here](https://blog.langchain.dev/using-langsmith-to-support-fine-tuning-of-open-source-llms/) is an overview on fine-tuning, which can utilize open-source LLMs."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "14c2c170",
|
||||
"metadata": {},
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
|
||||
@@ -237,6 +237,157 @@
|
||||
" print(chunk.text(), end=\"|\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "a009400a",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Extended Thinking \n",
|
||||
"\n",
|
||||
"This guide focuses on implementing Extended Thinking using AWS Bedrock with LangChain's `ChatBedrockConverse` integration.\n",
|
||||
"\n",
|
||||
"### Supported Models\n",
|
||||
"\n",
|
||||
"Extended Thinking is available for the following Claude models on AWS Bedrock:\n",
|
||||
"\n",
|
||||
"| Model | Model ID |\n",
|
||||
"|-------|----------|\n",
|
||||
"| **Claude Opus 4** | `anthropic.claude-opus-4-20250514-v1:0` |\n",
|
||||
"| **Claude Sonnet 4** | `anthropic.claude-sonnet-4-20250514-v1:0` |\n",
|
||||
"| **Claude 3.7 Sonnet** | `us.anthropic.claude-3-7-sonnet-20250219-v1:0` |\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "abc790ca",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"AIMessage(content=[{'type': 'reasoning_content', 'reasoning_content': {'text': 'The user wants me to translate \"I love programming\" from English to French.\\n\\n\"I love\" translates to \"J\\'aime\" or \"J\\'adore\" in French\\n\"Programming\" translates to \"la programmation\" in French\\n\\nSo the translation would be \"J\\'aime la programmation\" or \"J\\'adore la programmation\"\\n\\nBoth are correct, but \"J\\'aime\" is more commonly used for expressing love/liking something.', 'signature': 'EpgECkgIBRABGAIqQDub6nRpiusjbxZONXVlGXg5ZjUY1Eka1Yp4oBBHmRqGjId+StTBPuwD3CXLyb2rUDRhSc3hTpTM4krVqlFZrIsSDI/WLa1mu38DDqt1HRoMUjm+jF+03MZFD+WQIjBZtHaYiqgY0JQgU0NdXDwwBSZX44gXwuX9EDekh12VM1ysq+WxVtkp0WMU0dKCJo4q/QKpguFFlZtEZjF9PftzOgTIyy+1H5pY+Dsb2pnrGtfAgwTR7PuZ/d8ibY0A8ywjVEZtGm+PtcnCJiK53BWxhGYOtxnfN/RRKtuZhvPQj+QQOWeRWqH+GcbeISCgyTYn5WG75fmVL707byjQZ3IuhMfyZWmiTFE2fc4Jn/bxX7OsU+DbTWv2K1a+g7eW+dvQwYzCBO1hfEn4699/CHII8UAcHh1L3bnxOWGKkeVQ0KMfgfwVb0vuGG4QBYKIDs87QL414i69D68DxqCTZAHK4lMA6Xs7zW+m0MMCct4iHRnJI8kat1mlBEpMz6NRo9KacZJXpLJxofIU4ho7R5/QHccdni0IidNkUtrLBSB3toNJoQEcStts2UR67NHTxn47zk1/hi4v4Ahtw9OEQFONaH6XaG1wjpqEdjQ8/Tmg9eB6ZLoQ4sQfhcMF8Uo3hHbBY8jA3jZ+9pa9VbuVbO6Eup8NX3XXZm2nk50OMWX7hBwgBmlZbEew6pWFu7+13EkYAQ=='}}, {'type': 'text', 'text': \"J'aime la programmation.\"}], additional_kwargs={}, response_metadata={'ResponseMetadata': {'RequestId': '169ca92f-19c9-480c-9fc3-4e5284507e67', 'HTTPStatusCode': 200, 'HTTPHeaders': {'date': 'Tue, 22 Jul 2025 04:40:22 GMT', 'content-type': 'application/json', 'content-length': '1498', 'connection': 'keep-alive', 'x-amzn-requestid': '169ca92f-19c9-480c-9fc3-4e5284507e67'}, 'RetryAttempts': 0}, 'stopReason': 'end_turn', 'metrics': {'latencyMs': [2839]}, 'model_name': 'us.anthropic.claude-sonnet-4-20250514-v1:0'}, id='run--42e05e5d-ba86-4dce-9e29-2a4ba32c5804-0', usage_metadata={'input_tokens': 58, 'output_tokens': 122, 'total_tokens': 180, 'input_token_details': {'cache_creation': 0, 'cache_read': 0}})"
|
||||
]
|
||||
},
|
||||
"execution_count": 3,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from langchain_aws import ChatBedrockConverse\n",
|
||||
"\n",
|
||||
"llm = ChatBedrockConverse(\n",
|
||||
" model_id=\"us.anthropic.claude-sonnet-4-20250514-v1:0\",\n",
|
||||
" region_name=\"us-west-2\",\n",
|
||||
" max_tokens=4096,\n",
|
||||
" additional_model_request_fields={\n",
|
||||
" \"thinking\": {\"type\": \"enabled\", \"budget_tokens\": 1024},\n",
|
||||
" },\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"ai_msg = llm.invoke(messages)\n",
|
||||
"ai_msg"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"id": "7fb27b941602401d91542211134fc71a",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"[{'type': 'reasoning_content', 'reasoning_content': {'text': 'The user wants me to translate \"I love programming\" from English to French.\\n\\n\"I love\" translates to \"J\\'aime\" or \"J\\'adore\" in French\\n\"Programming\" translates to \"la programmation\" in French\\n\\nSo the translation would be \"J\\'aime la programmation\" or \"J\\'adore la programmation\"\\n\\nBoth are correct, but \"J\\'aime\" is more commonly used for expressing love/liking something.', 'signature': 'EpgECkgIBRABGAIqQDub6nRpiusjbxZONXVlGXg5ZjUY1Eka1Yp4oBBHmRqGjId+StTBPuwD3CXLyb2rUDRhSc3hTpTM4krVqlFZrIsSDI/WLa1mu38DDqt1HRoMUjm+jF+03MZFD+WQIjBZtHaYiqgY0JQgU0NdXDwwBSZX44gXwuX9EDekh12VM1ysq+WxVtkp0WMU0dKCJo4q/QKpguFFlZtEZjF9PftzOgTIyy+1H5pY+Dsb2pnrGtfAgwTR7PuZ/d8ibY0A8ywjVEZtGm+PtcnCJiK53BWxhGYOtxnfN/RRKtuZhvPQj+QQOWeRWqH+GcbeISCgyTYn5WG75fmVL707byjQZ3IuhMfyZWmiTFE2fc4Jn/bxX7OsU+DbTWv2K1a+g7eW+dvQwYzCBO1hfEn4699/CHII8UAcHh1L3bnxOWGKkeVQ0KMfgfwVb0vuGG4QBYKIDs87QL414i69D68DxqCTZAHK4lMA6Xs7zW+m0MMCct4iHRnJI8kat1mlBEpMz6NRo9KacZJXpLJxofIU4ho7R5/QHccdni0IidNkUtrLBSB3toNJoQEcStts2UR67NHTxn47zk1/hi4v4Ahtw9OEQFONaH6XaG1wjpqEdjQ8/Tmg9eB6ZLoQ4sQfhcMF8Uo3hHbBY8jA3jZ+9pa9VbuVbO6Eup8NX3XXZm2nk50OMWX7hBwgBmlZbEew6pWFu7+13EkYAQ=='}}, {'type': 'text', 'text': \"J'aime la programmation.\"}]\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"print(ai_msg.content)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "f1eb1ce1",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### How extended thinking works\n",
|
||||
"\n",
|
||||
"When extended thinking is turned on, Claude creates thinking content blocks where it outputs its internal reasoning. Claude incorporates insights from this reasoning before crafting a final response. The API response will include thinking content blocks, followed by text content blocks."
|
||||
]
|
||||
},
|
||||
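Because the response interleaves the two block types, separating the reasoning from the final answer is a matter of filtering `ai_msg.content` by the `type` key. A small sketch reusing the `ai_msg` from above:

```python
# Each block in ai_msg.content is a dict tagged by "type":
# "reasoning_content" carries the thinking, "text" the final answer.
reasoning = "".join(
    block["reasoning_content"]["text"]
    for block in ai_msg.content
    if block["type"] == "reasoning_content"
)
answer = "".join(
    block["text"] for block in ai_msg.content if block["type"] == "text"
)

print("Thinking:", reasoning)
print("Answer:", answer)
```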
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"id": "951d8206",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"[('system',\n",
|
||||
" 'You are a helpful assistant that translates English to French. Translate the user sentence.'),\n",
|
||||
" ('human', 'I love programming.'),\n",
|
||||
" ('ai',\n",
|
||||
" [{'type': 'reasoning_content',\n",
|
||||
" 'reasoning_content': {'text': 'The user wants me to translate \"I love programming\" from English to French.\\n\\n\"I love\" translates to \"J\\'aime\" or \"J\\'adore\" in French\\n\"Programming\" translates to \"la programmation\" in French\\n\\nSo the translation would be \"J\\'aime la programmation\" or \"J\\'adore la programmation\"\\n\\nBoth are correct, but \"J\\'aime\" is more commonly used for expressing love/liking something.',\n",
|
||||
" 'signature': 'EpgECkgIBRABGAIqQDub6nRpiusjbxZONXVlGXg5ZjUY1Eka1Yp4oBBHmRqGjId+StTBPuwD3CXLyb2rUDRhSc3hTpTM4krVqlFZrIsSDI/WLa1mu38DDqt1HRoMUjm+jF+03MZFD+WQIjBZtHaYiqgY0JQgU0NdXDwwBSZX44gXwuX9EDekh12VM1ysq+WxVtkp0WMU0dKCJo4q/QKpguFFlZtEZjF9PftzOgTIyy+1H5pY+Dsb2pnrGtfAgwTR7PuZ/d8ibY0A8ywjVEZtGm+PtcnCJiK53BWxhGYOtxnfN/RRKtuZhvPQj+QQOWeRWqH+GcbeISCgyTYn5WG75fmVL707byjQZ3IuhMfyZWmiTFE2fc4Jn/bxX7OsU+DbTWv2K1a+g7eW+dvQwYzCBO1hfEn4699/CHII8UAcHh1L3bnxOWGKkeVQ0KMfgfwVb0vuGG4QBYKIDs87QL414i69D68DxqCTZAHK4lMA6Xs7zW+m0MMCct4iHRnJI8kat1mlBEpMz6NRo9KacZJXpLJxofIU4ho7R5/QHccdni0IidNkUtrLBSB3toNJoQEcStts2UR67NHTxn47zk1/hi4v4Ahtw9OEQFONaH6XaG1wjpqEdjQ8/Tmg9eB6ZLoQ4sQfhcMF8Uo3hHbBY8jA3jZ+9pa9VbuVbO6Eup8NX3XXZm2nk50OMWX7hBwgBmlZbEew6pWFu7+13EkYAQ=='}},\n",
|
||||
" {'type': 'text', 'text': \"J'aime la programmation.\"}]),\n",
|
||||
" ('human', 'I love AI')]"
|
||||
]
|
||||
},
|
||||
"execution_count": 5,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"next_messages = messages + [(\"ai\", ai_msg.content), (\"human\", \"I love AI\")]\n",
|
||||
"next_messages"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"id": "9d8c506c",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"AIMessage(content=[{'type': 'reasoning_content', 'reasoning_content': {'text': 'The user wants me to translate \"I love AI\" from English to French. \\n\\n\"I love\" translates to \"J\\'aime\" in French.\\n\"AI\" stands for \"Artificial Intelligence\" which in French is \"Intelligence Artificielle\" or abbreviated as \"IA\".\\n\\nSo the translation would be \"J\\'aime l\\'IA\" (using the abbreviation) or \"J\\'aime l\\'intelligence artificielle\" (using the full term).\\n\\nI think using the abbreviation \"IA\" would be more natural and commonly used, similar to how we use \"AI\" in English.', 'signature': 'EoMFCkgIBRABGAIqQOwp9d0YWm8NctfL9lf1MeWR1OxeAKB3Es19Lei2bdHQ4W0ezTK4wVcm/VLM+7kICX2aB9RAmUD5sJxoKHfdX38SDIR/aSJhHZifGOHqwBoMhzNsyPmB7FFNvNESIjBMVRpRUDTFGn5+nL0x5CjWhKA8H/XFnKYRrUyMYb1n7lCQA7BeEjsaWwxZ3YV9rZsq6APuaXaA40Bt+KnpPOo06r72L/DceliRAw1a6cuT5E0Dv0eIAOYblbXaKYn0jy8UzTUuctOP3As/zT5pK5yC+Rx0d2l9kuP3+COERM98u0R04bWn6qh0HcyE+zNc7c4YWkncjdmOxF/j6OxhcMhZEoX2035v9eUJ9+O/u1xaff08YAEfg7TGWrSIwalpjs1mzWA9ijKg8YyjmXjWnMeFn0z6LDqLaaKc+nC8IN9SLwA/eHpf/ayoEgmogn7gWzijW8MDbnlwpQDS75wK7An3RMEcpWD/OXrKb1EhWKEmOBro5BOTGsfK3ZDveRL0aCBINdOu+AHMQDFXJ04cRDEjs9GE3YC218UcFtS42TFO7/Ct5CYCTknETPx93zcGTOM2VPOZ02Uem1A7Nda/Fa4l2b03EUEtwlgske5K1RbeohN9sclxYsxX5nGJ5sSZurVCk9plkyTG3aiPvbohfVVarVgukKoKwoMDYz5rHVscWlUe+qeqJE/H+KKlhtzO+lWWDN4knqeYsZ55flO5Hq4vT20QCYnF8hcUx07ngGKXuGID9n5kFnLsP8sBUHYKm7bmopFFZvfPcmsqiV9yvG/8Ly9DHbmY5ZwxyrbdJCFT6HD6kq/mEBDftZ6dhmyKMimJBfbTj7d3VAILbRgB'}}, {'type': 'text', 'text': \"J'aime l'IA.\"}], additional_kwargs={}, response_metadata={'ResponseMetadata': {'RequestId': '023799d6-7ed5-4e49-8ad7-7460a49a9a45', 'HTTPStatusCode': 200, 'HTTPHeaders': {'date': 'Tue, 22 Jul 2025 04:40:34 GMT', 'content-type': 'application/json', 'content-length': '1737', 'connection': 'keep-alive', 'x-amzn-requestid': '023799d6-7ed5-4e49-8ad7-7460a49a9a45'}, 'RetryAttempts': 0}, 'stopReason': 'end_turn', 'metrics': {'latencyMs': [3473]}, 'model_name': 'us.anthropic.claude-sonnet-4-20250514-v1:0'}, id='run--ca8abc92-60a9-4bd1-93b4-7788496eda7a-0', usage_metadata={'input_tokens': 75, 'output_tokens': 153, 'total_tokens': 228, 'input_token_details': {'cache_creation': 0, 'cache_read': 0}})"
|
||||
]
|
||||
},
|
||||
"execution_count": 6,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"ai_msg = llm.invoke(next_messages)\n",
|
||||
"ai_msg"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"id": "e53e3ebb",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"[{'type': 'reasoning_content', 'reasoning_content': {'text': 'The user wants me to translate \"I love AI\" from English to French. \\n\\n\"I love\" translates to \"J\\'aime\" in French.\\n\"AI\" stands for \"Artificial Intelligence\" which in French is \"Intelligence Artificielle\" or abbreviated as \"IA\".\\n\\nSo the translation would be \"J\\'aime l\\'IA\" (using the abbreviation) or \"J\\'aime l\\'intelligence artificielle\" (using the full term).\\n\\nI think using the abbreviation \"IA\" would be more natural and commonly used, similar to how we use \"AI\" in English.', 'signature': 'EoMFCkgIBRABGAIqQOwp9d0YWm8NctfL9lf1MeWR1OxeAKB3Es19Lei2bdHQ4W0ezTK4wVcm/VLM+7kICX2aB9RAmUD5sJxoKHfdX38SDIR/aSJhHZifGOHqwBoMhzNsyPmB7FFNvNESIjBMVRpRUDTFGn5+nL0x5CjWhKA8H/XFnKYRrUyMYb1n7lCQA7BeEjsaWwxZ3YV9rZsq6APuaXaA40Bt+KnpPOo06r72L/DceliRAw1a6cuT5E0Dv0eIAOYblbXaKYn0jy8UzTUuctOP3As/zT5pK5yC+Rx0d2l9kuP3+COERM98u0R04bWn6qh0HcyE+zNc7c4YWkncjdmOxF/j6OxhcMhZEoX2035v9eUJ9+O/u1xaff08YAEfg7TGWrSIwalpjs1mzWA9ijKg8YyjmXjWnMeFn0z6LDqLaaKc+nC8IN9SLwA/eHpf/ayoEgmogn7gWzijW8MDbnlwpQDS75wK7An3RMEcpWD/OXrKb1EhWKEmOBro5BOTGsfK3ZDveRL0aCBINdOu+AHMQDFXJ04cRDEjs9GE3YC218UcFtS42TFO7/Ct5CYCTknETPx93zcGTOM2VPOZ02Uem1A7Nda/Fa4l2b03EUEtwlgske5K1RbeohN9sclxYsxX5nGJ5sSZurVCk9plkyTG3aiPvbohfVVarVgukKoKwoMDYz5rHVscWlUe+qeqJE/H+KKlhtzO+lWWDN4knqeYsZ55flO5Hq4vT20QCYnF8hcUx07ngGKXuGID9n5kFnLsP8sBUHYKm7bmopFFZvfPcmsqiV9yvG/8Ly9DHbmY5ZwxyrbdJCFT6HD6kq/mEBDftZ6dhmyKMimJBfbTj7d3VAILbRgB'}}, {'type': 'text', 'text': \"J'aime l'IA.\"}]\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"print(ai_msg.content)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "a77519e5-897d-41a0-a9bb-55300fa79efc",
|
||||
@@ -379,7 +530,7 @@
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"display_name": ".venv",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
@@ -393,7 +544,7 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.10.4"
|
||||
"version": "3.12.9"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
|
||||
@@ -240,6 +240,65 @@
|
||||
"response.content"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "382335a6",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Accessing the search results metadata\n",
|
||||
"\n",
|
||||
"Perplexity often provides a list of the web pages it consulted (“search_results”).\n",
|
||||
"You don't need to pass any special parameter — the list is placed in\n",
|
||||
"`response.additional_kwargs[\"search_results\"]`.\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "2b09214a",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"The tallest mountain in South America is Aconcagua. It has a summit elevation of approximately 6,961 meters (22,838 feet), making it not only the highest peak in South America but also the highest mountain in the Americas, the Western Hemisphere, and the Southern Hemisphere[1][2][4].\n",
|
||||
"\n",
|
||||
"Aconcagua is located in the Principal Cordillera of the Andes mountain range, in Mendoza Province, Argentina, near the border with Chile[1][2][4]. It is of volcanic origin but is not an active volcano[4]. The mountain is part of Aconcagua Provincial Park and features several glaciers, including the large Ventisquero Horcones Inferior glacier[1].\n",
|
||||
"\n",
|
||||
"In summary, Aconcagua stands as the tallest mountain in South America at about 6,961 meters (22,838 feet) in height.\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"[{'title': 'Aconcagua - Wikipedia',\n",
|
||||
" 'url': 'https://en.wikipedia.org/wiki/Aconcagua',\n",
|
||||
" 'date': None},\n",
|
||||
" {'title': 'The 10 Highest Mountains in South America - Much Better Adventures',\n",
|
||||
" 'url': 'https://www.muchbetteradventures.com/magazine/highest-mountains-south-america/',\n",
|
||||
" 'date': '2023-07-05'}]"
|
||||
]
|
||||
},
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"chat = ChatPerplexity(temperature=0, model=\"sonar\")\n",
|
||||
"\n",
|
||||
"response = chat.invoke(\n",
|
||||
" \"What is the tallest mountain in South America?\",\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"# Main answer\n",
|
||||
"print(response.content)\n",
|
||||
"\n",
|
||||
"# First two supporting search results\n",
|
||||
"response.additional_kwargs[\"search_results\"][:2]"
|
||||
]
|
||||
},
|
||||
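Each entry is a plain dict, so listing every cited source is a short loop. A sketch reusing the `response` from the cell above (note that `date` may be `None`):

```python
# Print every source Perplexity consulted for the answer.
for result in response.additional_kwargs["search_results"]:
    date = result.get("date") or "no date"
    print(f"{result['title']} ({date}): {result['url']}")
```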
{
|
||||
"cell_type": "markdown",
|
||||
"id": "13d93dc4",
|
||||
|
||||
@@ -13,7 +13,7 @@
|
||||
"\n",
|
||||
"`Textract` supports `JPEG`, `PNG`, `PDF`, and `TIFF` file formats; more information is available in [the documentation](https://docs.aws.amazon.com/textract/latest/dg/limits-document.html).\n",
|
||||
"\n",
|
||||
"The following samples demonstrate the use of `Amazon Textract` in combination with LangChain as a DocumentLoader."
|
||||
"The following examples demonstrate the use of `Amazon Textract` in combination with LangChain as a DocumentLoader."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -41,7 +41,7 @@
|
||||
"id": "400b25c6-befa-4730-a201-39ff112c8858",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Sample 1\n",
|
||||
"## Example 1: Loading from a local file\n",
|
||||
"\n",
|
||||
"The first example uses a local file, which internally will be sent to Amazon Textract sync API [DetectDocumentText](https://docs.aws.amazon.com/textract/latest/dg/API_DetectDocumentText.html). \n",
|
||||
"\n",
|
||||
@@ -100,8 +100,8 @@
|
||||
"id": "4cf7f19c-3635-453a-9c76-4baf98b8d7f4",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Sample 2\n",
|
||||
"The next sample loads a file from an HTTPS endpoint. \n",
|
||||
"## Example 2: Loading from a URL\n",
|
||||
"The next example loads a file from an HTTPS endpoint. \n",
|
||||
"It has to be single page, as Amazon Textract requires all multi-page documents to be stored on S3."
|
||||
]
|
||||
},
|
||||
@@ -150,7 +150,7 @@
|
||||
"id": "3a9cd8ec-e663-4dc7-9db1-d2f575253141",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Sample 3\n",
|
||||
"## Example 3: Loading multi-page PDF documents\n",
|
||||
"\n",
|
||||
"Processing a multi-page document requires the document to be on S3. The sample document resides in a bucket in us-east-2 and Textract needs to be called in that same region to be successful, so we set the region_name on the client and pass that in to the loader to ensure Textract is called from us-east-2. You could also to have your notebook running in us-east-2, setting the AWS_DEFAULT_REGION set to us-east-2 or when running in a different environment, pass in a boto3 Textract client with that region name like in the cell below."
|
||||
]
|
||||
@@ -214,9 +214,15 @@
|
||||
}
|
||||
},
|
||||
"source": [
|
||||
"## Sample 4\n",
|
||||
"## Example 4: Customizing the output format\n",
|
||||
"\n",
|
||||
"You have the option to pass an additional parameter called `linearization_config` to the AmazonTextractPDFLoader which will determine how the text output will be linearized by the parser after Textract runs."
|
||||
"When Amazon Textract processes a PDF, it extracts all text, including elements like headers, footers, and page numbers. This extra information can be \"noisy\" and reduce the effectiveness of the output.\n",
|
||||
"\n",
|
||||
"The process of converting a document's 2D layout into a clean, one-dimensional string of text is called linearization.\n",
|
||||
"\n",
|
||||
"The AmazonTextractPDFLoader gives you precise control over this process with the `linearization_config` parameter. You can use it to specify which elements to exclude from the final output.\n",
|
||||
"\n",
|
||||
"The following example shows how to hide headers, footers, and figures, resulting in a much cleaner text block, for more advanced use cases see this [AWS blog post](https://aws.amazon.com/blogs/machine-learning/amazon-textracts-new-layout-feature-introduces-efficiencies-in-general-purpose-and-generative-ai-document-processing-tasks/)."
|
||||
]
|
||||
},
|
||||
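A sketch of what that configuration looks like. The S3 URI is a placeholder, and `TextLinearizationConfig` comes from the `amazon-textract-textractor` package that the loader builds on:

```python
from langchain_community.document_loaders import AmazonTextractPDFLoader
from textractor.data.text_linearization_config import TextLinearizationConfig

# Hide noisy layout elements so only the body text survives linearization.
loader = AmazonTextractPDFLoader(
    "s3://your-bucket/your-document.pdf",  # placeholder URI
    linearization_config=TextLinearizationConfig(
        hide_header_layout=True,
        hide_footer_layout=True,
        hide_figure_layout=True,
    ),
)
documents = loader.load()
```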
{
|
||||
@@ -248,7 +254,7 @@
|
||||
"## Using the AmazonTextractPDFLoader in a LangChain chain (e.g. OpenAI)\n",
|
||||
"\n",
|
||||
"The AmazonTextractPDFLoader can be used in a chain the same way the other loaders are used.\n",
|
||||
"Textract itself does have a [Query feature](https://docs.aws.amazon.com/textract/latest/dg/API_Query.html), which offers similar functionality to the QA chain in this sample, which is worth checking out as well."
|
||||
"Textract itself does have a [Query feature](https://docs.aws.amazon.com/textract/latest/dg/API_Query.html), which offers similar functionality to the QA chain in this example, which is worth checking out as well."
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
@@ -9,7 +9,7 @@
|
||||
"source": [
|
||||
"# Google SQL for MySQL\n",
|
||||
"\n",
|
||||
"> [Cloud Cloud SQL](https://cloud.google.com/sql) is a fully managed relational database service that offers high performance, seamless integration, and impressive scalability. It offers `MySQL`, `PostgreSQL`, and `SQL Server` database engines. Extend your database application to build AI-powered experiences leveraging Cloud SQL's Langchain integrations.\n",
|
||||
"> [Google Cloud SQL](https://cloud.google.com/sql) is a fully managed relational database service that offers high performance, seamless integration, and impressive scalability. It offers `MySQL`, `PostgreSQL`, and `SQL Server` database engines. Extend your database application to build AI-powered experiences leveraging Cloud SQL's Langchain integrations.\n",
|
||||
"\n",
|
||||
"This notebook goes over how to use `Google Cloud SQL for MySQL` to store chat message history with the `MySQLChatMessageHistory` class.\n",
|
||||
"\n",
|
||||
|
||||
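For orientation, the flow the notebook walks through looks roughly like the sketch below, assuming the `langchain-google-cloud-sql-mysql` package and placeholder project values:

```python
from langchain_google_cloud_sql_mysql import MySQLChatMessageHistory, MySQLEngine

# Placeholder project/instance values; substitute your own.
engine = MySQLEngine.from_instance(
    project_id="my-project",
    region="us-central1",
    instance="my-instance",
    database="my-database",
)
engine.init_chat_history_table(table_name="message_store")

history = MySQLChatMessageHistory(
    engine, session_id="user-123", table_name="message_store"
)
history.add_user_message("Hi!")
history.add_ai_message("Hello! How can I help you?")
print(history.messages)
```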
@@ -42,11 +42,6 @@ from langchain_ai21 import AI21LLM
|
||||
from langchain_ai21 import AI21ContextualAnswers
|
||||
```
|
||||
|
||||
### AI21 Embeddings
|
||||
|
||||
```python
|
||||
from langchain_ai21 import AI21Embeddings
|
||||
```
|
||||
## Text splitters
|
||||
|
||||
### AI21 Semantic Text Splitter
|
||||
|
||||
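Following the import pattern above (assumed from the `langchain_ai21` package layout):

```python
from langchain_ai21 import AI21SemanticTextSplitter
```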
96
docs/docs/integrations/providers/tensorlake.mdx
Normal file
@@ -0,0 +1,96 @@
|
||||
# Tensorlake
|
||||
|
||||
Tensorlake is the AI Data Cloud that reliably transforms data from unstructured sources into ingestion-ready formats for AI Applications.
|
||||
|
||||
The `langchain-tensorlake` package provides seamless integration between [Tensorlake](https://tensorlake.ai) and [LangChain](https://langchain.com),
|
||||
enabling you to build sophisticated document processing agents with enhanced parsing features, like signature detection.
|
||||
|
||||
## Tensorlake feature overview
|
||||
|
||||
Tensorlake gives you tools to:
|
||||
- Extract: Schema-driven structured data extraction to pull out specific fields from documents.
|
||||
- Parse: Convert documents to markdown to build RAG/Knowledge Graph systems.
|
||||
- Orchestrate: Build programmable workflows for large-scale ingestion and enrichment of Documents, Text, Audio, Video and more.
|
||||
|
||||
Learn more at [docs.tensorlake.ai](https://docs.tensorlake.ai/introduction)
|
||||
|
||||
---
|
||||
|
||||
## Installation
|
||||
|
||||
```bash
|
||||
pip install -U langchain-tensorlake
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Examples
|
||||
|
||||
Follow a [full tutorial](https://docs.tensorlake.ai/examples/tutorials/real-estate-agent-with-langgraph-cli) on how to detect signatures in unstructured documents using the `langchain-tensorlake` tool.
|
||||
|
||||
Or check out this [colab notebook](https://colab.research.google.com/drive/1VRWIPCWYnjcRtQL864Bqm9CJ6g4EpRqs?usp=sharing) for a quick start.
|
||||
|
||||
---
|
||||
## Quick Start
|
||||
|
||||
### 1. Set up your environment
|
||||
|
||||
You should configure credentials for Tensorlake and OpenAI by setting the following environment variables:
|
||||
```bash
|
||||
export TENSORLAKE_API_KEY="your-tensorlake-api-key"
|
||||
export OPENAI_API_KEY="your-openai-api-key"
|
||||
```
|
||||
|
||||
Get your Tensorlake API key from the [Tensorlake Cloud Console](https://cloud.tensorlake.ai/). New users get 100 free credits.
|
||||
|
||||
### 2. Import necessary packages
|
||||
|
||||
```python
|
||||
from langchain_tensorlake import document_markdown_tool
|
||||
from langgraph.prebuilt import create_react_agent
|
||||
import asyncio
|
||||
import os
|
||||
```
|
||||
|
||||
### 3. Build a Signature Detection Agent
|
||||
|
||||
```python
|
||||
async def main(question):
|
||||
# Create the agent with the Tensorlake tool
|
||||
agent = create_react_agent(
|
||||
model="openai:gpt-4o-mini",
|
||||
tools=[document_markdown_tool],
|
||||
prompt=(
|
||||
"""
|
||||
I have a document that needs to be parsed. \n\nPlease parse this document and answer the question about it.
|
||||
"""
|
||||
),
|
||||
name="real-estate-agent",
|
||||
)
|
||||
|
||||
# Run the agent
|
||||
result = await agent.ainvoke({"messages": [{"role": "user", "content": question}]})
|
||||
|
||||
# Print the result
|
||||
print(result["messages"][-1].content)
|
||||
```
|
||||
|
||||
*Note:* We highly recommend using `openai` as the agent model to ensure the agent sets the right parsing parameters.
|
||||
|
||||
### 4. Example Usage
|
||||
|
||||
```python
|
||||
# Define the path to the document to be parsed
|
||||
path = "path/to/your/document.pdf"
|
||||
|
||||
# Define the question to be asked and create the agent
|
||||
question = f"What contextual information can you extract about the signatures in my document found at {path}?"
|
||||
|
||||
if __name__ == "__main__":
|
||||
asyncio.run(main(question))
|
||||
```
|
||||
|
||||
## Need help?
|
||||
|
||||
Reach out to us on [Slack](https://join.slack.com/t/tensorlakecloud/shared_invite/zt-32fq4nmib-gO0OM5RIar3zLOBm~ZGqKg) or on the
|
||||
[package repository on GitHub](https://github.com/tensorlakeai/langchain-tensorlake) directly.
|
||||
@@ -1,270 +0,0 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "raw",
|
||||
"id": "afaf8039",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"---\n",
|
||||
"sidebar_label: AI21\n",
|
||||
"---"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "9a3d6f34",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# AI21Embeddings\n",
|
||||
"\n",
|
||||
":::caution This service is deprecated. :::\n",
|
||||
"\n",
|
||||
"This will help you get started with AI21 embedding models using LangChain. For detailed documentation on `AI21Embeddings` features and configuration options, please refer to the [API reference](https://python.langchain.com/api_reference/ai21/embeddings/langchain_ai21.embeddings.AI21Embeddings.html).\n",
|
||||
"\n",
|
||||
"## Overview\n",
|
||||
"### Integration details\n",
|
||||
"\n",
|
||||
"import { ItemTable } from \"@theme/FeatureTables\";\n",
|
||||
"\n",
|
||||
"<ItemTable category=\"text_embedding\" item=\"AI21\" />\n",
|
||||
"\n",
|
||||
"## Setup\n",
|
||||
"\n",
|
||||
"To access AI21 embedding models you'll need to create an AI21 account, get an API key, and install the `langchain-ai21` integration package.\n",
|
||||
"\n",
|
||||
"### Credentials\n",
|
||||
"\n",
|
||||
"Head to [https://docs.ai21.com/](https://docs.ai21.com/) to sign up to AI21 and generate an API key. Once you've done this set the `AI21_API_KEY` environment variable:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"id": "36521c2a",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import getpass\n",
|
||||
"import os\n",
|
||||
"\n",
|
||||
"if not os.getenv(\"AI21_API_KEY\"):\n",
|
||||
" os.environ[\"AI21_API_KEY\"] = getpass.getpass(\"Enter your AI21 API key: \")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "c84fb993",
|
||||
"metadata": {},
|
||||
"source": "To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"id": "39a4953b",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
|
||||
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "d9664366",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Installation\n",
|
||||
"\n",
|
||||
"The LangChain AI21 integration lives in the `langchain-ai21` package:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "64853226",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%pip install -qU langchain-ai21"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "45dd1724",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Instantiation\n",
|
||||
"\n",
|
||||
"Now we can instantiate our model object and generate chat completions:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"id": "9ea7a09b",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_ai21 import AI21Embeddings\n",
|
||||
"\n",
|
||||
"embeddings = AI21Embeddings(\n",
|
||||
" # Can optionally increase or decrease the batch_size\n",
|
||||
" # to improve latency.\n",
|
||||
" # Use larger batch sizes with smaller documents, and\n",
|
||||
" # smaller batch sizes with larger documents.\n",
|
||||
" # batch_size=256,\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "77d271b6",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Indexing and Retrieval\n",
|
||||
"\n",
|
||||
"Embedding models are often used in retrieval-augmented generation (RAG) flows, both as part of indexing data as well as later retrieving it. For more detailed instructions, please see our [RAG tutorials](/docs/tutorials/).\n",
|
||||
"\n",
|
||||
"Below, see how to index and retrieve data using the `embeddings` object we initialized above. In this example, we will index and retrieve a sample document in the `InMemoryVectorStore`."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"id": "d817716b",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'LangChain is the framework for building context-aware reasoning applications'"
|
||||
]
|
||||
},
|
||||
"execution_count": 8,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# Create a vector store with a sample text\n",
|
||||
"from langchain_core.vectorstores import InMemoryVectorStore\n",
|
||||
"\n",
|
||||
"text = \"LangChain is the framework for building context-aware reasoning applications\"\n",
|
||||
"\n",
|
||||
"vectorstore = InMemoryVectorStore.from_texts(\n",
|
||||
" [text],\n",
|
||||
" embedding=embeddings,\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"# Use the vectorstore as a retriever\n",
|
||||
"retriever = vectorstore.as_retriever()\n",
|
||||
"\n",
|
||||
"# Retrieve the most similar text\n",
|
||||
"retrieved_documents = retriever.invoke(\"What is LangChain?\")\n",
|
||||
"\n",
|
||||
"# show the retrieved document's content\n",
|
||||
"retrieved_documents[0].page_content"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "e02b9855",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Direct Usage\n",
|
||||
"\n",
|
||||
"Under the hood, the vectorstore and retriever implementations are calling `embeddings.embed_documents(...)` and `embeddings.embed_query(...)` to create embeddings for the text(s) used in `from_texts` and retrieval `invoke` operations, respectively.\n",
|
||||
"\n",
|
||||
"You can directly call these methods to get embeddings for your own use cases.\n",
|
||||
"\n",
|
||||
"### Embed single texts\n",
|
||||
"\n",
|
||||
"You can embed single texts or documents with `embed_query`:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 9,
|
||||
"id": "0d2befcd",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"[0.01913362182676792, 0.004960147198289633, -0.01582135073840618, -0.042474791407585144, 0.040200788\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"single_vector = embeddings.embed_query(text)\n",
|
||||
"print(str(single_vector)[:100]) # Show the first 100 characters of the vector"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "1b5a7d03",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Embed multiple texts\n",
|
||||
"\n",
|
||||
"You can embed multiple texts with `embed_documents`:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 10,
|
||||
"id": "2f4d6e97",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"[0.03029559925198555, 0.002908500377088785, -0.02700909972190857, -0.04616579785943031, 0.0382771529\n",
|
||||
"[0.018214847892522812, 0.011460083536803722, -0.03329407051205635, -0.04951060563325882, 0.032756105\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"text2 = (\n",
|
||||
" \"LangGraph is a library for building stateful, multi-actor applications with LLMs\"\n",
|
||||
")\n",
|
||||
"two_vectors = embeddings.embed_documents([text, text2])\n",
|
||||
"for vector in two_vectors:\n",
|
||||
" print(str(vector)[:100]) # Show the first 100 characters of the vector"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "98785c12",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## API Reference\n",
|
||||
"\n",
|
||||
"For detailed documentation on `AI21Embeddings` features and configuration options, please refer to the [API reference](https://python.langchain.com/api_reference/ai21/embeddings/langchain_ai21.embeddings.AI21Embeddings.html).\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.9.6"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
||||
@@ -131,7 +131,7 @@
|
||||
"source": [
|
||||
"## Indexing and Retrieval\n",
|
||||
"\n",
|
||||
"Embedding models are often used in retrieval-augmented generation (RAG) flows, both as part of indexing data as well as later retrieving it. For more detailed instructions, please see our [RAG tutorials](/docs/tutorials/).\n",
|
||||
"Embedding models are often used in retrieval-augmented generation (RAG) flows, both as part of indexing data as well as later retrieving it. For more detailed instructions, please see our [RAG tutorials](/docs/tutorials/rag).\n",
|
||||
"\n",
|
||||
"Below, see how to index and retrieve data using the `embeddings` object we initialized above. In this example, we will index and retrieve a sample document in the `InMemoryVectorStore`."
|
||||
]
|
||||
|
||||
@@ -1,265 +1,267 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "raw",
|
||||
"id": "afaf8039",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"---\n",
|
||||
"sidebar_label: Cohere\n",
|
||||
"---"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "9a3d6f34",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# CohereEmbeddings\n",
|
||||
"\n",
|
||||
"This will help you get started with Cohere embedding models using LangChain. For detailed documentation on `CohereEmbeddings` features and configuration options, please refer to the [API reference](https://python.langchain.com/api_reference/cohere/embeddings/langchain_cohere.embeddings.CohereEmbeddings.html).\n",
|
||||
"\n",
|
||||
"## Overview\n",
|
||||
"### Integration details\n",
|
||||
"\n",
|
||||
"import { ItemTable } from \"@theme/FeatureTables\";\n",
|
||||
"\n",
|
||||
"<ItemTable category=\"text_embedding\" item=\"Cohere\" />\n",
|
||||
"\n",
|
||||
"## Setup\n",
|
||||
"\n",
|
||||
"To access Cohere embedding models you'll need to create a/an Cohere account, get an API key, and install the `langchain-cohere` integration package.\n",
|
||||
"\n",
|
||||
"### Credentials\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"Head to [cohere.com](https://cohere.com) to sign up to Cohere and generate an API key. Once you’ve done this set the COHERE_API_KEY environment variable:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"id": "36521c2a",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import getpass\n",
|
||||
"import os\n",
|
||||
"\n",
|
||||
"if not os.getenv(\"COHERE_API_KEY\"):\n",
|
||||
" os.environ[\"COHERE_API_KEY\"] = getpass.getpass(\"Enter your Cohere API key: \")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "c84fb993",
|
||||
"metadata": {},
|
||||
"source": "To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 9,
|
||||
"id": "39a4953b",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
|
||||
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "d9664366",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Installation\n",
|
||||
"\n",
|
||||
"The LangChain Cohere integration lives in the `langchain-cohere` package:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "64853226",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%pip install -qU langchain-cohere"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "45dd1724",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Instantiation\n",
|
||||
"\n",
|
||||
"Now we can instantiate our model object and generate chat completions:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 10,
|
||||
"id": "9ea7a09b",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_cohere import CohereEmbeddings\n",
|
||||
"\n",
|
||||
"embeddings = CohereEmbeddings(\n",
|
||||
" model=\"embed-english-v3.0\",\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "77d271b6",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Indexing and Retrieval\n",
|
||||
"\n",
|
||||
"Embedding models are often used in retrieval-augmented generation (RAG) flows, both as part of indexing data as well as later retrieving it. For more detailed instructions, please see our [RAG tutorials](/docs/tutorials/).\n",
|
||||
"\n",
|
||||
"Below, see how to index and retrieve data using the `embeddings` object we initialized above. In this example, we will index and retrieve a sample document in the `InMemoryVectorStore`."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 11,
|
||||
"id": "d817716b",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'LangChain is the framework for building context-aware reasoning applications'"
|
||||
]
|
||||
},
|
||||
"execution_count": 11,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# Create a vector store with a sample text\n",
|
||||
"from langchain_core.vectorstores import InMemoryVectorStore\n",
|
||||
"\n",
|
||||
"text = \"LangChain is the framework for building context-aware reasoning applications\"\n",
|
||||
"\n",
|
||||
"vectorstore = InMemoryVectorStore.from_texts(\n",
|
||||
" [text],\n",
|
||||
" embedding=embeddings,\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"# Use the vectorstore as a retriever\n",
|
||||
"retriever = vectorstore.as_retriever()\n",
|
||||
"\n",
|
||||
"# Retrieve the most similar text\n",
|
||||
"retrieved_documents = retriever.invoke(\"What is LangChain?\")\n",
|
||||
"\n",
|
||||
"# show the retrieved document's content\n",
|
||||
"retrieved_documents[0].page_content"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "e02b9855",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Direct Usage\n",
|
||||
"\n",
|
||||
"Under the hood, the vectorstore and retriever implementations are calling `embeddings.embed_documents(...)` and `embeddings.embed_query(...)` to create embeddings for the text(s) used in `from_texts` and retrieval `invoke` operations, respectively.\n",
|
||||
"\n",
|
||||
"You can directly call these methods to get embeddings for your own use cases.\n",
|
||||
"\n",
|
||||
"### Embed single texts\n",
|
||||
"\n",
|
||||
"You can embed single texts or documents with `embed_query`:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 12,
|
||||
"id": "0d2befcd",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"[-0.022979736, -0.030212402, -0.08886719, -0.08569336, 0.007030487, -0.0010671616, -0.033813477, 0.0\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"single_vector = embeddings.embed_query(text)\n",
|
||||
"print(str(single_vector)[:100]) # Show the first 100 characters of the vector"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "1b5a7d03",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Embed multiple texts\n",
|
||||
"\n",
|
||||
"You can embed multiple texts with `embed_documents`:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 13,
|
||||
"id": "2f4d6e97",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"[-0.028869629, -0.030410767, -0.099121094, -0.07116699, -0.012748718, -0.0059432983, -0.04360962, 0.\n",
|
||||
"[-0.047332764, -0.049957275, -0.07458496, -0.034332275, -0.057922363, -0.0112838745, -0.06994629, 0.\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"text2 = (\n",
|
||||
" \"LangGraph is a library for building stateful, multi-actor applications with LLMs\"\n",
|
||||
")\n",
|
||||
"two_vectors = embeddings.embed_documents([text, text2])\n",
|
||||
"for vector in two_vectors:\n",
|
||||
" print(str(vector)[:100]) # Show the first 100 characters of the vector"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "98785c12",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## API Reference\n",
|
||||
"\n",
|
||||
"For detailed documentation on `CohereEmbeddings` features and configuration options, please refer to the [API reference](https://python.langchain.com/api_reference/cohere/embeddings/langchain_cohere.embeddings.CohereEmbeddings.html).\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.11.4"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "raw",
|
||||
"id": "afaf8039",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"---\n",
|
||||
"sidebar_label: Cohere\n",
|
||||
"---"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "9a3d6f34",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# CohereEmbeddings\n",
|
||||
"\n",
|
||||
"This will help you get started with Cohere embedding models using LangChain. For detailed documentation on `CohereEmbeddings` features and configuration options, please refer to the [API reference](https://python.langchain.com/api_reference/cohere/embeddings/langchain_cohere.embeddings.CohereEmbeddings.html).\n",
|
||||
"\n",
|
||||
"## Overview\n",
|
||||
"### Integration details\n",
|
||||
"\n",
|
||||
"import { ItemTable } from \"@theme/FeatureTables\";\n",
|
||||
"\n",
|
||||
"<ItemTable category=\"text_embedding\" item=\"Cohere\" />\n",
|
||||
"\n",
|
||||
"## Setup\n",
|
||||
"\n",
|
||||
"To access Cohere embedding models you'll need to create a/an Cohere account, get an API key, and install the `langchain-cohere` integration package.\n",
|
||||
"\n",
|
||||
"### Credentials\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"Head to [cohere.com](https://cohere.com) to sign up to Cohere and generate an API key. Once you’ve done this set the COHERE_API_KEY environment variable:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"id": "36521c2a",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import getpass\n",
|
||||
"import os\n",
|
||||
"\n",
|
||||
"if not os.getenv(\"COHERE_API_KEY\"):\n",
|
||||
" os.environ[\"COHERE_API_KEY\"] = getpass.getpass(\"Enter your Cohere API key: \")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "c84fb993",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 9,
|
||||
"id": "39a4953b",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
|
||||
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "d9664366",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Installation\n",
|
||||
"\n",
|
||||
"The LangChain Cohere integration lives in the `langchain-cohere` package:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "64853226",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%pip install -qU langchain-cohere"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "45dd1724",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Instantiation\n",
|
||||
"\n",
|
||||
"Now we can instantiate our model object and generate chat completions:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 10,
|
||||
"id": "9ea7a09b",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_cohere import CohereEmbeddings\n",
|
||||
"\n",
|
||||
"embeddings = CohereEmbeddings(\n",
|
||||
" model=\"embed-english-v3.0\",\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "77d271b6",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Indexing and Retrieval\n",
|
||||
"\n",
|
||||
"Embedding models are often used in retrieval-augmented generation (RAG) flows, both as part of indexing data as well as later retrieving it. For more detailed instructions, please see our [RAG tutorials](/docs/tutorials/rag).\n",
|
||||
"\n",
|
||||
"Below, see how to index and retrieve data using the `embeddings` object we initialized above. In this example, we will index and retrieve a sample document in the `InMemoryVectorStore`."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 11,
|
||||
"id": "d817716b",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'LangChain is the framework for building context-aware reasoning applications'"
|
||||
]
|
||||
},
|
||||
"execution_count": 11,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# Create a vector store with a sample text\n",
|
||||
"from langchain_core.vectorstores import InMemoryVectorStore\n",
|
||||
"\n",
|
||||
"text = \"LangChain is the framework for building context-aware reasoning applications\"\n",
|
||||
"\n",
|
||||
"vectorstore = InMemoryVectorStore.from_texts(\n",
|
||||
" [text],\n",
|
||||
" embedding=embeddings,\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"# Use the vectorstore as a retriever\n",
|
||||
"retriever = vectorstore.as_retriever()\n",
|
||||
"\n",
|
||||
"# Retrieve the most similar text\n",
|
||||
"retrieved_documents = retriever.invoke(\"What is LangChain?\")\n",
|
||||
"\n",
|
||||
"# show the retrieved document's content\n",
|
||||
"retrieved_documents[0].page_content"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "e02b9855",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Direct Usage\n",
|
||||
"\n",
|
||||
"Under the hood, the vectorstore and retriever implementations are calling `embeddings.embed_documents(...)` and `embeddings.embed_query(...)` to create embeddings for the text(s) used in `from_texts` and retrieval `invoke` operations, respectively.\n",
|
||||
"\n",
|
||||
"You can directly call these methods to get embeddings for your own use cases.\n",
|
||||
"\n",
|
||||
"### Embed single texts\n",
|
||||
"\n",
|
||||
"You can embed single texts or documents with `embed_query`:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 12,
|
||||
"id": "0d2befcd",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"[-0.022979736, -0.030212402, -0.08886719, -0.08569336, 0.007030487, -0.0010671616, -0.033813477, 0.0\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"single_vector = embeddings.embed_query(text)\n",
|
||||
"print(str(single_vector)[:100]) # Show the first 100 characters of the vector"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "1b5a7d03",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Embed multiple texts\n",
|
||||
"\n",
|
||||
"You can embed multiple texts with `embed_documents`:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 13,
|
||||
"id": "2f4d6e97",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"[-0.028869629, -0.030410767, -0.099121094, -0.07116699, -0.012748718, -0.0059432983, -0.04360962, 0.\n",
|
||||
"[-0.047332764, -0.049957275, -0.07458496, -0.034332275, -0.057922363, -0.0112838745, -0.06994629, 0.\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"text2 = (\n",
|
||||
" \"LangGraph is a library for building stateful, multi-actor applications with LLMs\"\n",
|
||||
")\n",
|
||||
"two_vectors = embeddings.embed_documents([text, text2])\n",
|
||||
"for vector in two_vectors:\n",
|
||||
" print(str(vector)[:100]) # Show the first 100 characters of the vector"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "98785c12",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## API Reference\n",
|
||||
"\n",
|
||||
"For detailed documentation on `CohereEmbeddings` features and configuration options, please refer to the [API reference](https://python.langchain.com/api_reference/cohere/embeddings/langchain_cohere.embeddings.CohereEmbeddings.html).\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.9.6"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
||||
|
||||
@@ -125,7 +125,7 @@
|
||||
"source": [
|
||||
"## Indexing and Retrieval\n",
|
||||
"\n",
|
||||
"Embedding models are often used in retrieval-augmented generation (RAG) flows, both as part of indexing data as well as later retrieving it. For more detailed instructions, please see our [RAG tutorials](/docs/tutorials/).\n",
|
||||
"Embedding models are often used in retrieval-augmented generation (RAG) flows, both as part of indexing data as well as later retrieving it. For more detailed instructions, please see our [RAG tutorials](/docs/tutorials/rag).\n",
|
||||
"\n",
|
||||
"Below, see how to index and retrieve data using the `embeddings` object we initialized above. In this example, we will index and retrieve a sample document in the `InMemoryVectorStore`."
|
||||
]
|
||||
@@ -264,7 +264,7 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.10.5"
|
||||
"version": "3.9.6"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
|
||||
@@ -1,265 +1,267 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "raw",
|
||||
"id": "afaf8039",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"---\n",
|
||||
"sidebar_label: Fireworks\n",
|
||||
"---"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "9a3d6f34",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# FireworksEmbeddings\n",
|
||||
"\n",
|
||||
"This will help you get started with Fireworks embedding models using LangChain. For detailed documentation on `FireworksEmbeddings` features and configuration options, please refer to the [API reference](https://python.langchain.com/api_reference/fireworks/embeddings/langchain_fireworks.embeddings.FireworksEmbeddings.html).\n",
|
||||
"\n",
|
||||
"## Overview\n",
|
||||
"\n",
|
||||
"### Integration details\n",
|
||||
"\n",
|
||||
"import { ItemTable } from \"@theme/FeatureTables\";\n",
|
||||
"\n",
|
||||
"<ItemTable category=\"text_embedding\" item=\"Fireworks\" />\n",
|
||||
"\n",
|
||||
"## Setup\n",
|
||||
"\n",
|
||||
"To access Fireworks embedding models you'll need to create a Fireworks account, get an API key, and install the `langchain-fireworks` integration package.\n",
|
||||
"\n",
|
||||
"### Credentials\n",
|
||||
"\n",
|
||||
"Head to [fireworks.ai](https://fireworks.ai/) to sign up to Fireworks and generate an API key. Once you’ve done this set the FIREWORKS_API_KEY environment variable:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"id": "36521c2a",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import getpass\n",
|
||||
"import os\n",
|
||||
"\n",
|
||||
"if not os.getenv(\"FIREWORKS_API_KEY\"):\n",
|
||||
" os.environ[\"FIREWORKS_API_KEY\"] = getpass.getpass(\"Enter your Fireworks API key: \")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "c84fb993",
|
||||
"metadata": {},
|
||||
"source": "To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"id": "39a4953b",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
|
||||
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "d9664366",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Installation\n",
|
||||
"\n",
|
||||
"The LangChain Fireworks integration lives in the `langchain-fireworks` package:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "64853226",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%pip install -qU langchain-fireworks"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "45dd1724",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Instantiation\n",
|
||||
"\n",
|
||||
"Now we can instantiate our model object and generate chat completions:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"id": "9ea7a09b",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_fireworks import FireworksEmbeddings\n",
|
||||
"\n",
|
||||
"embeddings = FireworksEmbeddings(\n",
|
||||
" model=\"nomic-ai/nomic-embed-text-v1.5\",\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "77d271b6",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Indexing and Retrieval\n",
|
||||
"\n",
|
||||
"Embedding models are often used in retrieval-augmented generation (RAG) flows, both as part of indexing data as well as later retrieving it. For more detailed instructions, please see our [RAG tutorials](/docs/tutorials/).\n",
|
||||
"\n",
|
||||
"Below, see how to index and retrieve data using the `embeddings` object we initialized above. In this example, we will index and retrieve a sample document in the `InMemoryVectorStore`."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"id": "d817716b",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'LangChain is the framework for building context-aware reasoning applications'"
|
||||
]
|
||||
},
|
||||
"execution_count": 5,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# Create a vector store with a sample text\n",
|
||||
"from langchain_core.vectorstores import InMemoryVectorStore\n",
|
||||
"\n",
|
||||
"text = \"LangChain is the framework for building context-aware reasoning applications\"\n",
|
||||
"\n",
|
||||
"vectorstore = InMemoryVectorStore.from_texts(\n",
|
||||
" [text],\n",
|
||||
" embedding=embeddings,\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"# Use the vectorstore as a retriever\n",
|
||||
"retriever = vectorstore.as_retriever()\n",
|
||||
"\n",
|
||||
"# Retrieve the most similar text\n",
|
||||
"retrieved_documents = retriever.invoke(\"What is LangChain?\")\n",
|
||||
"\n",
|
||||
"# show the retrieved document's content\n",
|
||||
"retrieved_documents[0].page_content"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "e02b9855",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Direct Usage\n",
|
||||
"\n",
|
||||
"Under the hood, the vectorstore and retriever implementations are calling `embeddings.embed_documents(...)` and `embeddings.embed_query(...)` to create embeddings for the text(s) used in `from_texts` and retrieval `invoke` operations, respectively.\n",
|
||||
"\n",
|
||||
"You can directly call these methods to get embeddings for your own use cases.\n",
|
||||
"\n",
|
||||
"### Embed single texts\n",
|
||||
"\n",
|
||||
"You can embed single texts or documents with `embed_query`:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"id": "0d2befcd",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"[0.01666259765625, 0.011688232421875, -0.1181640625, -0.10205078125, 0.05438232421875, -0.0890502929\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"single_vector = embeddings.embed_query(text)\n",
|
||||
"print(str(single_vector)[:100]) # Show the first 100 characters of the vector"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "1b5a7d03",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Embed multiple texts\n",
|
||||
"\n",
|
||||
"You can embed multiple texts with `embed_documents`:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"id": "2f4d6e97",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"[0.016632080078125, 0.01165008544921875, -0.1181640625, -0.10186767578125, 0.05438232421875, -0.0890\n",
|
||||
"[-0.02667236328125, 0.036651611328125, -0.1630859375, -0.0904541015625, -0.022430419921875, -0.09545\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"text2 = (\n",
|
||||
" \"LangGraph is a library for building stateful, multi-actor applications with LLMs\"\n",
|
||||
")\n",
|
||||
"two_vectors = embeddings.embed_documents([text, text2])\n",
|
||||
"for vector in two_vectors:\n",
|
||||
" print(str(vector)[:100]) # Show the first 100 characters of the vector"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "3fba556a-b53d-431c-b0c6-ffb1e2fa5a6e",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## API Reference\n",
|
||||
"\n",
|
||||
"For detailed documentation of all `FireworksEmbeddings` features and configurations head to the [API reference](https://python.langchain.com/api_reference/fireworks/embeddings/langchain_fireworks.embeddings.FireworksEmbeddings.html)."
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.11.4"
|
||||
}
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "raw",
|
||||
"id": "afaf8039",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"---\n",
|
||||
"sidebar_label: Fireworks\n",
|
||||
"---"
|
||||
]
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "9a3d6f34",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# FireworksEmbeddings\n",
|
||||
"\n",
|
||||
"This will help you get started with Fireworks embedding models using LangChain. For detailed documentation on `FireworksEmbeddings` features and configuration options, please refer to the [API reference](https://python.langchain.com/api_reference/fireworks/embeddings/langchain_fireworks.embeddings.FireworksEmbeddings.html).\n",
|
||||
"\n",
|
||||
"## Overview\n",
|
||||
"\n",
|
||||
"### Integration details\n",
|
||||
"\n",
|
||||
"import { ItemTable } from \"@theme/FeatureTables\";\n",
|
||||
"\n",
|
||||
"<ItemTable category=\"text_embedding\" item=\"Fireworks\" />\n",
|
||||
"\n",
|
||||
"## Setup\n",
|
||||
"\n",
|
||||
"To access Fireworks embedding models you'll need to create a Fireworks account, get an API key, and install the `langchain-fireworks` integration package.\n",
|
||||
"\n",
|
||||
"### Credentials\n",
|
||||
"\n",
|
||||
"Head to [fireworks.ai](https://fireworks.ai/) to sign up to Fireworks and generate an API key. Once you’ve done this set the FIREWORKS_API_KEY environment variable:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"id": "36521c2a",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import getpass\n",
|
||||
"import os\n",
|
||||
"\n",
|
||||
"if not os.getenv(\"FIREWORKS_API_KEY\"):\n",
|
||||
" os.environ[\"FIREWORKS_API_KEY\"] = getpass.getpass(\"Enter your Fireworks API key: \")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "c84fb993",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"id": "39a4953b",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
|
||||
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "d9664366",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Installation\n",
|
||||
"\n",
|
||||
"The LangChain Fireworks integration lives in the `langchain-fireworks` package:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "64853226",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%pip install -qU langchain-fireworks"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "45dd1724",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Instantiation\n",
|
||||
"\n",
|
||||
"Now we can instantiate our model object and generate chat completions:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"id": "9ea7a09b",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_fireworks import FireworksEmbeddings\n",
|
||||
"\n",
|
||||
"embeddings = FireworksEmbeddings(\n",
|
||||
" model=\"nomic-ai/nomic-embed-text-v1.5\",\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "77d271b6",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Indexing and Retrieval\n",
|
||||
"\n",
|
||||
"Embedding models are often used in retrieval-augmented generation (RAG) flows, both as part of indexing data as well as later retrieving it. For more detailed instructions, please see our [RAG tutorials](/docs/tutorials/rag).\n",
|
||||
"\n",
|
||||
"Below, see how to index and retrieve data using the `embeddings` object we initialized above. In this example, we will index and retrieve a sample document in the `InMemoryVectorStore`."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"id": "d817716b",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'LangChain is the framework for building context-aware reasoning applications'"
|
||||
]
|
||||
},
|
||||
"execution_count": 5,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# Create a vector store with a sample text\n",
|
||||
"from langchain_core.vectorstores import InMemoryVectorStore\n",
|
||||
"\n",
|
||||
"text = \"LangChain is the framework for building context-aware reasoning applications\"\n",
|
||||
"\n",
|
||||
"vectorstore = InMemoryVectorStore.from_texts(\n",
|
||||
" [text],\n",
|
||||
" embedding=embeddings,\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"# Use the vectorstore as a retriever\n",
|
||||
"retriever = vectorstore.as_retriever()\n",
|
||||
"\n",
|
||||
"# Retrieve the most similar text\n",
|
||||
"retrieved_documents = retriever.invoke(\"What is LangChain?\")\n",
|
||||
"\n",
|
||||
"# show the retrieved document's content\n",
|
||||
"retrieved_documents[0].page_content"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "e02b9855",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Direct Usage\n",
|
||||
"\n",
|
||||
"Under the hood, the vectorstore and retriever implementations are calling `embeddings.embed_documents(...)` and `embeddings.embed_query(...)` to create embeddings for the text(s) used in `from_texts` and retrieval `invoke` operations, respectively.\n",
|
||||
"\n",
|
||||
"You can directly call these methods to get embeddings for your own use cases.\n",
|
||||
"\n",
|
||||
"### Embed single texts\n",
|
||||
"\n",
|
||||
"You can embed single texts or documents with `embed_query`:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"id": "0d2befcd",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"[0.01666259765625, 0.011688232421875, -0.1181640625, -0.10205078125, 0.05438232421875, -0.0890502929\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"single_vector = embeddings.embed_query(text)\n",
|
||||
"print(str(single_vector)[:100]) # Show the first 100 characters of the vector"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "1b5a7d03",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Embed multiple texts\n",
|
||||
"\n",
|
||||
"You can embed multiple texts with `embed_documents`:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"id": "2f4d6e97",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"[0.016632080078125, 0.01165008544921875, -0.1181640625, -0.10186767578125, 0.05438232421875, -0.0890\n",
|
||||
"[-0.02667236328125, 0.036651611328125, -0.1630859375, -0.0904541015625, -0.022430419921875, -0.09545\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"text2 = (\n",
|
||||
" \"LangGraph is a library for building stateful, multi-actor applications with LLMs\"\n",
|
||||
")\n",
|
||||
"two_vectors = embeddings.embed_documents([text, text2])\n",
|
||||
"for vector in two_vectors:\n",
|
||||
" print(str(vector)[:100]) # Show the first 100 characters of the vector"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "3fba556a-b53d-431c-b0c6-ffb1e2fa5a6e",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## API Reference\n",
|
||||
"\n",
|
||||
"For detailed documentation of all `FireworksEmbeddings` features and configurations head to the [API reference](https://python.langchain.com/api_reference/fireworks/embeddings/langchain_fireworks.embeddings.FireworksEmbeddings.html)."
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.9.6"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
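Read end to end, the Fireworks notebook above amounts to one short flow: set the API key, instantiate the embeddings, index a text, and retrieve it. A minimal consolidated sketch, assuming `langchain-fireworks` is installed and `FIREWORKS_API_KEY` is set in the environment (all names come from the notebook cells themselves):

```python
# End-to-end sketch of the flow the Fireworks notebook walks through.
# Assumes FIREWORKS_API_KEY is already set in the environment.
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_fireworks import FireworksEmbeddings

embeddings = FireworksEmbeddings(model="nomic-ai/nomic-embed-text-v1.5")

# Index a single sample document, then retrieve it by semantic similarity.
vectorstore = InMemoryVectorStore.from_texts(
    ["LangChain is the framework for building context-aware reasoning applications"],
    embedding=embeddings,
)
retriever = vectorstore.as_retriever()
print(retriever.invoke("What is LangChain?")[0].page_content)
```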
@@ -173,7 +173,7 @@
"source": [
"## Indexing and Retrieval\n",
"\n",
"Embedding models are often used in retrieval-augmented generation (RAG) flows, both as part of indexing data as well as later retrieving it. For more detailed instructions, please see our [RAG tutorials](/docs/tutorials/).\n",
"Embedding models are often used in retrieval-augmented generation (RAG) flows, both as part of indexing data as well as later retrieving it. For more detailed instructions, please see our [RAG tutorials](/docs/tutorials/rag).\n",
"\n",
"Below, see how to index and retrieve data using the `embeddings` object we initialized above. In this example, we will index and retrieve a sample document in the `InMemoryVectorStore`."
]

@@ -167,7 +167,7 @@
"source": [
"## Indexing and Retrieval\n",
"\n",
"Embedding models are often used in retrieval-augmented generation (RAG) flows, both as part of indexing data as well as later retrieving it. For more detailed instructions, please see our [RAG tutorials](/docs/tutorials/).\n",
"Embedding models are often used in retrieval-augmented generation (RAG) flows, both as part of indexing data as well as later retrieving it. For more detailed instructions, please see our [RAG tutorials](/docs/tutorials/rag).\n",
"\n",
"Below, see how to index and retrieve data using the `embeddings` object we initialized above. In this example, we will index and retrieve a sample document in the `InMemoryVectorStore`."
]

@@ -203,7 +203,7 @@
"source": [
"## Indexing and Retrieval\n",
"\n",
"Embedding models are often used in retrieval-augmented generation (RAG) flows, both as part of indexing data as well as later retrieving it. For more detailed instructions, please see our [RAG tutorials](/docs/tutorials/).\n",
"Embedding models are often used in retrieval-augmented generation (RAG) flows, both as part of indexing data as well as later retrieving it. For more detailed instructions, please see our [RAG tutorials](/docs/tutorials/rag).\n",
"\n",
"Below, see how to index and retrieve data using the `embeddings` object we initialized above. In this example, we will index and retrieve a sample document in the `InMemoryVectorStore`."
]
@@ -327,7 +327,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "langchain_ibm",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@@ -341,9 +341,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.12"
"version": "3.9.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 4
}

@@ -132,7 +132,7 @@
"source": [
"## Indexing and Retrieval\n",
"\n",
"Embedding models are often used in retrieval-augmented generation (RAG) flows, both as part of indexing data as well as later retrieving it. For more detailed instructions, please see our [RAG tutorials](/docs/tutorials/).\n",
"Embedding models are often used in retrieval-augmented generation (RAG) flows, both as part of indexing data as well as later retrieving it. For more detailed instructions, please see our [RAG tutorials](/docs/tutorials/rag).\n",
"\n",
"Below, see how to index and retrieve data using the `embeddings` object we initialized above. In this example, we will index and retrieve a sample document in the `InMemoryVectorStore`."
]
@@ -286,7 +286,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.5"
"version": "3.9.6"
}
},
"nbformat": 4,

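The small hunks above all make the same one-line link fix, `/docs/tutorials/` to `/docs/tutorials/rag`, across several integration notebooks. A throwaway sketch of how any remaining occurrences could be located (the `docs/docs/integrations` path is an assumption about the repo layout, not something named in this diff):

```python
# List notebooks that still link to the old tutorials index page.
from pathlib import Path

needle = "[RAG tutorials](/docs/tutorials/)"
for nb_path in Path("docs/docs/integrations").rglob("*.ipynb"):
    if needle in nb_path.read_text(encoding="utf-8"):
        print(nb_path)
```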
@@ -1,264 +1,266 @@
{
"cells": [
{
"cell_type": "raw",
"id": "afaf8039",
"metadata": {},
"source": [
"---\n",
"sidebar_label: MistralAI\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "9a3d6f34",
"metadata": {},
"source": [
"# MistralAIEmbeddings\n",
"\n",
"This will help you get started with MistralAI embedding models using LangChain. For detailed documentation on `MistralAIEmbeddings` features and configuration options, please refer to the [API reference](https://python.langchain.com/api_reference/mistralai/embeddings/langchain_mistralai.embeddings.MistralAIEmbeddings.html).\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"import { ItemTable } from \"@theme/FeatureTables\";\n",
"\n",
"<ItemTable category=\"text_embedding\" item=\"MistralAI\" />\n",
"\n",
"## Setup\n",
"\n",
"To access MistralAI embedding models you'll need to create a/an MistralAI account, get an API key, and install the `langchain-mistralai` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"Head to [https://console.mistral.ai/](https://console.mistral.ai/) to sign up to MistralAI and generate an API key. Once you've done this set the MISTRALAI_API_KEY environment variable:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "36521c2a",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if not os.getenv(\"MISTRALAI_API_KEY\"):\n",
" os.environ[\"MISTRALAI_API_KEY\"] = getpass.getpass(\"Enter your MistralAI API key: \")"
]
},
{
"cell_type": "markdown",
"id": "c84fb993",
"metadata": {},
"source": "To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
"source": [
"To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "39a4953b",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
]
},
{
"cell_type": "markdown",
"id": "d9664366",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain MistralAI integration lives in the `langchain-mistralai` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "64853226",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-mistralai"
]
},
{
"cell_type": "markdown",
"id": "45dd1724",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "9ea7a09b",
"metadata": {},
"outputs": [],
"source": [
"from langchain_mistralai import MistralAIEmbeddings\n",
"\n",
"embeddings = MistralAIEmbeddings(\n",
" model=\"mistral-embed\",\n",
")"
]
},
{
"cell_type": "markdown",
"id": "77d271b6",
"metadata": {},
"source": [
"## Indexing and Retrieval\n",
"\n",
"Embedding models are often used in retrieval-augmented generation (RAG) flows, both as part of indexing data as well as later retrieving it. For more detailed instructions, please see our [RAG tutorials](/docs/tutorials/).\n",
"Embedding models are often used in retrieval-augmented generation (RAG) flows, both as part of indexing data as well as later retrieving it. For more detailed instructions, please see our [RAG tutorials](/docs/tutorials/rag).\n",
"\n",
"Below, see how to index and retrieve data using the `embeddings` object we initialized above. In this example, we will index and retrieve a sample document in the `InMemoryVectorStore`."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "d817716b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'LangChain is the framework for building context-aware reasoning applications'"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Create a vector store with a sample text\n",
"from langchain_core.vectorstores import InMemoryVectorStore\n",
"\n",
"text = \"LangChain is the framework for building context-aware reasoning applications\"\n",
"\n",
"vectorstore = InMemoryVectorStore.from_texts(\n",
" [text],\n",
" embedding=embeddings,\n",
")\n",
"\n",
"# Use the vectorstore as a retriever\n",
"retriever = vectorstore.as_retriever()\n",
"\n",
"# Retrieve the most similar text\n",
"retrieved_documents = retriever.invoke(\"What is LangChain?\")\n",
"\n",
"# show the retrieved document's content\n",
"retrieved_documents[0].page_content"
]
},
{
"cell_type": "markdown",
"id": "e02b9855",
"metadata": {},
"source": [
"## Direct Usage\n",
"\n",
"Under the hood, the vectorstore and retriever implementations are calling `embeddings.embed_documents(...)` and `embeddings.embed_query(...)` to create embeddings for the text(s) used in `from_texts` and retrieval `invoke` operations, respectively.\n",
"\n",
"You can directly call these methods to get embeddings for your own use cases.\n",
"\n",
"### Embed single texts\n",
"\n",
"You can embed single texts or documents with `embed_query`:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "0d2befcd",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[-0.04443359375, 0.01885986328125, 0.018035888671875, -0.00864410400390625, 0.049652099609375, -0.00\n"
]
}
],
"source": [
"single_vector = embeddings.embed_query(text)\n",
"print(str(single_vector)[:100]) # Show the first 100 characters of the vector"
]
},
{
"cell_type": "markdown",
"id": "1b5a7d03",
"metadata": {},
"source": [
"### Embed multiple texts\n",
"\n",
"You can embed multiple texts with `embed_documents`:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "2f4d6e97",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[-0.04443359375, 0.01885986328125, 0.0180511474609375, -0.0086517333984375, 0.049652099609375, -0.00\n",
"[-0.02032470703125, 0.02606201171875, 0.051605224609375, -0.0281982421875, 0.055755615234375, 0.0019\n"
]
}
],
"source": [
"text2 = (\n",
" \"LangGraph is a library for building stateful, multi-actor applications with LLMs\"\n",
")\n",
"two_vectors = embeddings.embed_documents([text, text2])\n",
"for vector in two_vectors:\n",
" print(str(vector)[:100]) # Show the first 100 characters of the vector"
]
},
{
"cell_type": "markdown",
"id": "98785c12",
"metadata": {},
"source": [
"## API Reference\n",
"\n",
"For detailed documentation on `MistralAIEmbeddings` features and configuration options, please refer to the [API reference](https://python.langchain.com/api_reference/mistralai/embeddings/langchain_mistralai.embeddings.MistralAIEmbeddings.html).\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
"version": "3.9.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
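The Direct Usage cells in the MistralAI notebook return embeddings as plain lists of floats, so query-document similarity can be computed without any vector store. A minimal sketch, reusing the `embeddings`, `text`, and `text2` objects from the cells above (the `cosine` helper is illustrative, not part of the library):

```python
# Rank documents against a query by cosine similarity over the raw
# vectors returned by embed_query / embed_documents.
import math


def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


query_vec = embeddings.embed_query("What is LangChain?")
for doc, vec in zip([text, text2], embeddings.embed_documents([text, text2])):
    print(f"{cosine(query_vec, vec):.3f}  {doc[:60]}")
```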
@@ -128,7 +128,7 @@
"source": [
"## Indexing and Retrieval\n",
"\n",
"Embedding models are often used in retrieval-augmented generation (RAG) flows, both as part of indexing data as well as later retrieving it. For more detailed instructions, please see our [RAG tutorials](/docs/tutorials/).\n",
"Embedding models are often used in retrieval-augmented generation (RAG) flows, both as part of indexing data as well as later retrieving it. For more detailed instructions, please see our [RAG tutorials](/docs/tutorials/rag).\n",
"\n",
"Below, see how to index and retrieve data using the `embeddings` object we initialized above. In this example, we will index and retrieve a sample document in the `InMemoryVectorStore`."
]
@@ -277,7 +277,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.16"
"version": "3.9.6"
}
},
"nbformat": 4,

@@ -112,7 +112,7 @@
"source": [
"## Indexing and Retrieval\n",
"\n",
"Embedding models are often used in retrieval-augmented generation (RAG) flows, both as part of indexing data as well as later retrieving it. For more detailed instructions, please see our [RAG tutorials](/docs/tutorials/).\n",
"Embedding models are often used in retrieval-augmented generation (RAG) flows, both as part of indexing data as well as later retrieving it. For more detailed instructions, please see our [RAG tutorials](/docs/tutorials/rag).\n",
"\n",
"Below, see how to index and retrieve data using the `embeddings` object we initialized above. In this example, we will index and retrieve a sample document in the `InMemoryVectorStore`."
]
@@ -249,7 +249,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.3"
"version": "3.9.6"
}
},
"nbformat": 4,

@@ -37,6 +37,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"id": "36521c2a",
"metadata": {
"ExecuteTime": {
@@ -44,15 +45,14 @@
"start_time": "2025-03-20T01:53:27.764291Z"
}
},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if not os.getenv(\"NETMIND_API_KEY\"):\n",
" os.environ[\"NETMIND_API_KEY\"] = getpass.getpass(\"Enter your Netmind API key: \")"
],
"outputs": [],
"execution_count": 1
]
},
{
"cell_type": "markdown",
@@ -64,6 +64,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"id": "39a4953b",
"metadata": {
"ExecuteTime": {
@@ -71,12 +72,11 @@
"start_time": "2025-03-20T01:53:32.141858Z"
}
},
"outputs": [],
"source": [
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
],
"outputs": [],
"execution_count": 2
]
},
{
"cell_type": "markdown",
@@ -90,6 +90,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"id": "64853226",
"metadata": {
"ExecuteTime": {
@@ -97,22 +98,21 @@
"start_time": "2025-03-20T01:53:36.171640Z"
}
},
"source": [
"%pip install -qU langchain-netmind"
],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\r\n",
"\u001B[1m[\u001B[0m\u001B[34;49mnotice\u001B[0m\u001B[1;39;49m]\u001B[0m\u001B[39;49m A new release of pip is available: \u001B[0m\u001B[31;49m24.0\u001B[0m\u001B[39;49m -> \u001B[0m\u001B[32;49m25.0.1\u001B[0m\r\n",
"\u001B[1m[\u001B[0m\u001B[34;49mnotice\u001B[0m\u001B[1;39;49m]\u001B[0m\u001B[39;49m To update, run: \u001B[0m\u001B[32;49mpip install --upgrade pip\u001B[0m\r\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m24.0\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m25.0.1\u001b[0m\r\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\r\n",
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"execution_count": 3
"source": [
"%pip install -qU langchain-netmind"
]
},
{
"cell_type": "markdown",
@@ -126,6 +126,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"id": "9ea7a09b",
"metadata": {
"ExecuteTime": {
@@ -133,15 +134,14 @@
"start_time": "2025-03-20T01:54:30.146876Z"
}
},
"outputs": [],
"source": [
"from langchain_netmind import NetmindEmbeddings\n",
"\n",
"embeddings = NetmindEmbeddings(\n",
" model=\"nvidia/NV-Embed-v2\",\n",
")"
],
"outputs": [],
"execution_count": 4
]
},
{
"cell_type": "markdown",
@@ -150,13 +150,14 @@
"source": [
"## Indexing and Retrieval\n",
"\n",
"Embedding models are often used in retrieval-augmented generation (RAG) flows, both as part of indexing data as well as later retrieving it. For more detailed instructions, please see our [RAG tutorials](/docs/tutorials/).\n",
"Embedding models are often used in retrieval-augmented generation (RAG) flows, both as part of indexing data as well as later retrieving it. For more detailed instructions, please see our [RAG tutorials](/docs/tutorials/rag).\n",
"\n",
"Below, see how to index and retrieve data using the `embeddings` object we initialized above. In this example, we will index and retrieve a sample document in the `InMemoryVectorStore`."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "d817716b",
"metadata": {
"ExecuteTime": {
@@ -164,6 +165,18 @@
"start_time": "2025-03-20T01:54:34.500805Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"'LangChain is the framework for building context-aware reasoning applications'"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Create a vector store with a sample text\n",
"from langchain_core.vectorstores import InMemoryVectorStore\n",
@@ -183,20 +196,7 @@
"\n",
"# show the retrieved document's content\n",
"retrieved_documents[0].page_content"
],
"outputs": [
{
"data": {
"text/plain": [
"'LangChain is the framework for building context-aware reasoning applications'"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"execution_count": 5
]
},
{
"cell_type": "markdown",
@@ -216,6 +216,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"id": "0d2befcd",
"metadata": {
"ExecuteTime": {
@@ -223,10 +224,6 @@
"start_time": "2025-03-20T01:54:45.196528Z"
}
},
"source": [
"single_vector = embeddings.embed_query(text)\n",
"print(str(single_vector)[:100]) # Show the first 100 characters of the vector"
],
"outputs": [
{
"name": "stdout",
@@ -236,7 +233,10 @@
]
}
],
"execution_count": 6
"source": [
"single_vector = embeddings.embed_query(text)\n",
"print(str(single_vector)[:100]) # Show the first 100 characters of the vector"
]
},
{
"cell_type": "markdown",
@@ -250,6 +250,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"id": "2f4d6e97",
"metadata": {
"ExecuteTime": {
@@ -257,14 +258,6 @@
"start_time": "2025-03-20T01:54:52.468719Z"
}
},
"source": [
"text2 = (\n",
" \"LangGraph is a library for building stateful, multi-actor applications with LLMs\"\n",
")\n",
"two_vectors = embeddings.embed_documents([text, text2])\n",
"for vector in two_vectors:\n",
" print(str(vector)[:100]) # Show the first 100 characters of the vector"
],
"outputs": [
{
"name": "stdout",
@@ -275,7 +268,14 @@
]
}
],
"execution_count": 7
"source": [
"text2 = (\n",
" \"LangGraph is a library for building stateful, multi-actor applications with LLMs\"\n",
")\n",
"two_vectors = embeddings.embed_documents([text, text2])\n",
"for vector in two_vectors:\n",
" print(str(vector)[:100]) # Show the first 100 characters of the vector"
]
},
{
"cell_type": "markdown",
@@ -291,12 +291,12 @@
]
},
{
"metadata": {},
"cell_type": "code",
"outputs": [],
"execution_count": null,
"source": "",
"id": "adb9e45c34733299"
"id": "adb9e45c34733299",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@@ -315,7 +315,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.5"
"version": "3.9.6"
}
},
"nbformat": 4,
@@ -1,285 +1,287 @@
{
"cells": [
{
"cell_type": "raw",
"id": "afaf8039",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Nomic\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "9a3d6f34",
"metadata": {},
"source": [
"# NomicEmbeddings\n",
"\n",
"This will help you get started with Nomic embedding models using LangChain. For detailed documentation on `NomicEmbeddings` features and configuration options, please refer to the [API reference](https://python.langchain.com/api_reference/nomic/embeddings/langchain_nomic.embeddings.NomicEmbeddings.html).\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"import { ItemTable } from \"@theme/FeatureTables\";\n",
"\n",
"<ItemTable category=\"text_embedding\" item=\"Nomic\" />\n",
"\n",
"## Setup\n",
"\n",
"To access Nomic embedding models you'll need to create a/an Nomic account, get an API key, and install the `langchain-nomic` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"Head to [https://atlas.nomic.ai/](https://atlas.nomic.ai/) to sign up to Nomic and generate an API key. Once you've done this set the `NOMIC_API_KEY` environment variable:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "36521c2a",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if not os.getenv(\"NOMIC_API_KEY\"):\n",
" os.environ[\"NOMIC_API_KEY\"] = getpass.getpass(\"Enter your Nomic API key: \")"
]
},
{
"cell_type": "markdown",
"id": "c84fb993",
"metadata": {},
"source": "To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
"source": [
"To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "39a4953b",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
]
},
{
"cell_type": "markdown",
"id": "d9664366",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain Nomic integration lives in the `langchain-nomic` package:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "64853226",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install -qU langchain-nomic"
]
},
{
"cell_type": "markdown",
"id": "45dd1724",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "9ea7a09b",
"metadata": {},
"outputs": [],
"source": [
"from langchain_nomic import NomicEmbeddings\n",
"\n",
"embeddings = NomicEmbeddings(\n",
" model=\"nomic-embed-text-v1.5\",\n",
" # dimensionality=256,\n",
" # Nomic's `nomic-embed-text-v1.5` model was [trained with Matryoshka learning](https://blog.nomic.ai/posts/nomic-embed-matryoshka)\n",
" # to enable variable-length embeddings with a single model.\n",
" # This means that you can specify the dimensionality of the embeddings at inference time.\n",
" # The model supports dimensionality from 64 to 768.\n",
" # inference_mode=\"remote\",\n",
" # One of `remote`, `local` (Embed4All), or `dynamic` (automatic). Defaults to `remote`.\n",
" # api_key=... , # if using remote inference,\n",
" # device=\"cpu\",\n",
" # The device to use for local embeddings. Choices include\n",
" # `cpu`, `gpu`, `nvidia`, `amd`, or a specific device name. See\n",
" # the docstring for `GPT4All.__init__` for more info. Typically\n",
" # defaults to CPU. Do not use on macOS.\n",
")"
]
},
{
"cell_type": "markdown",
"id": "77d271b6",
"metadata": {},
"source": [
"## Indexing and Retrieval\n",
"\n",
"Embedding models are often used in retrieval-augmented generation (RAG) flows, both as part of indexing data as well as later retrieving it. For more detailed instructions, please see our [RAG tutorials](/docs/tutorials/).\n",
"Embedding models are often used in retrieval-augmented generation (RAG) flows, both as part of indexing data as well as later retrieving it. For more detailed instructions, please see our [RAG tutorials](/docs/tutorials/rag).\n",
"\n",
"Below, see how to index and retrieve data using the `embeddings` object we initialized above. In this example, we will index and retrieve a sample document in the `InMemoryVectorStore`."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "d817716b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'LangChain is the framework for building context-aware reasoning applications'"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Create a vector store with a sample text\n",
"from langchain_core.vectorstores import InMemoryVectorStore\n",
"\n",
"text = \"LangChain is the framework for building context-aware reasoning applications\"\n",
"\n",
"vectorstore = InMemoryVectorStore.from_texts(\n",
" [text],\n",
" embedding=embeddings,\n",
")\n",
"\n",
"# Use the vectorstore as a retriever\n",
"retriever = vectorstore.as_retriever()\n",
"\n",
"# Retrieve the most similar text\n",
"retrieved_documents = retriever.invoke(\"What is LangChain?\")\n",
|
||||
"\n",
|
||||
"# show the retrieved document's content\n",
|
||||
"retrieved_documents[0].page_content"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "e02b9855",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Direct Usage\n",
|
||||
"\n",
|
||||
"Under the hood, the vectorstore and retriever implementations are calling `embeddings.embed_documents(...)` and `embeddings.embed_query(...)` to create embeddings for the text(s) used in `from_texts` and retrieval `invoke` operations, respectively.\n",
|
||||
"\n",
|
||||
"You can directly call these methods to get embeddings for your own use cases.\n",
|
||||
"\n",
|
||||
"### Embed single texts\n",
|
||||
"\n",
|
||||
"You can embed single texts or documents with `embed_query`:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"id": "0d2befcd",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"[0.024642944, 0.029083252, -0.14013672, -0.09082031, 0.058898926, -0.07489014, -0.0138168335, 0.0037\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"single_vector = embeddings.embed_query(text)\n",
|
||||
"print(str(single_vector)[:100]) # Show the first 100 characters of the vector"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "1b5a7d03",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Embed multiple texts\n",
|
||||
"\n",
|
||||
"You can embed multiple texts with `embed_documents`:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"id": "2f4d6e97",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"[0.012771606, 0.023727417, -0.12365723, -0.083740234, 0.06530762, -0.07110596, -0.021896362, -0.0068\n",
|
||||
"[-0.019058228, 0.04058838, -0.15222168, -0.06842041, -0.012130737, -0.07128906, -0.04534912, 0.00522\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"text2 = (\n",
|
||||
" \"LangGraph is a library for building stateful, multi-actor applications with LLMs\"\n",
|
||||
")\n",
|
||||
"two_vectors = embeddings.embed_documents([text, text2])\n",
|
||||
"for vector in two_vectors:\n",
|
||||
" print(str(vector)[:100]) # Show the first 100 characters of the vector"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "98785c12",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## API Reference\n",
|
||||
"\n",
|
||||
"For detailed documentation on `NomicEmbeddings` features and configuration options, please refer to the [API reference](https://python.langchain.com/api_reference/nomic/embeddings/langchain_nomic.embeddings.NomicEmbeddings.html).\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.9.6"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
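
The `dimensionality` argument shown commented out in the instantiation cell above is worth calling out: because `nomic-embed-text-v1.5` was trained with Matryoshka learning, the same model can return shorter vectors at inference time. A minimal sketch, assuming `langchain-nomic` is installed and `NOMIC_API_KEY` is set:

```python
from langchain_nomic import NomicEmbeddings

# Request truncated 256-dimensional vectors from the same model.
# The notebook above notes that dimensionality from 64 to 768 is supported.
small_embeddings = NomicEmbeddings(
    model="nomic-embed-text-v1.5",
    dimensionality=256,
)

vector = small_embeddings.embed_query("What is LangChain?")
print(len(vector))  # expected: 256
```

Shorter vectors trade a little retrieval quality for smaller indexes and faster similarity search, which is the usual reason to dial `dimensionality` down.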

@@ -1,270 +1,272 @@
{
 "cells": [
  {
   "cell_type": "raw",
   "id": "afaf8039",
   "metadata": {},
   "source": [
    "---\n",
    "sidebar_label: OpenAI\n",
    "keywords: [openaiembeddings]\n",
    "---"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9a3d6f34",
   "metadata": {},
   "source": [
    "# OpenAIEmbeddings\n",
    "\n",
    "This will help you get started with OpenAI embedding models using LangChain. For detailed documentation on `OpenAIEmbeddings` features and configuration options, please refer to the [API reference](https://python.langchain.com/api_reference/openai/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html).\n",
    "\n",
    "## Overview\n",
    "### Integration details\n",
    "\n",
    "import { ItemTable } from \"@theme/FeatureTables\";\n",
    "\n",
    "<ItemTable category=\"text_embedding\" item=\"OpenAI\" />\n",
    "\n",
    "## Setup\n",
    "\n",
    "To access OpenAI embedding models you'll need to create an OpenAI account, get an API key, and install the `langchain-openai` integration package.\n",
    "\n",
    "### Credentials\n",
    "\n",
    "Head to [platform.openai.com](https://platform.openai.com) to sign up for OpenAI and generate an API key. Once you’ve done this, set the OPENAI_API_KEY environment variable:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "36521c2a",
   "metadata": {},
   "outputs": [],
   "source": [
    "import getpass\n",
    "import os\n",
    "\n",
    "if not os.getenv(\"OPENAI_API_KEY\"):\n",
    "    os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"Enter your OpenAI API key: \")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c84fb993",
   "metadata": {},
   "source": "To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
   "source": [
    "To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "39a4953b",
   "metadata": {},
   "outputs": [],
   "source": [
    "# os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
    "# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d9664366",
   "metadata": {},
   "source": [
    "### Installation\n",
    "\n",
    "The LangChain OpenAI integration lives in the `langchain-openai` package:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "64853226",
   "metadata": {},
   "outputs": [],
   "source": [
    "%pip install -qU langchain-openai"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "45dd1724",
   "metadata": {},
   "source": [
    "## Instantiation\n",
    "\n",
    "Now we can instantiate our model object and generate embeddings:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "9ea7a09b",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_openai import OpenAIEmbeddings\n",
    "\n",
    "embeddings = OpenAIEmbeddings(\n",
    "    model=\"text-embedding-3-large\",\n",
    "    # With the `text-embedding-3` class\n",
    "    # of models, you can specify the size\n",
    "    # of the embeddings you want returned.\n",
    "    # dimensions=1024\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "77d271b6",
   "metadata": {},
   "source": [
    "## Indexing and Retrieval\n",
    "\n",
    "Embedding models are often used in retrieval-augmented generation (RAG) flows, both as part of indexing data as well as later retrieving it. For more detailed instructions, please see our [RAG tutorials](/docs/tutorials/).\n",
    "Embedding models are often used in retrieval-augmented generation (RAG) flows, both as part of indexing data as well as later retrieving it. For more detailed instructions, please see our [RAG tutorials](/docs/tutorials/rag).\n",
    "\n",
    "Below, see how to index and retrieve data using the `embeddings` object we initialized above. In this example, we will index and retrieve a sample document in the `InMemoryVectorStore`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "d817716b",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'LangChain is the framework for building context-aware reasoning applications'"
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Create a vector store with a sample text\n",
    "from langchain_core.vectorstores import InMemoryVectorStore\n",
    "\n",
    "text = \"LangChain is the framework for building context-aware reasoning applications\"\n",
    "\n",
    "vectorstore = InMemoryVectorStore.from_texts(\n",
    "    [text],\n",
    "    embedding=embeddings,\n",
    ")\n",
    "\n",
    "# Use the vectorstore as a retriever\n",
    "retriever = vectorstore.as_retriever()\n",
    "\n",
    "# Retrieve the most similar text\n",
    "retrieved_documents = retriever.invoke(\"What is LangChain?\")\n",
    "\n",
    "# Show the retrieved document's content\n",
    "retrieved_documents[0].page_content"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e02b9855",
   "metadata": {},
   "source": [
    "## Direct Usage\n",
    "\n",
    "Under the hood, the vectorstore and retriever implementations are calling `embeddings.embed_documents(...)` and `embeddings.embed_query(...)` to create embeddings for the text(s) used in `from_texts` and retrieval `invoke` operations, respectively.\n",
    "\n",
    "You can directly call these methods to get embeddings for your own use cases.\n",
    "\n",
    "### Embed single texts\n",
    "\n",
    "You can embed single texts or documents with `embed_query`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "0d2befcd",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[-0.019276829436421394, 0.0037708976306021214, -0.03294256329536438, 0.0037671267054975033, 0.008175\n"
     ]
    }
   ],
   "source": [
    "single_vector = embeddings.embed_query(text)\n",
    "print(str(single_vector)[:100])  # Show the first 100 characters of the vector"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1b5a7d03",
   "metadata": {},
   "source": [
    "### Embed multiple texts\n",
    "\n",
    "You can embed multiple texts with `embed_documents`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "2f4d6e97",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[-0.019260549917817116, 0.0037612367887049913, -0.03291035071015358, 0.003757466096431017, 0.0082049\n",
      "[-0.010181212797760963, 0.023419594392180443, -0.04215526953339577, -0.001532090245746076, -0.023573\n"
     ]
    }
   ],
   "source": [
    "text2 = (\n",
    "    \"LangGraph is a library for building stateful, multi-actor applications with LLMs\"\n",
    ")\n",
    "two_vectors = embeddings.embed_documents([text, text2])\n",
    "for vector in two_vectors:\n",
    "    print(str(vector)[:100])  # Show the first 100 characters of the vector"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "98785c12",
   "metadata": {},
   "source": [
    "## API Reference\n",
    "\n",
    "For detailed documentation on `OpenAIEmbeddings` features and configuration options, please refer to the [API reference](https://python.langchain.com/api_reference/openai/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html).\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.4"
   "version": "3.9.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
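
The commented-out `dimensions` argument above is the equivalent knob for `text-embedding-3` models. A minimal sketch of its effect, assuming `langchain-openai` is installed and `OPENAI_API_KEY` is set:

```python
from langchain_openai import OpenAIEmbeddings

# `text-embedding-3` models accept a `dimensions` argument that
# controls the length of the returned vectors.
full = OpenAIEmbeddings(model="text-embedding-3-large")
small = OpenAIEmbeddings(model="text-embedding-3-large", dimensions=1024)

print(len(full.embed_query("hello")))   # 3072 by default for -large
print(len(small.embed_query("hello")))  # 1024
```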

@@ -133,7 +133,7 @@
   "source": [
    "## Indexing and Retrieval\n",
    "\n",
    "Embedding models are often used in retrieval-augmented generation (RAG) flows, both as part of indexing data as well as later retrieving it. For more detailed instructions, please see our [RAG tutorials](/docs/tutorials/).\n",
    "Embedding models are often used in retrieval-augmented generation (RAG) flows, both as part of indexing data as well as later retrieving it. For more detailed instructions, please see our [RAG tutorials](/docs/tutorials/rag).\n",
    "\n",
    "Below, see how to index and retrieve data using the `embeddings` object we initialized above. In this example, we will index and retrieve a sample document in the `InMemoryVectorStore`."
   ]
@@ -244,7 +244,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.5"
   "version": "3.9.6"
  }
 },
 "nbformat": 4,

@@ -141,7 +141,7 @@
   "source": [
    "## Indexing and Retrieval\n",
    "\n",
    "Embedding models are often used in retrieval-augmented generation (RAG) flows, both as part of indexing data as well as later retrieving it. For more detailed instructions, please see our [RAG tutorials](/docs/tutorials/).\n",
    "Embedding models are often used in retrieval-augmented generation (RAG) flows, both as part of indexing data as well as later retrieving it. For more detailed instructions, please see our [RAG tutorials](/docs/tutorials/rag).\n",
    "\n",
    "Below, see how to index and retrieve data using the `embeddings` object we initialized above. In this example, we will index and retrieve a sample document in the `InMemoryVectorStore`."
   ]
@@ -252,7 +252,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.5"
   "version": "3.9.6"
  }
 },
 "nbformat": 4,

@@ -1,275 +1,277 @@
{
 "cells": [
  {
   "cell_type": "raw",
   "id": "afaf8039",
   "metadata": {},
   "source": [
    "---\n",
    "sidebar_label: Together AI\n",
    "---"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9a3d6f34",
   "metadata": {},
   "source": [
    "# TogetherEmbeddings\n",
    "\n",
    "This will help you get started with Together embedding models using LangChain. For detailed documentation on `TogetherEmbeddings` features and configuration options, please refer to the [API reference](https://python.langchain.com/api_reference/together/embeddings/langchain_together.embeddings.TogetherEmbeddings.html).\n",
    "\n",
    "## Overview\n",
    "### Integration details\n",
    "\n",
    "import { ItemTable } from \"@theme/FeatureTables\";\n",
    "\n",
    "<ItemTable category=\"text_embedding\" item=\"Together\" />\n",
    "\n",
    "## Setup\n",
    "\n",
    "To access Together embedding models you'll need to create a Together account, get an API key, and install the `langchain-together` integration package.\n",
    "\n",
    "### Credentials\n",
    "\n",
    "Head to [https://api.together.xyz/](https://api.together.xyz/) to sign up for Together and generate an API key. Once you've done this, set the TOGETHER_API_KEY environment variable:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "36521c2a",
   "metadata": {},
   "outputs": [],
   "source": [
    "import getpass\n",
    "import os\n",
    "\n",
    "if not os.getenv(\"TOGETHER_API_KEY\"):\n",
    "    os.environ[\"TOGETHER_API_KEY\"] = getpass.getpass(\"Enter your Together API key: \")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c84fb993",
   "metadata": {},
   "source": "To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
   "source": [
    "To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "39a4953b",
   "metadata": {},
   "outputs": [],
   "source": [
    "# os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
    "# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d9664366",
   "metadata": {},
   "source": [
    "### Installation\n",
    "\n",
    "The LangChain Together integration lives in the `langchain-together` package:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "64853226",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m24.0\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m24.2\u001b[0m\n",
      "\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpython -m pip install --upgrade pip\u001b[0m\n",
      "Note: you may need to restart the kernel to use updated packages.\n"
     ]
    }
   ],
   "source": [
    "%pip install -qU langchain-together"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "45dd1724",
   "metadata": {},
   "source": [
    "## Instantiation\n",
    "\n",
    "Now we can instantiate our model object and generate embeddings:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "9ea7a09b",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_together import TogetherEmbeddings\n",
    "\n",
    "embeddings = TogetherEmbeddings(\n",
    "    model=\"togethercomputer/m2-bert-80M-8k-retrieval\",\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "77d271b6",
   "metadata": {},
   "source": [
    "## Indexing and Retrieval\n",
    "\n",
    "Embedding models are often used in retrieval-augmented generation (RAG) flows, both as part of indexing data as well as later retrieving it. For more detailed instructions, please see our [RAG tutorials](/docs/tutorials/).\n",
    "Embedding models are often used in retrieval-augmented generation (RAG) flows, both as part of indexing data as well as later retrieving it. For more detailed instructions, please see our [RAG tutorials](/docs/tutorials/rag).\n",
    "\n",
    "Below, see how to index and retrieve data using the `embeddings` object we initialized above. In this example, we will index and retrieve a sample document in the `InMemoryVectorStore`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "d817716b",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'LangChain is the framework for building context-aware reasoning applications'"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Create a vector store with a sample text\n",
    "from langchain_core.vectorstores import InMemoryVectorStore\n",
    "\n",
    "text = \"LangChain is the framework for building context-aware reasoning applications\"\n",
    "\n",
    "vectorstore = InMemoryVectorStore.from_texts(\n",
    "    [text],\n",
    "    embedding=embeddings,\n",
    ")\n",
    "\n",
    "# Use the vectorstore as a retriever\n",
    "retriever = vectorstore.as_retriever()\n",
    "\n",
    "# Retrieve the most similar text\n",
    "retrieved_documents = retriever.invoke(\"What is LangChain?\")\n",
    "\n",
    "# Show the retrieved document's content\n",
    "retrieved_documents[0].page_content"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e02b9855",
   "metadata": {},
   "source": [
    "## Direct Usage\n",
    "\n",
    "Under the hood, the vectorstore and retriever implementations are calling `embeddings.embed_documents(...)` and `embeddings.embed_query(...)` to create embeddings for the text(s) used in `from_texts` and retrieval `invoke` operations, respectively.\n",
    "\n",
    "You can directly call these methods to get embeddings for your own use cases.\n",
    "\n",
    "### Embed single texts\n",
    "\n",
    "You can embed single texts or documents with `embed_query`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "0d2befcd",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[0.3812227, -0.052848946, -0.10564975, 0.03480297, 0.2878488, 0.0084609175, 0.11605915, 0.05303011, \n"
     ]
    }
   ],
   "source": [
    "single_vector = embeddings.embed_query(text)\n",
    "print(str(single_vector)[:100])  # Show the first 100 characters of the vector"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1b5a7d03",
   "metadata": {},
   "source": [
    "### Embed multiple texts\n",
    "\n",
    "You can embed multiple texts with `embed_documents`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "2f4d6e97",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[0.3812227, -0.052848946, -0.10564975, 0.03480297, 0.2878488, 0.0084609175, 0.11605915, 0.05303011, \n",
      "[0.066308185, -0.032866564, 0.115751594, 0.19082588, 0.14017, -0.26976448, -0.056340694, -0.26923394\n"
     ]
    }
   ],
   "source": [
    "text2 = (\n",
    "    \"LangGraph is a library for building stateful, multi-actor applications with LLMs\"\n",
    ")\n",
    "two_vectors = embeddings.embed_documents([text, text2])\n",
    "for vector in two_vectors:\n",
    "    print(str(vector)[:100])  # Show the first 100 characters of the vector"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "98785c12",
   "metadata": {},
   "source": [
    "## API Reference\n",
    "\n",
    "For detailed documentation on `TogetherEmbeddings` features and configuration options, please refer to the [API reference](https://python.langchain.com/api_reference/together/embeddings/langchain_together.embeddings.TogetherEmbeddings.html).\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.4"
   "version": "3.9.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
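
Each of these notebooks prints raw vectors; to make the retrieval behavior concrete, here is a minimal sketch of scoring query–document similarity by cosine similarity with any of the embedding classes above (shown with `TogetherEmbeddings`, assuming `langchain-together` is installed and `TOGETHER_API_KEY` is set):

```python
import math

from langchain_together import TogetherEmbeddings

embeddings = TogetherEmbeddings(model="togethercomputer/m2-bert-80M-8k-retrieval")

docs = [
    "LangChain is the framework for building context-aware reasoning applications",
    "LangGraph is a library for building stateful, multi-actor applications with LLMs",
]


def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity: dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))


doc_vectors = embeddings.embed_documents(docs)
query_vector = embeddings.embed_query("What is LangChain?")

# Rank documents by similarity to the query, highest first
scored = sorted(
    ((cosine(query_vector, vec), doc) for vec, doc in zip(doc_vectors, docs)),
    reverse=True,
)
for score, doc in scored:
    print(f"{score:.3f}  {doc}")
```

This is the same dot-product-over-norms ranking that `InMemoryVectorStore` performs internally when `retriever.invoke(...)` is called in the cells above.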
|
||||
|
||||
@@ -1,277 +1,279 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "raw",
|
||||
"id": "afaf8039",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"---\n",
|
||||
"sidebar_label: ZhipuAI\n",
|
||||
"keywords: [zhipuaiembeddings]\n",
|
||||
"---"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "9a3d6f34",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# ZhipuAIEmbeddings\n",
|
||||
"\n",
|
||||
"This will help you get started with ZhipuAI embedding models using LangChain. For detailed documentation on `ZhipuAIEmbeddings` features and configuration options, please refer to the [API reference](https://bigmodel.cn/dev/api#vector).\n",
|
||||
"\n",
|
||||
"## Overview\n",
|
||||
"### Integration details\n",
|
||||
"\n",
|
||||
"| Provider | Package |\n",
|
||||
"|:--------:|:-------:|\n",
|
||||
"| [ZhipuAI](/docs/integrations/providers/zhipuai/) | [langchain-community](https://python.langchain.com/api_reference/community/embeddings/langchain_community.embeddings.zhipuai.ZhipuAIEmbeddings.html) |\n",
|
||||
"\n",
|
||||
"## Setup\n",
|
||||
"\n",
|
||||
"To access ZhipuAI embedding models you'll need to create a/an ZhipuAI account, get an API key, and install the `zhipuai` integration package.\n",
|
||||
"\n",
|
||||
"### Credentials\n",
|
||||
"\n",
|
||||
"Head to [https://bigmodel.cn/](https://bigmodel.cn/usercenter/apikeys) to sign up to ZhipuAI and generate an API key. Once you've done this set the ZHIPUAI_API_KEY environment variable:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"id": "36521c2a",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import getpass\n",
|
||||
"import os\n",
|
||||
"\n",
|
||||
"if not os.getenv(\"ZHIPUAI_API_KEY\"):\n",
|
||||
" os.environ[\"ZHIPUAI_API_KEY\"] = getpass.getpass(\"Enter your ZhipuAI API key: \")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "c84fb993",
|
||||
"metadata": {},
|
||||
"source": "To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"id": "39a4953b",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
|
||||
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "d9664366",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Installation\n",
|
||||
"\n",
|
||||
"The LangChain ZhipuAI integration lives in the `zhipuai` package:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"id": "64853226",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Note: you may need to restart the kernel to use updated packages.\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"%pip install -qU zhipuai"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "45dd1724",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Instantiation\n",
|
||||
"\n",
|
||||
"Now we can instantiate our model object and generate chat completions:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"id": "9ea7a09b",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain_community.embeddings import ZhipuAIEmbeddings\n",
|
||||
"\n",
|
||||
"embeddings = ZhipuAIEmbeddings(\n",
|
||||
" model=\"embedding-3\",\n",
|
||||
" # With the `embedding-3` class\n",
|
||||
" # of models, you can specify the size\n",
|
||||
" # of the embeddings you want returned.\n",
|
||||
" # dimensions=1024\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "77d271b6",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Indexing and Retrieval\n",
|
||||
"\n",
|
||||
"Embedding models are often used in retrieval-augmented generation (RAG) flows, both as part of indexing data as well as later retrieving it. For more detailed instructions, please see our [RAG tutorials](/docs/tutorials/).\n",
|
||||
"\n",
|
||||
"Below, see how to index and retrieve data using the `embeddings` object we initialized above. In this example, we will index and retrieve a sample document in the `InMemoryVectorStore`."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"id": "d817716b",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'LangChain is the framework for building context-aware reasoning applications'"
|
||||
]
|
||||
},
|
||||
"execution_count": 5,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# Create a vector store with a sample text\n",
|
||||
"from langchain_core.vectorstores import InMemoryVectorStore\n",
|
||||
"\n",
|
||||
"text = \"LangChain is the framework for building context-aware reasoning applications\"\n",
|
||||
"\n",
|
||||
"vectorstore = InMemoryVectorStore.from_texts(\n",
|
||||
" [text],\n",
|
||||
" embedding=embeddings,\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"# Use the vectorstore as a retriever\n",
|
||||
"retriever = vectorstore.as_retriever()\n",
|
||||
"\n",
|
||||
"# Retrieve the most similar text\n",
|
||||
"retrieved_documents = retriever.invoke(\"What is LangChain?\")\n",
|
||||
"\n",
|
||||
"# show the retrieved document's content\n",
|
||||
"retrieved_documents[0].page_content"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "e02b9855",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Direct Usage\n",
|
||||
"\n",
|
||||
"Under the hood, the vectorstore and retriever implementations are calling `embeddings.embed_documents(...)` and `embeddings.embed_query(...)` to create embeddings for the text(s) used in `from_texts` and retrieval `invoke` operations, respectively.\n",
|
||||
"\n",
|
||||
"You can directly call these methods to get embeddings for your own use cases.\n",
|
||||
"\n",
|
||||
"### Embed single texts\n",
|
||||
"\n",
|
||||
"You can embed single texts or documents with `embed_query`:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 9,
|
||||
"id": "0d2befcd",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"[-0.022979736, 0.007785797, 0.04598999, 0.012741089, -0.01689148, 0.008277893, 0.016464233, 0.009246\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"single_vector = embeddings.embed_query(text)\n",
|
||||
"print(str(single_vector)[:100]) # Show the first 100 characters of the vector"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "1b5a7d03",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Embed multiple texts\n",
|
||||
"\n",
|
||||
"You can embed multiple texts with `embed_documents`:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"id": "2f4d6e97",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"[-0.022979736, 0.007785797, 0.04598999, 0.012741089, -0.01689148, 0.008277893, 0.016464233, 0.009246\n",
|
||||
"[-0.02330017, -0.013916016, 0.00022411346, 0.017196655, -0.034240723, 0.011131287, 0.011497498, -0.0\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"text2 = (\n",
|
||||
" \"LangGraph is a library for building stateful, multi-actor applications with LLMs\"\n",
|
||||
")\n",
|
||||
"two_vectors = embeddings.embed_documents([text, text2])\n",
|
||||
"for vector in two_vectors:\n",
|
||||
" print(str(vector)[:100]) # Show the first 100 characters of the vector"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "98785c12",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## API Reference\n",
|
||||
"\n",
|
||||
"For detailed documentation on `ZhipuAIEmbeddings` features and configuration options, please refer to the [API reference](https://python.langchain.com/api_reference/community/embeddings/langchain_community.embeddings.zhipuai.ZhipuAIEmbeddings.html).\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.12.3"
|
||||
}
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "raw",
|
||||
"id": "afaf8039",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"---\n",
|
||||
"sidebar_label: ZhipuAI\n",
|
||||
"keywords: [zhipuaiembeddings]\n",
|
||||
"---"
|
||||
]
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "9a3d6f34",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# ZhipuAIEmbeddings\n",
|
||||
"\n",
|
||||
"This will help you get started with ZhipuAI embedding models using LangChain. For detailed documentation on `ZhipuAIEmbeddings` features and configuration options, please refer to the [API reference](https://bigmodel.cn/dev/api#vector).\n",
|
||||
"\n",
|
||||
"## Overview\n",
|
||||
"### Integration details\n",
|
||||
"\n",
|
||||
"| Provider | Package |\n",
|
||||
"|:--------:|:-------:|\n",
|
||||
"| [ZhipuAI](/docs/integrations/providers/zhipuai/) | [langchain-community](https://python.langchain.com/api_reference/community/embeddings/langchain_community.embeddings.zhipuai.ZhipuAIEmbeddings.html) |\n",
|
||||
"\n",
|
||||
"## Setup\n",
|
||||
"\n",
|
||||
"To access ZhipuAI embedding models you'll need to create a/an ZhipuAI account, get an API key, and install the `zhipuai` integration package.\n",
|
||||
"\n",
|
||||
"### Credentials\n",
|
||||
"\n",
|
||||
"Head to [https://bigmodel.cn/](https://bigmodel.cn/usercenter/apikeys) to sign up to ZhipuAI and generate an API key. Once you've done this set the ZHIPUAI_API_KEY environment variable:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"id": "36521c2a",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import getpass\n",
|
||||
"import os\n",
|
||||
"\n",
|
||||
"if not os.getenv(\"ZHIPUAI_API_KEY\"):\n",
|
||||
" os.environ[\"ZHIPUAI_API_KEY\"] = getpass.getpass(\"Enter your ZhipuAI API key: \")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "c84fb993",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"id": "39a4953b",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
|
||||
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "d9664366",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Installation\n",
|
||||
"\n",
|
||||
"The LangChain ZhipuAI integration lives in the `zhipuai` package:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"id": "64853226",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Note: you may need to restart the kernel to use updated packages.\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"%pip install -qU zhipuai"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "45dd1724",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Instantiation\n",
|
||||
"\n",
|
||||
"Now we can instantiate our model object and generate chat completions:"
|
||||
]
|
||||
},
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "9ea7a09b",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_community.embeddings import ZhipuAIEmbeddings\n",
    "\n",
    "embeddings = ZhipuAIEmbeddings(\n",
    "    model=\"embedding-3\",\n",
    "    # With the `embedding-3` class\n",
    "    # of models, you can specify the size\n",
    "    # of the embeddings you want returned.\n",
    "    # dimensions=1024\n",
    ")"
   ]
  },
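  {
   "cell_type": "markdown",
   "id": "b2c3d4e5",
   "metadata": {},
   "source": [
    "As a quick sketch of the `dimensions` option mentioned in the comment above (this assumes an `embedding-3`-class model, which accepts the parameter; `small_embeddings` is just an illustrative name), you can request shorter vectors and verify the size that comes back:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c3d4e5f6",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative only: request 1024-dimensional vectors via `dimensions`.\n",
    "small_embeddings = ZhipuAIEmbeddings(\n",
    "    model=\"embedding-3\",\n",
    "    dimensions=1024,\n",
    ")\n",
    "\n",
    "# The length of each returned vector should match the requested size.\n",
    "print(len(small_embeddings.embed_query(\"hello world\")))  # expected: 1024"
   ]
  },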
  {
   "cell_type": "markdown",
   "id": "77d271b6",
   "metadata": {},
   "source": [
    "## Indexing and Retrieval\n",
    "\n",
    "Embedding models are often used in retrieval-augmented generation (RAG) flows, both for indexing data and for retrieving it later. For more detailed instructions, please see our [RAG tutorials](/docs/tutorials/rag).\n",
    "\n",
    "Below, see how to index and retrieve data using the `embeddings` object we initialized above. In this example, we will index and retrieve a sample document in the `InMemoryVectorStore`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "d817716b",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'LangChain is the framework for building context-aware reasoning applications'"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Create a vector store with a sample text\n",
    "from langchain_core.vectorstores import InMemoryVectorStore\n",
    "\n",
    "text = \"LangChain is the framework for building context-aware reasoning applications\"\n",
    "\n",
    "vectorstore = InMemoryVectorStore.from_texts(\n",
    "    [text],\n",
    "    embedding=embeddings,\n",
    ")\n",
    "\n",
    "# Use the vectorstore as a retriever\n",
    "retriever = vectorstore.as_retriever()\n",
    "\n",
    "# Retrieve the most similar text\n",
    "retrieved_documents = retriever.invoke(\"What is LangChain?\")\n",
    "\n",
    "# Show the retrieved document's content\n",
    "retrieved_documents[0].page_content"
   ]
  },
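  {
   "cell_type": "markdown",
   "id": "d4e5f6a7",
   "metadata": {},
   "source": [
    "If you also want to see how strong the match is, a minimal sketch using the vector store's `similarity_search_with_score` method (which returns `(document, score)` pairs) looks like this:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e5f6a7b8",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Each result is a (document, score) pair; for the cosine similarity used\n",
    "# by InMemoryVectorStore, higher scores mean closer matches.\n",
    "for doc, score in vectorstore.similarity_search_with_score(\"What is LangChain?\", k=1):\n",
    "    print(f\"{score:.3f}  {doc.page_content}\")"
   ]
  },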
  {
   "cell_type": "markdown",
   "id": "e02b9855",
   "metadata": {},
   "source": [
    "## Direct Usage\n",
    "\n",
    "Under the hood, the vectorstore and retriever implementations are calling `embeddings.embed_documents(...)` and `embeddings.embed_query(...)` to create embeddings for the text(s) used in `from_texts` and retrieval `invoke` operations, respectively.\n",
    "\n",
    "You can directly call these methods to get embeddings for your own use cases.\n",
    "\n",
    "### Embed single texts\n",
    "\n",
    "You can embed a single text or query with `embed_query`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "0d2befcd",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[-0.022979736, 0.007785797, 0.04598999, 0.012741089, -0.01689148, 0.008277893, 0.016464233, 0.009246\n"
     ]
    }
   ],
   "source": [
    "single_vector = embeddings.embed_query(text)\n",
    "print(str(single_vector)[:100])  # Show the first 100 characters of the vector"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1b5a7d03",
   "metadata": {},
   "source": [
    "### Embed multiple texts\n",
    "\n",
    "You can embed multiple texts with `embed_documents`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "2f4d6e97",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[-0.022979736, 0.007785797, 0.04598999, 0.012741089, -0.01689148, 0.008277893, 0.016464233, 0.009246\n",
      "[-0.02330017, -0.013916016, 0.00022411346, 0.017196655, -0.034240723, 0.011131287, 0.011497498, -0.0\n"
     ]
    }
   ],
   "source": [
    "text2 = (\n",
    "    \"LangGraph is a library for building stateful, multi-actor applications with LLMs\"\n",
    ")\n",
    "two_vectors = embeddings.embed_documents([text, text2])\n",
    "for vector in two_vectors:\n",
    "    print(str(vector)[:100])  # Show the first 100 characters of the vector"
   ]
  },
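  {
   "cell_type": "markdown",
   "id": "f6a7b8c9",
   "metadata": {},
   "source": [
    "Because the returned vectors are plain lists of floats, you can compare them yourself. Here is an illustrative sketch (it assumes `numpy` is installed, which this guide does not otherwise require) that scores each document vector against `single_vector` from the `embed_query` call above using cosine similarity:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a7b8c9d0",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "\n",
    "def cosine_similarity(a, b):\n",
    "    \"\"\"Cosine similarity: dot(a, b) / (|a| * |b|).\"\"\"\n",
    "    a, b = np.asarray(a), np.asarray(b)\n",
    "    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))\n",
    "\n",
    "\n",
    "# `single_vector` embeds the same text as `two_vectors[0]`, so expect a\n",
    "# score near 1.0 first, then a lower score for the LangGraph sentence.\n",
    "for vector in two_vectors:\n",
    "    print(round(cosine_similarity(single_vector, vector), 3))"
   ]
  },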
  {
   "cell_type": "markdown",
   "id": "98785c12",
   "metadata": {},
   "source": [
    "## API Reference\n",
    "\n",
    "For detailed documentation on `ZhipuAIEmbeddings` features and configuration options, please refer to the [API reference](https://python.langchain.com/api_reference/community/embeddings/langchain_community.embeddings.zhipuai.ZhipuAIEmbeddings.html).\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}