Compare commits


38 Commits

Author SHA1 Message Date
ccurme
24eb670af9 docs: add admonition to DSPy integration page in v0.2 docs (#31277) 2025-05-19 10:10:04 -04:00
Erick Friis
b9d892ed56 community: release 0.2.19 (#28057) 2024-11-12 17:26:45 +00:00
Eugene Yurtsev
64c317eba0 community: patch graphqa chains (CVE-2024-8309) (#28050)
Patch for CVE-2024-8309 to the v0.2.x branch of langchain

https://nvd.nist.gov/vuln/detail/cve-2024-8309
2024-11-12 11:59:48 -05:00
Bagatur
76e1dc7e53 langchain,community[patch]: release with bumped core (#27854) 2024-11-04 11:08:48 -08:00
Bagatur
9fdeb74d99 core[patch]: Release 0.2.43 (#27808) 2024-10-31 13:26:39 -07:00
Bagatur
7d481f1010 core[patch]: remove prompt img loading (#27807) 2024-10-31 12:33:33 -07:00
Bagatur
33a53970e1 infra: turn off release attestations (#27766) 2024-10-30 15:28:32 -07:00
Bagatur
283cb50ea6 core[patch]: Release 0.2.42 (#27763) 2024-10-30 14:54:48 -07:00
Bagatur
5e3cee6c98 core[patch]: make get_all_basemodel_annotations public (#27762) 2024-10-30 14:43:23 -07:00
Piyush Jain
807314661d Added mapping to fix CI for #langchain-aws:227. (#27114)
Fixes unit tests for `ChatBedrockConverse` in
https://github.com/langchain-ai/langchain-aws/pull/227
2024-10-04 16:45:59 -07:00
Bagatur
6cfd1e846a core[patch]: Release 0.2.41 (#26687) 2024-09-19 15:15:23 -07:00
Piyush Jain
c5eca37262 core[patch]: Fixed bedrock chat model load. (#26643)
Fixes `load` for `ChatBedrock`, which in turn fixes unit test errors for
`standard-tests` in `langchain-aws` when working with the v0.2 branch.

```
FAILED tests/unit_tests/test_standard.py::TestBedrockStandard::test_serdes - ValueError: Trying to deserialize something that cannot be deserialized in current version of langchain-core: ('langchain', 'cha...
FAILED tests/unit_tests/test_standard.py::TestBedrockAsConverseStandard::test_serdes - ValueError: Trying to deserialize something that cannot be deserialized in current version of langchain-core: ('langchain', 'cha...
```
2024-09-19 15:00:45 -07:00
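The failure mode above can be sketched in plain Python: a loads-style deserializer typically checks an allow-list of serializable namespaces and raises if the incoming id isn't registered. The names below (`VALID_NAMESPACES`, `deserialize`) are illustrative only, not the actual langchain-core API.

```python
# Minimal sketch of why `load` fails for a class missing from the
# serializable mapping: deserialization only instantiates classes whose
# namespace appears in an allow-list, mirroring the ValueError above.
VALID_NAMESPACES = {("langchain", "chat_models", "bedrock")}

def deserialize(obj: dict):
    lc_id = tuple(obj.get("id", ()))
    if lc_id not in VALID_NAMESPACES:
        raise ValueError(
            "Trying to deserialize something that cannot be deserialized "
            f"in current version: {lc_id!r}"
        )
    return {"constructed": lc_id}
```

Registering the missing namespace (here, adding it to the allow-list) is what makes the round-trip succeed.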
Erick Friis
70c992bd61 community: poetry lock for cffi dep (#26674) 2024-09-19 17:27:34 +00:00
Erick Friis
51fd70be63 infra: 0.2 release checkout ref for release note (#26604) 2024-09-17 18:48:20 -07:00
Erick Friis
7d2753fbd6 docs: v0.2 canonical (#26514) 2024-09-16 12:58:29 -07:00
Erick Friis
cfd11b2c34 docs: v0.2 deprecation warning (#26509) 2024-09-15 19:33:16 -07:00
Erick Friis
8b7843b343 docs: v0.2 link to latest in dropdown (#26504) 2024-09-15 18:57:35 +00:00
Bagatur
ffefaa6490 docs: point to v2 api ref (#26485) 2024-09-14 03:38:54 +00:00
Erick Friis
8ecef68b10 robocorp: release 0.0.10.post1, point to v0.2 branch (#26473) 2024-09-13 22:57:40 +00:00
Erick Friis
13a6847478 infra: codespell all prs, not by branch [v0.2] (#26475) 2024-09-13 15:56:07 -07:00
Erick Friis
064b891454 infra: v0.2 docs from v0.2 branch (#26449) 2024-09-13 14:03:57 -07:00
Bagatur
d9813bdbbc openai[patch]: Release 0.1.25 (#26439) 2024-09-13 12:00:01 -07:00
liuhetian
7fc9e99e21 openai[patch]: get output_type when using with_structured_output (#26307)
- This allows pydantic to correctly resolve annotations, which is necessary
when using OpenAI's new `json_schema` parameter.

Resolves issue: #26250

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-09-13 11:42:01 -07:00
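A minimal illustration of the underlying mechanism, assuming nothing about the langchain internals: string ("forward") annotations have to be evaluated into real types before a JSON schema can be derived from them, which is what `typing.get_type_hints` does. The class and helper names here are hypothetical.

```python
from typing import get_type_hints

class Answer:
    # Annotations written as strings, as they would be under
    # `from __future__ import annotations`.
    value: "int"
    note: "str"

def resolved_annotations(cls) -> dict:
    # Evaluate the string annotations into actual type objects so a
    # schema builder can inspect them.
    return get_type_hints(cls)
```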
Bagatur
0f2b32ffa9 core[patch]: Release 0.2.40 (#26435) 2024-09-13 09:57:09 -07:00
Bagatur
e32adad17a community[patch]: Release 0.2.17 (#26432) 2024-09-13 09:56:39 -07:00
langchain-infra
8a02fd9c01 core: add additional import mappings to loads (#26406)
Support additional import mappings. This allows users to override old
mappings or add new imports to the `loads` function.

2024-09-13 09:39:58 -07:00
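A rough sketch of the idea, with illustrative names only (`DEFAULT_MAPPINGS` and `resolve_class` are not the real langchain-core API): user-supplied mappings are merged over the built-in table, so they can both add new entries and override existing ones.

```python
# Built-in table mapping serialized ids to their current import locations.
DEFAULT_MAPPINGS = {
    ("langchain", "schema", "AIMessage"): ("langchain_core", "messages", "AIMessage"),
}

def resolve_class(lc_id, additional_import_mappings=None):
    # User-supplied mappings take precedence over the defaults.
    mappings = {**DEFAULT_MAPPINGS, **(additional_import_mappings or {})}
    try:
        return mappings[tuple(lc_id)]
    except KeyError:
        raise ValueError(f"Cannot deserialize {lc_id!r} with current mappings")
```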
Erick Friis
1d98937e8d partners/openai: release 0.1.24 (#26417) 2024-09-12 21:54:13 -07:00
Harrison Chase
28ad244e77 community, openai: support nested dicts (#26414)
needed for thinking tokens

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-09-12 21:47:47 -07:00
Erick Friis
c0dd293f10 partners/groq: release 0.1.10 (#26393) 2024-09-12 17:41:11 +00:00
Erick Friis
54c85087e2 groq: add back streaming tool calls (#26391)
The API no longer throws an error.

https://console.groq.com/docs/tool-use#streaming
2024-09-12 10:29:45 -07:00
jessicaou
396c0aee4d docs: Adding LC Academy links (#26164)
Co-authored-by: Jess Ou <jessou@jesss-mbp.local.meter>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-09-11 23:37:17 +00:00
Bagatur
feb351737c core[patch]: fix empty OpenAI tools when strict=True (#26287)
Fix #26232

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-09-11 16:06:03 -07:00
William FH
d87feb1b04 [Docs] Correct the admonition explaining min langchain-anthropic version in doc (#26359)
0.1.15 instead of just 0.1.5
2024-09-11 23:03:42 +00:00
ccurme
398718e1cb core[patch]: fix regression in convert_to_openai_tool with instances of Tool (#26327)
```python
from langchain_core.tools import Tool
from langchain_core.utils.function_calling import convert_to_openai_tool

def my_function(x: int) -> int:
    return x + 2

tool = Tool(
    name="tool_name",
    func=my_function,
    description="test description",
)
convert_to_openai_tool(tool)
```

Current:
```
{'type': 'function',
 'function': {'name': 'tool_name',
  'description': 'test description',
  'parameters': {'type': 'object',
   'properties': {'args': {'type': 'array', 'items': {}},
    'config': {'type': 'object',
     'properties': {'tags': {'type': 'array', 'items': {'type': 'string'}},
      'metadata': {'type': 'object'},
      'callbacks': {'anyOf': [{'type': 'array', 'items': {}}, {}]},
      'run_name': {'type': 'string'},
      'max_concurrency': {'type': 'integer'},
      'recursion_limit': {'type': 'integer'},
      'configurable': {'type': 'object'},
      'run_id': {'type': 'string', 'format': 'uuid'}}},
    'kwargs': {'type': 'object'}},
   'required': ['config']}}}
```

Here:
```
{'type': 'function',
 'function': {'name': 'tool_name',
  'description': 'test description',
  'parameters': {'properties': {'__arg1': {'title': '__arg1',
     'type': 'string'}},
   'required': ['__arg1'],
   'type': 'object'}}}
```
2024-09-11 15:51:10 -04:00
이규민
7feae62ad7 core[patch]: Support non ASCII characters in tool output if user doesn't output string (#26319)
Core: add support for non-English (non-ASCII) characters in tool output.

Target issue is #26315; the same issue appears in langgraph:
https://github.com/langchain-ai/langgraph/issues/1504
2024-09-11 15:21:00 +00:00
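The fix comes down to how non-string tool output gets serialized: `json.dumps` escapes non-ASCII characters to `\uXXXX` sequences by default, while `ensure_ascii=False` keeps them readable. A minimal sketch (the helper name is illustrative, not the actual implementation):

```python
import json

def serialize_tool_output(content) -> str:
    # Strings pass through unchanged; anything else is JSON-serialized.
    if isinstance(content, str):
        return content
    # ensure_ascii=False preserves non-English characters (e.g. Korean)
    # instead of escaping them to \uXXXX sequences.
    return json.dumps(content, ensure_ascii=False)
```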
William FH
b993172702 Keyword-like runnable config (#26295) 2024-09-11 07:44:47 -07:00
Bagatur
17659ca2cd core[patch]: Release 0.2.39 (#26279) 2024-09-10 20:11:27 +00:00
Nuno Campos
212c688ee0 core[minor]: Remove serialized manifest from tracing requests for non-llm runs (#26270)
- This takes a long time to compute, isn't used, and is currently computed
on every invocation of every chain/retriever/etc.
2024-09-10 12:58:24 -07:00
1200 changed files with 21909 additions and 40219 deletions


@@ -16,10 +16,6 @@ LANGCHAIN_DIRS = [
"libs/experimental",
]
# for 0.3rc, we are ignoring core dependents
# in order to be able to get CI to pass for individual PRs.
IGNORE_CORE_DEPENDENTS = True
# ignored partners are removed from dependents
# but still run if directly edited
IGNORED_PARTNERS = [
@@ -106,9 +102,9 @@ def _get_configs_for_single_dir(job: str, dir_: str) -> List[Dict[str, str]]:
if dir_ == "libs/core":
return [
{"working-directory": dir_, "python-version": f"3.{v}"}
for v in range(9, 13)
for v in range(8, 13)
]
min_python = "3.9"
min_python = "3.8"
max_python = "3.12"
# custom logic for specific directories
@@ -188,9 +184,6 @@ if __name__ == "__main__":
# for extended testing
found = False
for dir_ in LANGCHAIN_DIRS:
if dir_ == "libs/core" and IGNORE_CORE_DEPENDENTS:
dirs_to_run["extended-test"].add(dir_)
continue
if file.startswith(dir_):
found = True
if found:


@@ -11,7 +11,7 @@ if __name__ == "__main__":
# see if we're releasing an rc
version = toml_data["tool"]["poetry"]["version"]
releasing_rc = "rc" in version or "dev" in version
releasing_rc = "rc" in version
# if not, iterate through dependencies and make sure none allow prereleases
if not releasing_rc:


@@ -15,7 +15,6 @@ MIN_VERSION_LIBS = [
"langchain",
"langchain-text-splitters",
"SQLAlchemy",
"pydantic",
]
SKIP_IF_PULL_REQUEST = ["langchain-core"]

.github/workflows/_dependencies.yml (new file, 114 additions)

@@ -0,0 +1,114 @@
name: dependencies
on:
workflow_call:
inputs:
working-directory:
required: true
type: string
description: "From which folder this pipeline executes"
langchain-location:
required: false
type: string
description: "Relative path to the langchain library folder"
python-version:
required: true
type: string
description: "Python version to use"
env:
POETRY_VERSION: "1.7.1"
jobs:
build:
defaults:
run:
working-directory: ${{ inputs.working-directory }}
runs-on: ubuntu-latest
name: dependency checks ${{ inputs.python-version }}
steps:
- uses: actions/checkout@v4
- name: Set up Python ${{ inputs.python-version }} + Poetry ${{ env.POETRY_VERSION }}
uses: "./.github/actions/poetry_setup"
with:
python-version: ${{ inputs.python-version }}
poetry-version: ${{ env.POETRY_VERSION }}
working-directory: ${{ inputs.working-directory }}
cache-key: pydantic-cross-compat
- name: Install dependencies
shell: bash
run: poetry install
- name: Check imports with base dependencies
shell: bash
run: poetry run make check_imports
- name: Install test dependencies
shell: bash
run: poetry install --with test
- name: Install langchain editable
working-directory: ${{ inputs.working-directory }}
if: ${{ inputs.langchain-location }}
env:
LANGCHAIN_LOCATION: ${{ inputs.langchain-location }}
run: |
poetry run pip install -e "$LANGCHAIN_LOCATION"
- name: Install the opposite major version of pydantic
# If normal tests use pydantic v1, here we'll use v2, and vice versa.
shell: bash
# airbyte currently doesn't support pydantic v2
if: ${{ !startsWith(inputs.working-directory, 'libs/partners/airbyte') }}
run: |
# Determine the major part of pydantic version
REGULAR_VERSION=$(poetry run python -c "import pydantic; print(pydantic.__version__)" | cut -d. -f1)
if [[ "$REGULAR_VERSION" == "1" ]]; then
PYDANTIC_DEP=">=2.1,<3"
TEST_WITH_VERSION="2"
elif [[ "$REGULAR_VERSION" == "2" ]]; then
PYDANTIC_DEP="<2"
TEST_WITH_VERSION="1"
else
echo "Unexpected pydantic major version '$REGULAR_VERSION', cannot determine which version to use for cross-compatibility test."
exit 1
fi
# Install via `pip` instead of `poetry add` to avoid changing lockfile,
# which would prevent caching from working: the cache would get saved
# to a different key than where it gets loaded from.
poetry run pip install "pydantic${PYDANTIC_DEP}"
# Ensure that the correct pydantic is installed now.
echo "Checking pydantic version... Expecting ${TEST_WITH_VERSION}"
# Determine the major part of pydantic version
CURRENT_VERSION=$(poetry run python -c "import pydantic; print(pydantic.__version__)" | cut -d. -f1)
# Check that the major part of pydantic version is as expected, if not
# raise an error
if [[ "$CURRENT_VERSION" != "$TEST_WITH_VERSION" ]]; then
echo "Error: expected pydantic version ${TEST_WITH_VERSION} to have been installed, but found: ${CURRENT_VERSION}"
exit 1
fi
echo "Found pydantic version ${CURRENT_VERSION}, as expected"
- name: Run pydantic compatibility tests
# airbyte currently doesn't support pydantic v2
if: ${{ !startsWith(inputs.working-directory, 'libs/partners/airbyte') }}
shell: bash
run: make test
- name: Ensure the tests did not create any additional files
shell: bash
run: |
set -eu
STATUS="$(git status)"
echo "$STATUS"
# grep will exit non-zero if the target message isn't found,
# and `set -e` above will cause the step to fail.
echo "$STATUS" | grep 'nothing to commit, working tree clean'


@@ -85,7 +85,7 @@ jobs:
path: langchain
sparse-checkout: | # this only grabs files for relevant dir
${{ inputs.working-directory }}
ref: master # this scopes to just master branch
ref: ${{ github.ref }} # this scopes to just ref'd branch
fetch-depth: 0 # this fetches entire commit history
- name: Check Tags
id: check-tags
@@ -336,6 +336,8 @@ jobs:
packages-dir: ${{ inputs.working-directory }}/dist/
verbose: true
print-hash: true
# Temp workaround since attestations are on by default as of gh-action-pypi-publish v1.11.0
attestations: false
mark-release:
needs:


@@ -98,3 +98,5 @@ jobs:
# This is *only for CI use* and is *extremely dangerous* otherwise!
# https://github.com/pypa/gh-action-pypi-publish#tolerating-release-package-file-duplicates
skip-existing: true
# Temp workaround since attestations are on by default as of gh-action-pypi-publish v1.11.0
attestations: false


@@ -46,7 +46,6 @@ jobs:
strategy:
matrix:
job-configs: ${{ fromJson(needs.build.outputs.lint) }}
fail-fast: false
uses: ./.github/workflows/_lint.yml
with:
working-directory: ${{ matrix.job-configs.working-directory }}
@@ -60,7 +59,6 @@ jobs:
strategy:
matrix:
job-configs: ${{ fromJson(needs.build.outputs.test) }}
fail-fast: false
uses: ./.github/workflows/_test.yml
with:
working-directory: ${{ matrix.job-configs.working-directory }}
@@ -73,7 +71,6 @@ jobs:
strategy:
matrix:
job-configs: ${{ fromJson(needs.build.outputs.test-doc-imports) }}
fail-fast: false
uses: ./.github/workflows/_test_doc_imports.yml
secrets: inherit
with:
@@ -86,13 +83,25 @@ jobs:
strategy:
matrix:
job-configs: ${{ fromJson(needs.build.outputs.compile-integration-tests) }}
fail-fast: false
uses: ./.github/workflows/_compile_integration_test.yml
with:
working-directory: ${{ matrix.job-configs.working-directory }}
python-version: ${{ matrix.job-configs.python-version }}
secrets: inherit
dependencies:
name: cd ${{ matrix.job-configs.working-directory }}
needs: [ build ]
if: ${{ needs.build.outputs.dependencies != '[]' }}
strategy:
matrix:
job-configs: ${{ fromJson(needs.build.outputs.dependencies) }}
uses: ./.github/workflows/_dependencies.yml
with:
working-directory: ${{ matrix.job-configs.working-directory }}
python-version: ${{ matrix.job-configs.python-version }}
secrets: inherit
extended-tests:
name: "cd ${{ matrix.job-configs.working-directory }} / make extended_tests #${{ matrix.job-configs.python-version }}"
needs: [ build ]
@@ -101,7 +110,6 @@ jobs:
matrix:
# note different variable for extended test dirs
job-configs: ${{ fromJson(needs.build.outputs.extended-tests) }}
fail-fast: false
runs-on: ubuntu-latest
defaults:
run:
@@ -141,7 +149,7 @@ jobs:
echo "$STATUS" | grep 'nothing to commit, working tree clean'
ci_success:
name: "CI Success"
needs: [build, lint, test, compile-integration-tests, extended-tests, test-doc-imports]
needs: [build, lint, test, compile-integration-tests, dependencies, extended-tests, test-doc-imports]
if: |
always()
runs-on: ubuntu-latest


@@ -3,7 +3,7 @@ name: CI / cd . / make spell_check
on:
push:
branches: [master, v0.1]
branches: [master, v0.1, v0.2]
pull_request:
permissions:


@@ -17,7 +17,7 @@ jobs:
fail-fast: false
matrix:
python-version:
- "3.9"
- "3.8"
- "3.11"
working-directory:
- "libs/partners/openai"


@@ -36,6 +36,7 @@ api_docs_build:
API_PKG ?= text-splitters
api_docs_quick_preview:
poetry run pip install "pydantic<2"
poetry run python docs/api_reference/create_api_rst.py $(API_PKG)
cd docs/api_reference && poetry run make html
poetry run python docs/api_reference/scripts/custom_formatter.py docs/api_reference/_build/html/


@@ -49,7 +49,7 @@ For these applications, LangChain simplifies the entire application lifecycle:
- **`langchain-community`**: Third party integrations.
- Some integrations have been further split into **partner packages** that only rely on **`langchain-core`**. Examples include **`langchain_openai`** and **`langchain_anthropic`**.
- **`langchain`**: Chains, agents, and retrieval strategies that make up an application's cognitive architecture.
- **[`LangGraph`](https://langchain-ai.github.io/langgraph/)**: A library for building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph. Integrates smoothly with LangChain, but can be used without it.
- **[`LangGraph`](https://langchain-ai.github.io/langgraph/)**: A library for building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph. Integrates smoothly with LangChain, but can be used without it. To learn more about LangGraph, check out our first LangChain Academy course, *Introduction to LangGraph*, available [here](https://academy.langchain.com/courses/intro-to-langgraph).
### Productionization:


@@ -90,8 +90,7 @@
"import os\n",
"from getpass import getpass\n",
"\n",
"if \"OPENAI_API_KEY\" not in os.environ:\n",
" os.environ[\"OPENAI_API_KEY\"] = getpass()\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass()\n",
"# Please manually enter OpenAI Key"
]
},


@@ -33,8 +33,8 @@ install-py-deps:
python3 -m venv .venv
$(PYTHON) -m pip install --upgrade pip
$(PYTHON) -m pip install --upgrade uv
$(PYTHON) -m uv pip install --pre -r vercel_requirements.txt
$(PYTHON) -m uv pip install --pre --editable $(PARTNER_DEPS_LIST)
$(PYTHON) -m uv pip install -r vercel_requirements.txt
$(PYTHON) -m uv pip install --editable $(PARTNER_DEPS_LIST)
generate-files:
mkdir -p $(INTERMEDIATE_DIR)
@@ -82,7 +82,7 @@ vercel-build: install-vercel-deps build generate-references
mv $(OUTPUT_NEW_DOCS_DIR) docs
rm -rf build
mkdir static/api_reference
git clone --depth=1 https://github.com/baskaryan/langchain-api-docs-build.git
git clone --depth=1 -b v0.2 https://github.com/baskaryan/langchain-api-docs-build.git
mv langchain-api-docs-build/api_reference_build/html/* static/api_reference/
rm -rf langchain-api-docs-build
NODE_OPTIONS="--max-old-space-size=5000" yarn run docusaurus build


@@ -1,5 +1,5 @@
autodoc_pydantic>=2,<3
sphinx>=8,<9
autodoc_pydantic>=1,<2
sphinx<=7
myst-parser>=3
sphinx-autobuild>=2024
pydata-sphinx-theme>=0.15
@@ -8,4 +8,4 @@ myst-nb>=1.1.1
pyyaml
sphinx-design
sphinx-copybutton
beautifulsoup4
beautifulsoup4


@@ -17,10 +17,7 @@ def process_toc_h3_elements(html_content: str) -> str:
# Process each element
for element in toc_h3_elements:
try:
element = element.a.code.span
except Exception:
continue
element = element.a.code.span
# Get the text content of the element
content = element.get_text()


@@ -15,7 +15,7 @@
:member-order: groupwise
:show-inheritance: True
:special-members: __call__
:exclude-members: construct, copy, dict, from_orm, parse_file, parse_obj, parse_raw, schema, schema_json, update_forward_refs, validate, json, is_lc_serializable, to_json, to_json_not_implemented, lc_secrets, lc_attributes, lc_id, get_lc_namespace, model_construct, model_copy, model_dump, model_dump_json, model_parametrized_name, model_post_init, model_rebuild, model_validate, model_validate_json, model_validate_strings, model_extra, model_fields_set, model_json_schema
:exclude-members: construct, copy, dict, from_orm, parse_file, parse_obj, parse_raw, schema, schema_json, update_forward_refs, validate, json, is_lc_serializable, to_json, to_json_not_implemented, lc_secrets, lc_attributes, lc_id, get_lc_namespace
{% block attributes %}


@@ -15,7 +15,7 @@
:member-order: groupwise
:show-inheritance: True
:special-members: __call__
:exclude-members: construct, copy, dict, from_orm, parse_file, parse_obj, parse_raw, schema, schema_json, update_forward_refs, validate, json, is_lc_serializable, to_json_not_implemented, lc_secrets, lc_attributes, lc_id, get_lc_namespace, astream_log, transform, atransform, get_output_schema, get_prompts, config_schema, map, pick, pipe, with_listeners, with_alisteners, with_config, with_fallbacks, with_types, with_retry, InputType, OutputType, config_specs, output_schema, get_input_schema, get_graph, get_name, input_schema, name, bind, assign, as_tool, get_config_jsonschema, get_input_jsonschema, get_output_jsonschema, model_construct, model_copy, model_dump, model_dump_json, model_parametrized_name, model_post_init, model_rebuild, model_validate, model_validate_json, model_validate_strings, to_json, model_extra, model_fields_set, model_json_schema
:exclude-members: construct, copy, dict, from_orm, parse_file, parse_obj, parse_raw, schema, schema_json, update_forward_refs, validate, json, is_lc_serializable, to_json_not_implemented, lc_secrets, lc_attributes, lc_id, get_lc_namespace, astream_log, transform, atransform, get_output_schema, get_prompts, config_schema, map, pick, pipe, with_listeners, with_alisteners, with_config, with_fallbacks, with_types, with_retry, InputType, OutputType, config_specs, output_schema, get_input_schema, get_graph, get_name, input_schema, name, bind, assign, as_tool
.. NOTE:: {{objname}} implements the standard :py:class:`Runnable Interface <langchain_core.runnables.base.Runnable>`. 🏃


@@ -945,7 +945,7 @@ Here's an example:
```python
from typing import Optional
from pydantic import BaseModel, Field
from langchain_core.pydantic_v1 import BaseModel, Field
class Joke(BaseModel):
@@ -1062,7 +1062,7 @@ a `tool_calls` field containing `args` that match the desired shape.
There are several acceptable formats you can use to bind tools to a model in LangChain. Here's one example:
```python
from pydantic import BaseModel, Field
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_openai import ChatOpenAI
class ResponseFormatter(BaseModel):


@@ -18,23 +18,8 @@
"cell_type": "code",
"execution_count": 1,
"id": "994d6c74",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:08:00.190093Z",
"iopub.status.busy": "2024-09-10T20:08:00.189665Z",
"iopub.status.idle": "2024-09-10T20:08:05.438015Z",
"shell.execute_reply": "2024-09-10T20:08:05.437685Z"
}
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"USER_AGENT environment variable not set, consider setting it to identify your requests.\n"
]
}
],
"metadata": {},
"outputs": [],
"source": [
"# Build a sample vectorDB\n",
"from langchain_chroma import Chroma\n",
@@ -69,14 +54,7 @@
"cell_type": "code",
"execution_count": 2,
"id": "edbca101",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:08:05.439930Z",
"iopub.status.busy": "2024-09-10T20:08:05.439810Z",
"iopub.status.idle": "2024-09-10T20:08:05.553766Z",
"shell.execute_reply": "2024-09-10T20:08:05.553520Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"from langchain.retrievers.multi_query import MultiQueryRetriever\n",
@@ -93,14 +71,7 @@
"cell_type": "code",
"execution_count": 3,
"id": "9e6d3b69",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:08:05.555359Z",
"iopub.status.busy": "2024-09-10T20:08:05.555262Z",
"iopub.status.idle": "2024-09-10T20:08:05.557046Z",
"shell.execute_reply": "2024-09-10T20:08:05.556825Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"# Set logging for the queries\n",
@@ -114,20 +85,13 @@
"cell_type": "code",
"execution_count": 4,
"id": "bc93dc2b-9407-48b0-9f9a-338247e7eb69",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:08:05.558176Z",
"iopub.status.busy": "2024-09-10T20:08:05.558100Z",
"iopub.status.idle": "2024-09-10T20:08:07.250342Z",
"shell.execute_reply": "2024-09-10T20:08:07.249711Z"
}
},
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:langchain.retrievers.multi_query:Generated queries: ['1. How can Task Decomposition be achieved through different methods?', '2. What strategies are commonly used for Task Decomposition?', '3. What are the various ways to break down tasks in Task Decomposition?']\n"
"INFO:langchain.retrievers.multi_query:Generated queries: ['1. How can Task Decomposition be achieved through different methods?', '2. What strategies are commonly used for Task Decomposition?', '3. What are the various techniques for breaking down tasks in Task Decomposition?']\n"
]
},
{
@@ -173,21 +137,14 @@
"cell_type": "code",
"execution_count": 5,
"id": "d9afb0ca",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:08:07.253875Z",
"iopub.status.busy": "2024-09-10T20:08:07.253600Z",
"iopub.status.idle": "2024-09-10T20:08:07.277848Z",
"shell.execute_reply": "2024-09-10T20:08:07.277487Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"from typing import List\n",
"\n",
"from langchain_core.output_parsers import BaseOutputParser\n",
"from langchain_core.prompts import PromptTemplate\n",
"from pydantic import BaseModel, Field\n",
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"# Output parser will split the LLM result into a list of queries\n",
@@ -223,20 +180,13 @@
"cell_type": "code",
"execution_count": 6,
"id": "59c75c56-dbd7-4887-b9ba-0b5b21069f51",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:08:07.280001Z",
"iopub.status.busy": "2024-09-10T20:08:07.279861Z",
"iopub.status.idle": "2024-09-10T20:08:09.579525Z",
"shell.execute_reply": "2024-09-10T20:08:09.578837Z"
}
},
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:langchain.retrievers.multi_query:Generated queries: ['1. Can you provide insights on regression from the course material?', '2. How is regression discussed in the course content?', '3. What information does the course offer regarding regression?', '4. In what way is regression covered in the course?', \"5. What are the course's teachings on regression?\"]\n"
"INFO:langchain.retrievers.multi_query:Generated queries: ['1. Can you provide insights on regression from the course material?', '2. How is regression discussed in the course content?', '3. What information does the course offer about regression?', '4. In what way is regression covered in the course?', '5. What are the teachings of the course regarding regression?']\n"
]
},
{
@@ -278,7 +228,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.10.4"
}
},
"nbformat": 4,


@@ -45,8 +45,7 @@
"import os\n",
"from getpass import getpass\n",
"\n",
"if \"OPENAI_API_KEY\" not in os.environ:\n",
" os.environ[\"OPENAI_API_KEY\"] = getpass()"
"os.environ[\"OPENAI_API_KEY\"] = getpass()"
]
},
{


@@ -49,8 +49,7 @@
"import os\n",
"from getpass import getpass\n",
"\n",
"if \"OPENAI_API_KEY\" not in os.environ:\n",
" os.environ[\"OPENAI_API_KEY\"] = getpass()"
"os.environ[\"OPENAI_API_KEY\"] = getpass()"
]
},
{
@@ -184,7 +183,7 @@
{
"data": {
"text/plain": [
"AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_z0OU2CytqENVrRTI6T8DkI3u', 'function': {'arguments': '{\"location\": \"San Francisco, CA\", \"unit\": \"celsius\"}', 'name': 'get_current_weather'}, 'type': 'function'}, {'id': 'call_ft96IJBh0cMKkQWrZjNg4bsw', 'function': {'arguments': '{\"location\": \"New York, NY\", \"unit\": \"celsius\"}', 'name': 'get_current_weather'}, 'type': 'function'}, {'id': 'call_tfbtGgCLmuBuWgZLvpPwvUMH', 'function': {'arguments': '{\"location\": \"Los Angeles, CA\", \"unit\": \"celsius\"}', 'name': 'get_current_weather'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 84, 'prompt_tokens': 85, 'total_tokens': 169}, 'model_name': 'gpt-4o-mini', 'system_fingerprint': 'fp_77a673219d', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-d57ad5fa-b52a-4822-bc3e-74f838697e18-0', tool_calls=[{'name': 'get_current_weather', 'args': {'location': 'San Francisco, CA', 'unit': 'celsius'}, 'id': 'call_z0OU2CytqENVrRTI6T8DkI3u'}, {'name': 'get_current_weather', 'args': {'location': 'New York, NY', 'unit': 'celsius'}, 'id': 'call_ft96IJBh0cMKkQWrZjNg4bsw'}, {'name': 'get_current_weather', 'args': {'location': 'Los Angeles, CA', 'unit': 'celsius'}, 'id': 'call_tfbtGgCLmuBuWgZLvpPwvUMH'}])"
"AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_z0OU2CytqENVrRTI6T8DkI3u', 'function': {'arguments': '{\"location\": \"San Francisco, CA\", \"unit\": \"celsius\"}', 'name': 'get_current_weather'}, 'type': 'function'}, {'id': 'call_ft96IJBh0cMKkQWrZjNg4bsw', 'function': {'arguments': '{\"location\": \"New York, NY\", \"unit\": \"celsius\"}', 'name': 'get_current_weather'}, 'type': 'function'}, {'id': 'call_tfbtGgCLmuBuWgZLvpPwvUMH', 'function': {'arguments': '{\"location\": \"Los Angeles, CA\", \"unit\": \"celsius\"}', 'name': 'get_current_weather'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 84, 'prompt_tokens': 85, 'total_tokens': 169}, 'model_name': 'gpt-3.5-turbo-1106', 'system_fingerprint': 'fp_77a673219d', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-d57ad5fa-b52a-4822-bc3e-74f838697e18-0', tool_calls=[{'name': 'get_current_weather', 'args': {'location': 'San Francisco, CA', 'unit': 'celsius'}, 'id': 'call_z0OU2CytqENVrRTI6T8DkI3u'}, {'name': 'get_current_weather', 'args': {'location': 'New York, NY', 'unit': 'celsius'}, 'id': 'call_ft96IJBh0cMKkQWrZjNg4bsw'}, {'name': 'get_current_weather', 'args': {'location': 'Los Angeles, CA', 'unit': 'celsius'}, 'id': 'call_tfbtGgCLmuBuWgZLvpPwvUMH'}])"
]
},
"execution_count": 5,
@@ -193,7 +192,7 @@
}
],
"source": [
"model = ChatOpenAI(model=\"gpt-4o-mini\").bind(tools=tools)\n",
"model = ChatOpenAI(model=\"gpt-3.5-turbo-1106\").bind(tools=tools)\n",
"model.invoke(\"What's the weather in SF, NYC and LA?\")"
]
},


@@ -50,8 +50,7 @@
"\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"if \"OPENAI_API_KEY\" not in os.environ:\n",
" os.environ[\"OPENAI_API_KEY\"] = getpass()\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass()\n",
"\n",
"llm = ChatOpenAI()"
]


@@ -26,32 +26,10 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"id": "165b0de6-9ae3-4e3d-aa98-4fc8a97c4a06",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:22:32.858670Z",
"iopub.status.busy": "2024-09-10T20:22:32.858278Z",
"iopub.status.idle": "2024-09-10T20:22:33.009452Z",
"shell.execute_reply": "2024-09-10T20:22:33.007022Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"zsh:1: 0.2.8 not found\r\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain>=0.2.8 langchain-openai langchain-anthropic langchain-google-vertexai"
]
@@ -66,48 +44,19 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 5,
"id": "79e14913-803c-4382-9009-5c6af3d75d35",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:22:33.015729Z",
"iopub.status.busy": "2024-09-10T20:22:33.015241Z",
"iopub.status.idle": "2024-09-10T20:22:39.391716Z",
"shell.execute_reply": "2024-09-10T20:22:39.390438Z"
}
},
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/var/folders/4j/2rz3865x6qg07tx43146py8h0000gn/T/ipykernel_95293/571506279.py:4: LangChainBetaWarning: The function `init_chat_model` is in beta. It is actively being worked on, so the API may change.\n",
" gpt_4o = init_chat_model(\"gpt-4o\", model_provider=\"openai\", temperature=0)\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"GPT-4o: I'm an AI created by OpenAI, and I don't have a personal name. How can I assist you today?\n",
"\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Claude Opus: My name is Claude. It's nice to meet you!\n",
"\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Gemini 1.5: I am a large language model, trained by Google. \n",
"GPT-4o: I'm an AI created by OpenAI, and I don't have a personal name. You can call me Assistant! How can I help you today?\n",
"\n",
"I don't have a name like a person does. You can call me Bard if you like! 😊 \n",
"Claude Opus: My name is Claude. It's nice to meet you!\n",
"\n",
"Gemini 1.5: I am a large language model, trained by Google. I do not have a name. \n",
"\n",
"\n"
]
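The cells above use `init_chat_model` to route a bare model name ("gpt-4o", "claude-3-5-sonnet", "gemini-1.5-pro") to the right provider package. As an illustration of the dispatch idea only — this prefix table is an assumption, not LangChain's actual inference logic:

```python
def infer_provider(model: str) -> str:
    # Hypothetical prefix table; the real init_chat_model covers many
    # more providers and falls back to an explicit model_provider arg.
    prefixes = {
        "gpt-": "openai",
        "claude-": "anthropic",
        "gemini-": "google_vertexai",
    }
    for prefix, provider in prefixes.items():
        if model.startswith(prefix):
            return provider
    raise ValueError(f"could not infer provider for {model!r}")
```

With a table like this, `infer_provider("gpt-4o")` resolves to `"openai"` and `infer_provider("claude-3-5-sonnet-20240620")` to `"anthropic"`.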
@@ -145,16 +94,9 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 4,
"id": "0378ccc6-95bc-4d50-be50-fccc193f0a71",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:22:39.396908Z",
"iopub.status.busy": "2024-09-10T20:22:39.396563Z",
"iopub.status.idle": "2024-09-10T20:22:39.444959Z",
"shell.execute_reply": "2024-09-10T20:22:39.444646Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"gpt_4o = init_chat_model(\"gpt-4o\", temperature=0)\n",
@@ -174,24 +116,17 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 5,
"id": "6c037f27-12d7-4e83-811e-4245c0e3ba58",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:22:39.446901Z",
"iopub.status.busy": "2024-09-10T20:22:39.446773Z",
"iopub.status.idle": "2024-09-10T20:22:40.301906Z",
"shell.execute_reply": "2024-09-10T20:22:40.300918Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"I'm an AI created by OpenAI, and I don't have a personal name. How can I assist you today?\", additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 23, 'prompt_tokens': 11, 'total_tokens': 34}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_25624ae3a5', 'finish_reason': 'stop', 'logprobs': None}, id='run-b41df187-4627-490d-af3c-1c96282d3eb0-0', usage_metadata={'input_tokens': 11, 'output_tokens': 23, 'total_tokens': 34})"
"AIMessage(content=\"I'm an AI language model created by OpenAI, and I don't have a personal name. You can call me Assistant or any other name you prefer! How can I assist you today?\", response_metadata={'token_usage': {'completion_tokens': 37, 'prompt_tokens': 11, 'total_tokens': 48}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_d576307f90', 'finish_reason': 'stop', 'logprobs': None}, id='run-5428ab5c-b5c0-46de-9946-5d4ca40dbdc8-0', usage_metadata={'input_tokens': 11, 'output_tokens': 37, 'total_tokens': 48})"
]
},
"execution_count": 4,
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
@@ -206,24 +141,17 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 6,
"id": "321e3036-abd2-4e1f-bcc6-606efd036954",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:22:40.316030Z",
"iopub.status.busy": "2024-09-10T20:22:40.315628Z",
"iopub.status.idle": "2024-09-10T20:22:41.199134Z",
"shell.execute_reply": "2024-09-10T20:22:41.198173Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"My name is Claude. It's nice to meet you!\", additional_kwargs={}, response_metadata={'id': 'msg_01Fx9P74A7syoFkwE73CdMMY', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 11, 'output_tokens': 15}}, id='run-a0fd2bbd-3b7e-46bf-8d69-a48c7e60b03c-0', usage_metadata={'input_tokens': 11, 'output_tokens': 15, 'total_tokens': 26})"
"AIMessage(content=\"My name is Claude. It's nice to meet you!\", response_metadata={'id': 'msg_012XvotUJ3kGLXJUWKBVxJUi', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 11, 'output_tokens': 15}}, id='run-1ad1eefe-f1c6-4244-8bc6-90e2cb7ee554-0', usage_metadata={'input_tokens': 11, 'output_tokens': 15, 'total_tokens': 26})"
]
},
"execution_count": 5,
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
@@ -246,24 +174,17 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 9,
"id": "814a2289-d0db-401e-b555-d5116112b413",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:22:41.203346Z",
"iopub.status.busy": "2024-09-10T20:22:41.203004Z",
"iopub.status.idle": "2024-09-10T20:22:41.891450Z",
"shell.execute_reply": "2024-09-10T20:22:41.890539Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"I'm an AI created by OpenAI, and I don't have a personal name. How can I assist you today?\", additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 23, 'prompt_tokens': 11, 'total_tokens': 34}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_25624ae3a5', 'finish_reason': 'stop', 'logprobs': None}, id='run-3380f977-4b89-4f44-bc02-b64043b3166f-0', usage_metadata={'input_tokens': 11, 'output_tokens': 23, 'total_tokens': 34})"
"AIMessage(content=\"I'm an AI language model created by OpenAI, and I don't have a personal name. You can call me Assistant or any other name you prefer! How can I assist you today?\", response_metadata={'token_usage': {'completion_tokens': 37, 'prompt_tokens': 11, 'total_tokens': 48}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_ce0793330f', 'finish_reason': 'stop', 'logprobs': None}, id='run-3923e328-7715-4cd6-b215-98e4b6bf7c9d-0', usage_metadata={'input_tokens': 11, 'output_tokens': 37, 'total_tokens': 48})"
]
},
"execution_count": 6,
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
@@ -281,24 +202,17 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 10,
"id": "6c8755ba-c001-4f5a-a497-be3f1db83244",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:22:41.896413Z",
"iopub.status.busy": "2024-09-10T20:22:41.895967Z",
"iopub.status.idle": "2024-09-10T20:22:42.767565Z",
"shell.execute_reply": "2024-09-10T20:22:42.766619Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"My name is Claude. It's nice to meet you!\", additional_kwargs={}, response_metadata={'id': 'msg_01EFKSWpmsn2PSYPQa4cNHWb', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 11, 'output_tokens': 15}}, id='run-3c58f47c-41b9-4e56-92e7-fb9602e3787c-0', usage_metadata={'input_tokens': 11, 'output_tokens': 15, 'total_tokens': 26})"
"AIMessage(content=\"My name is Claude. It's nice to meet you!\", response_metadata={'id': 'msg_01RyYR64DoMPNCfHeNnroMXm', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 11, 'output_tokens': 15}}, id='run-22446159-3723-43e6-88df-b84797e7751d-0', usage_metadata={'input_tokens': 11, 'output_tokens': 15, 'total_tokens': 26})"
]
},
"execution_count": 7,
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
@@ -328,37 +242,28 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 7,
"id": "067dabee-1050-4110-ae24-c48eba01e13b",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:22:42.771941Z",
"iopub.status.busy": "2024-09-10T20:22:42.771606Z",
"iopub.status.idle": "2024-09-10T20:22:43.909206Z",
"shell.execute_reply": "2024-09-10T20:22:43.908496Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'name': 'GetPopulation',\n",
" 'args': {'location': 'Los Angeles, CA'},\n",
" 'id': 'call_Ga9m8FAArIyEjItHmztPYA22',\n",
" 'type': 'tool_call'},\n",
" 'id': 'call_sYT3PFMufHGWJD32Hi2CTNUP'},\n",
" {'name': 'GetPopulation',\n",
" 'args': {'location': 'New York, NY'},\n",
" 'id': 'call_jh2dEvBaAHRaw5JUDthOs7rt',\n",
" 'type': 'tool_call'}]"
" 'id': 'call_j1qjhxRnD3ffQmRyqjlI1Lnk'}]"
]
},
"execution_count": 8,
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from pydantic import BaseModel, Field\n",
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"class GetWeather(BaseModel):\n",
@@ -383,31 +288,22 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 8,
"id": "e57dfe9f-cd24-4e37-9ce9-ccf8daf78f89",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:22:43.912746Z",
"iopub.status.busy": "2024-09-10T20:22:43.912447Z",
"iopub.status.idle": "2024-09-10T20:22:46.437049Z",
"shell.execute_reply": "2024-09-10T20:22:46.436093Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'name': 'GetPopulation',\n",
" 'args': {'location': 'Los Angeles, CA'},\n",
" 'id': 'toolu_01JMufPf4F4t2zLj7miFeqXp',\n",
" 'type': 'tool_call'},\n",
" 'id': 'toolu_01CxEHxKtVbLBrvzFS7GQ5xR'},\n",
" {'name': 'GetPopulation',\n",
" 'args': {'location': 'New York City, NY'},\n",
" 'id': 'toolu_01RQBHcE8kEEbYTuuS8WqY1u',\n",
" 'type': 'tool_call'}]"
" 'id': 'toolu_013A79qt5toWSsKunFBDZd5S'}]"
]
},
"execution_count": 9,
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
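The only substantive difference in the tool-call outputs above is that one side of each hunk carries a `'type': 'tool_call'` key on every call dict and the other does not. A sketch of reading the list defensively so both shapes work — the call IDs `call_A`/`call_B` are hypothetical:

```python
def tool_call_ids(calls):
    # Treat a missing 'type' key as 'tool_call' so dicts from either
    # side of the version boundary are accepted.
    return [c["id"] for c in calls if c.get("type", "tool_call") == "tool_call"]


calls = [
    {"name": "GetPopulation", "args": {"location": "Los Angeles, CA"},
     "id": "call_A", "type": "tool_call"},  # newer shape, explicit type
    {"name": "GetPopulation", "args": {"location": "New York, NY"},
     "id": "call_B"},                        # older shape, no 'type' key
]

ids = tool_call_ids(calls)
```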

View File

@@ -71,7 +71,7 @@
"\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-4o-mini\")\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\")\n",
"openai_response = llm.invoke(\"hello\")\n",
"openai_response.usage_metadata"
]
@@ -182,13 +182,13 @@
"content=' you' id='run-adb20c31-60c7-43a2-99b2-d4a53ca5f623'\n",
"content=' today' id='run-adb20c31-60c7-43a2-99b2-d4a53ca5f623'\n",
"content='?' id='run-adb20c31-60c7-43a2-99b2-d4a53ca5f623'\n",
"content='' response_metadata={'finish_reason': 'stop', 'model_name': 'gpt-4o-mini'} id='run-adb20c31-60c7-43a2-99b2-d4a53ca5f623'\n",
"content='' response_metadata={'finish_reason': 'stop', 'model_name': 'gpt-3.5-turbo-0125'} id='run-adb20c31-60c7-43a2-99b2-d4a53ca5f623'\n",
"content='' id='run-adb20c31-60c7-43a2-99b2-d4a53ca5f623' usage_metadata={'input_tokens': 8, 'output_tokens': 9, 'total_tokens': 17}\n"
]
}
],
"source": [
"llm = ChatOpenAI(model=\"gpt-4o-mini\")\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\")\n",
"\n",
"aggregate = None\n",
"for chunk in llm.stream(\"hello\", stream_usage=True):\n",
@@ -252,7 +252,7 @@
"content=' you' id='run-8e758550-94b0-4cca-a298-57482793c25d'\n",
"content=' today' id='run-8e758550-94b0-4cca-a298-57482793c25d'\n",
"content='?' id='run-8e758550-94b0-4cca-a298-57482793c25d'\n",
"content='' response_metadata={'finish_reason': 'stop', 'model_name': 'gpt-4o-mini'} id='run-8e758550-94b0-4cca-a298-57482793c25d'\n"
"content='' response_metadata={'finish_reason': 'stop', 'model_name': 'gpt-3.5-turbo-0125'} id='run-8e758550-94b0-4cca-a298-57482793c25d'\n"
]
}
],
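With `stream_usage=True`, the transcript above shows token counts arriving only on the final, empty-content chunk. A stdlib sketch of aggregating `usage_metadata` across streamed chunks — the chunk dicts are simplified stand-ins for `AIMessageChunk` objects:

```python
def aggregate_usage(chunks):
    # Sum the usage_metadata dicts found on streamed chunks; most chunks
    # carry none, and typically only the last one does.
    total = {"input_tokens": 0, "output_tokens": 0, "total_tokens": 0}
    for chunk in chunks:
        usage = chunk.get("usage_metadata")
        if usage:
            for key in total:
                total[key] += usage.get(key, 0)
    return total


chunks = [
    {"content": "Hello"},
    {"content": "!"},
    {"content": "", "usage_metadata": {
        "input_tokens": 8, "output_tokens": 9, "total_tokens": 17}},
]

totals = aggregate_usage(chunks)
```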
@@ -289,7 +289,7 @@
}
],
"source": [
"from pydantic import BaseModel, Field\n",
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"class Joke(BaseModel):\n",
@@ -300,7 +300,7 @@
"\n",
"\n",
"llm = ChatOpenAI(\n",
" model=\"gpt-4o-mini\",\n",
" model=\"gpt-3.5-turbo-0125\",\n",
" stream_usage=True,\n",
")\n",
"# Under the hood, .with_structured_output binds tools to the\n",
@@ -362,7 +362,7 @@
"from langchain_community.callbacks.manager import get_openai_callback\n",
"\n",
"llm = ChatOpenAI(\n",
" model=\"gpt-4o-mini\",\n",
" model=\"gpt-3.5-turbo-0125\",\n",
" temperature=0,\n",
" stream_usage=True,\n",
")\n",

View File

@@ -77,7 +77,7 @@
"source": [
"from langchain_openai import ChatOpenAI\n",
"\n",
"chat = ChatOpenAI(model=\"gpt-4o-mini\")"
"chat = ChatOpenAI(model=\"gpt-3.5-turbo-0125\")"
]
},
{
@@ -191,7 +191,7 @@
{
"data": {
"text/plain": [
"AIMessage(content='You just asked me to translate the sentence \"I love programming\" from English to French.', response_metadata={'token_usage': {'completion_tokens': 18, 'prompt_tokens': 61, 'total_tokens': 79}, 'model_name': 'gpt-4o-mini', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-5cbb21c2-9c30-4031-8ea8-bfc497989535-0', usage_metadata={'input_tokens': 61, 'output_tokens': 18, 'total_tokens': 79})"
"AIMessage(content='You just asked me to translate the sentence \"I love programming\" from English to French.', response_metadata={'token_usage': {'completion_tokens': 18, 'prompt_tokens': 61, 'total_tokens': 79}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-5cbb21c2-9c30-4031-8ea8-bfc497989535-0', usage_metadata={'input_tokens': 61, 'output_tokens': 18, 'total_tokens': 79})"
]
},
"execution_count": 5,
@@ -312,7 +312,7 @@
{
"data": {
"text/plain": [
"AIMessage(content='\"J\\'adore la programmation.\"', response_metadata={'token_usage': {'completion_tokens': 9, 'prompt_tokens': 39, 'total_tokens': 48}, 'model_name': 'gpt-4o-mini', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-648b0822-b0bb-47a2-8e7d-7d34744be8f2-0', usage_metadata={'input_tokens': 39, 'output_tokens': 9, 'total_tokens': 48})"
"AIMessage(content='\"J\\'adore la programmation.\"', response_metadata={'token_usage': {'completion_tokens': 9, 'prompt_tokens': 39, 'total_tokens': 48}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-648b0822-b0bb-47a2-8e7d-7d34744be8f2-0', usage_metadata={'input_tokens': 39, 'output_tokens': 9, 'total_tokens': 48})"
]
},
"execution_count": 8,
@@ -342,7 +342,7 @@
{
"data": {
"text/plain": [
"AIMessage(content='You asked me to translate the sentence \"I love programming\" from English to French.', response_metadata={'token_usage': {'completion_tokens': 17, 'prompt_tokens': 63, 'total_tokens': 80}, 'model_name': 'gpt-4o-mini', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-5950435c-1dc2-43a6-836f-f989fd62c95e-0', usage_metadata={'input_tokens': 63, 'output_tokens': 17, 'total_tokens': 80})"
"AIMessage(content='You asked me to translate the sentence \"I love programming\" from English to French.', response_metadata={'token_usage': {'completion_tokens': 17, 'prompt_tokens': 63, 'total_tokens': 80}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-5950435c-1dc2-43a6-836f-f989fd62c95e-0', usage_metadata={'input_tokens': 63, 'output_tokens': 17, 'total_tokens': 80})"
]
},
"execution_count": 9,
@@ -421,7 +421,7 @@
{
"data": {
"text/plain": [
"AIMessage(content='Your name is Nemo.', response_metadata={'token_usage': {'completion_tokens': 6, 'prompt_tokens': 66, 'total_tokens': 72}, 'model_name': 'gpt-4o-mini', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-f8aabef8-631a-4238-a39b-701e881fbe47-0', usage_metadata={'input_tokens': 66, 'output_tokens': 6, 'total_tokens': 72})"
"AIMessage(content='Your name is Nemo.', response_metadata={'token_usage': {'completion_tokens': 6, 'prompt_tokens': 66, 'total_tokens': 72}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-f8aabef8-631a-4238-a39b-701e881fbe47-0', usage_metadata={'input_tokens': 66, 'output_tokens': 6, 'total_tokens': 72})"
]
},
"execution_count": 22,
@@ -501,7 +501,7 @@
{
"data": {
"text/plain": [
"AIMessage(content='P. Sherman is a fictional character from the animated movie \"Finding Nemo\" who lives at 42 Wallaby Way, Sydney.', response_metadata={'token_usage': {'completion_tokens': 27, 'prompt_tokens': 53, 'total_tokens': 80}, 'model_name': 'gpt-4o-mini', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-5642ef3a-fdbe-43cf-a575-d1785976a1b9-0', usage_metadata={'input_tokens': 53, 'output_tokens': 27, 'total_tokens': 80})"
"AIMessage(content='P. Sherman is a fictional character from the animated movie \"Finding Nemo\" who lives at 42 Wallaby Way, Sydney.', response_metadata={'token_usage': {'completion_tokens': 27, 'prompt_tokens': 53, 'total_tokens': 80}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-5642ef3a-fdbe-43cf-a575-d1785976a1b9-0', usage_metadata={'input_tokens': 53, 'output_tokens': 27, 'total_tokens': 80})"
]
},
"execution_count": 24,
@@ -529,9 +529,9 @@
" HumanMessage(content='How are you today?'),\n",
" AIMessage(content='Fine thanks!'),\n",
" HumanMessage(content=\"What's my name?\"),\n",
" AIMessage(content='Your name is Nemo.', response_metadata={'token_usage': {'completion_tokens': 6, 'prompt_tokens': 66, 'total_tokens': 72}, 'model_name': 'gpt-4o-mini', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-f8aabef8-631a-4238-a39b-701e881fbe47-0', usage_metadata={'input_tokens': 66, 'output_tokens': 6, 'total_tokens': 72}),\n",
" AIMessage(content='Your name is Nemo.', response_metadata={'token_usage': {'completion_tokens': 6, 'prompt_tokens': 66, 'total_tokens': 72}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-f8aabef8-631a-4238-a39b-701e881fbe47-0', usage_metadata={'input_tokens': 66, 'output_tokens': 6, 'total_tokens': 72}),\n",
" HumanMessage(content='Where does P. Sherman live?'),\n",
" AIMessage(content='P. Sherman is a fictional character from the animated movie \"Finding Nemo\" who lives at 42 Wallaby Way, Sydney.', response_metadata={'token_usage': {'completion_tokens': 27, 'prompt_tokens': 53, 'total_tokens': 80}, 'model_name': 'gpt-4o-mini', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-5642ef3a-fdbe-43cf-a575-d1785976a1b9-0', usage_metadata={'input_tokens': 53, 'output_tokens': 27, 'total_tokens': 80})]"
" AIMessage(content='P. Sherman is a fictional character from the animated movie \"Finding Nemo\" who lives at 42 Wallaby Way, Sydney.', response_metadata={'token_usage': {'completion_tokens': 27, 'prompt_tokens': 53, 'total_tokens': 80}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-5642ef3a-fdbe-43cf-a575-d1785976a1b9-0', usage_metadata={'input_tokens': 53, 'output_tokens': 27, 'total_tokens': 80})]"
]
},
"execution_count": 25,
@@ -565,7 +565,7 @@
{
"data": {
"text/plain": [
"AIMessage(content=\"I'm sorry, but I don't have access to your personal information, so I don't know your name. How else may I assist you today?\", response_metadata={'token_usage': {'completion_tokens': 31, 'prompt_tokens': 74, 'total_tokens': 105}, 'model_name': 'gpt-4o-mini', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-0ab03495-1f7c-4151-9070-56d2d1c565ff-0', usage_metadata={'input_tokens': 74, 'output_tokens': 31, 'total_tokens': 105})"
"AIMessage(content=\"I'm sorry, but I don't have access to your personal information, so I don't know your name. How else may I assist you today?\", response_metadata={'token_usage': {'completion_tokens': 31, 'prompt_tokens': 74, 'total_tokens': 105}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-0ab03495-1f7c-4151-9070-56d2d1c565ff-0', usage_metadata={'input_tokens': 74, 'output_tokens': 31, 'total_tokens': 105})"
]
},
"execution_count": 27,

View File

@@ -71,7 +71,7 @@
"source": [
"from langchain_openai import ChatOpenAI\n",
"\n",
"chat = ChatOpenAI(model=\"gpt-4o-mini\", temperature=0.2)"
"chat = ChatOpenAI(model=\"gpt-3.5-turbo-1106\", temperature=0.2)"
]
},
{

View File

@@ -70,7 +70,7 @@
"\n",
"# Choose the LLM that will drive the agent\n",
"# Only certain models support this\n",
"chat = ChatOpenAI(model=\"gpt-4o-mini\", temperature=0)"
"chat = ChatOpenAI(model=\"gpt-3.5-turbo-1106\", temperature=0)"
]
},
{

View File

@@ -58,8 +58,7 @@
"import os\n",
"from getpass import getpass\n",
"\n",
"if \"OPENAI_API_KEY\" not in os.environ:\n",
" os.environ[\"OPENAI_API_KEY\"] = getpass()"
"os.environ[\"OPENAI_API_KEY\"] = getpass()"
]
},
{
@@ -282,8 +281,7 @@
"import os\n",
"from getpass import getpass\n",
"\n",
"if \"ANTHROPIC_API_KEY\" not in os.environ:\n",
" os.environ[\"ANTHROPIC_API_KEY\"] = getpass()"
"os.environ[\"ANTHROPIC_API_KEY\"] = getpass()"
]
},
{

View File

@@ -258,7 +258,7 @@
"from langchain.retrievers.document_compressors import LLMListwiseRerank\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-4o-mini\", temperature=0)\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)\n",
"\n",
"_filter = LLMListwiseRerank.from_llm(llm, top_n=1)\n",
"compression_retriever = ContextualCompressionRetriever(\n",

View File

@@ -190,7 +190,7 @@
"metadata": {},
"outputs": [],
"source": [
"from pydantic import BaseModel, Field\n",
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"class GSchema(BaseModel):\n",
@@ -285,7 +285,7 @@
"\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-4o-mini\", temperature=0)"
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)"
]
},
{
@@ -362,11 +362,11 @@
"name": "stdout",
"output_type": "stream",
"text": [
"{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_W8cnfOjwqEn4cFcg19LN9mYD', 'function': {'arguments': '{\"__arg1\":\"dogs\"}', 'name': 'pet_info_retriever'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 19, 'prompt_tokens': 60, 'total_tokens': 79}, 'model_name': 'gpt-4o-mini', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-d7f81de9-1fb7-4caf-81ed-16dcdb0b2ab4-0', tool_calls=[{'name': 'pet_info_retriever', 'args': {'__arg1': 'dogs'}, 'id': 'call_W8cnfOjwqEn4cFcg19LN9mYD'}], usage_metadata={'input_tokens': 60, 'output_tokens': 19, 'total_tokens': 79})]}}\n",
"{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_W8cnfOjwqEn4cFcg19LN9mYD', 'function': {'arguments': '{\"__arg1\":\"dogs\"}', 'name': 'pet_info_retriever'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 19, 'prompt_tokens': 60, 'total_tokens': 79}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-d7f81de9-1fb7-4caf-81ed-16dcdb0b2ab4-0', tool_calls=[{'name': 'pet_info_retriever', 'args': {'__arg1': 'dogs'}, 'id': 'call_W8cnfOjwqEn4cFcg19LN9mYD'}], usage_metadata={'input_tokens': 60, 'output_tokens': 19, 'total_tokens': 79})]}}\n",
"----\n",
"{'tools': {'messages': [ToolMessage(content=\"[Document(id='86f835fe-4bbe-4ec6-aeb4-489a8b541707', page_content='Dogs are great companions, known for their loyalty and friendliness.')]\", name='pet_info_retriever', tool_call_id='call_W8cnfOjwqEn4cFcg19LN9mYD')]}}\n",
"----\n",
"{'agent': {'messages': [AIMessage(content='Dogs are known for being great companions, known for their loyalty and friendliness.', response_metadata={'token_usage': {'completion_tokens': 18, 'prompt_tokens': 134, 'total_tokens': 152}, 'model_name': 'gpt-4o-mini', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-9ca5847a-a5eb-44c0-a774-84cc2c5bbc5b-0', usage_metadata={'input_tokens': 134, 'output_tokens': 18, 'total_tokens': 152})]}}\n",
"{'agent': {'messages': [AIMessage(content='Dogs are known for being great companions, known for their loyalty and friendliness.', response_metadata={'token_usage': {'completion_tokens': 18, 'prompt_tokens': 134, 'total_tokens': 152}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-9ca5847a-a5eb-44c0-a774-84cc2c5bbc5b-0', usage_metadata={'input_tokens': 134, 'output_tokens': 18, 'total_tokens': 152})]}}\n",
"----\n"
]
}
@@ -497,11 +497,11 @@
"name": "stdout",
"output_type": "stream",
"text": [
"{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_17iLPWvOD23zqwd1QVQ00Y63', 'function': {'arguments': '{\"question\":\"What are dogs known for according to pirates?\",\"answer_style\":\"quote\"}', 'name': 'pet_expert'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 28, 'prompt_tokens': 59, 'total_tokens': 87}, 'model_name': 'gpt-4o-mini', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-7fef44f3-7bba-4e63-8c51-2ad9c5e65e2e-0', tool_calls=[{'name': 'pet_expert', 'args': {'question': 'What are dogs known for according to pirates?', 'answer_style': 'quote'}, 'id': 'call_17iLPWvOD23zqwd1QVQ00Y63'}], usage_metadata={'input_tokens': 59, 'output_tokens': 28, 'total_tokens': 87})]}}\n",
"{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_17iLPWvOD23zqwd1QVQ00Y63', 'function': {'arguments': '{\"question\":\"What are dogs known for according to pirates?\",\"answer_style\":\"quote\"}', 'name': 'pet_expert'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 28, 'prompt_tokens': 59, 'total_tokens': 87}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-7fef44f3-7bba-4e63-8c51-2ad9c5e65e2e-0', tool_calls=[{'name': 'pet_expert', 'args': {'question': 'What are dogs known for according to pirates?', 'answer_style': 'quote'}, 'id': 'call_17iLPWvOD23zqwd1QVQ00Y63'}], usage_metadata={'input_tokens': 59, 'output_tokens': 28, 'total_tokens': 87})]}}\n",
"----\n",
"{'tools': {'messages': [ToolMessage(content='\"Dogs are known for their loyalty and friendliness, making them great companions for pirates on long sea voyages.\"', name='pet_expert', tool_call_id='call_17iLPWvOD23zqwd1QVQ00Y63')]}}\n",
"----\n",
"{'agent': {'messages': [AIMessage(content='According to pirates, dogs are known for their loyalty and friendliness, making them great companions for pirates on long sea voyages.', response_metadata={'token_usage': {'completion_tokens': 27, 'prompt_tokens': 119, 'total_tokens': 146}, 'model_name': 'gpt-4o-mini', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-5a30edc3-7be0-4743-b980-ca2f8cad9b8d-0', usage_metadata={'input_tokens': 119, 'output_tokens': 27, 'total_tokens': 146})]}}\n",
"{'agent': {'messages': [AIMessage(content='According to pirates, dogs are known for their loyalty and friendliness, making them great companions for pirates on long sea voyages.', response_metadata={'token_usage': {'completion_tokens': 27, 'prompt_tokens': 119, 'total_tokens': 146}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-5a30edc3-7be0-4743-b980-ca2f8cad9b8d-0', usage_metadata={'input_tokens': 119, 'output_tokens': 27, 'total_tokens': 146})]}}\n",
"----\n"
]
}

View File

@@ -13,7 +13,7 @@
"|---------------|---------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n",
"| name | str | Must be unique within a set of tools provided to an LLM or agent. |\n",
"| description | str | Describes what the tool does. Used as context by the LLM or agent. |\n",
"| args_schema | pydantic.BaseModel | Optional but recommended, and required if using callback handlers. It can be used to provide more information (e.g., few-shot examples) or validation for expected parameters. |\n",
"| args_schema | langchain.pydantic_v1.BaseModel | Optional but recommended, and required if using callback handlers. It can be used to provide more information (e.g., few-shot examples) or validation for expected parameters. |\n",
"| return_direct | boolean | Only relevant for agents. When True, after invoking the given tool, the agent will stop and return the result directly to the user. |\n",
"\n",
"LangChain supports the creation of tools from:\n",
@@ -48,14 +48,7 @@
"cell_type": "code",
"execution_count": 1,
"id": "cc7005cd-072f-4d37-8453-6297468e5192",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:25:52.645451Z",
"iopub.status.busy": "2024-09-10T20:25:52.645081Z",
"iopub.status.idle": "2024-09-10T20:25:53.030958Z",
"shell.execute_reply": "2024-09-10T20:25:53.030669Z"
}
},
"metadata": {},
"outputs": [
{
"name": "stdout",
@@ -95,14 +88,7 @@
"cell_type": "code",
"execution_count": 2,
"id": "0c0991db-b997-4611-be37-4346e660506b",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:25:53.032544Z",
"iopub.status.busy": "2024-09-10T20:25:53.032420Z",
"iopub.status.idle": "2024-09-10T20:25:53.035349Z",
"shell.execute_reply": "2024-09-10T20:25:53.035123Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.tools import tool\n",
@@ -126,29 +112,22 @@
"cell_type": "code",
"execution_count": 3,
"id": "5626423f-053e-4a66-adca-1d794d835397",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:25:53.036658Z",
"iopub.status.busy": "2024-09-10T20:25:53.036574Z",
"iopub.status.idle": "2024-09-10T20:25:53.041154Z",
"shell.execute_reply": "2024-09-10T20:25:53.040964Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'description': 'Multiply a by the maximum of b.',\n",
" 'properties': {'a': {'description': 'scale factor',\n",
" 'title': 'A',\n",
"{'title': 'multiply_by_maxSchema',\n",
" 'description': 'Multiply a by the maximum of b.',\n",
" 'type': 'object',\n",
" 'properties': {'a': {'title': 'A',\n",
" 'description': 'scale factor',\n",
" 'type': 'string'},\n",
" 'b': {'description': 'list of ints over which to take maximum',\n",
" 'items': {'type': 'integer'},\n",
" 'title': 'B',\n",
" 'type': 'array'}},\n",
" 'required': ['a', 'b'],\n",
" 'title': 'multiply_by_maxSchema',\n",
" 'type': 'object'}"
" 'b': {'title': 'B',\n",
" 'description': 'list of ints over which to take maximum',\n",
" 'type': 'array',\n",
" 'items': {'type': 'integer'}}},\n",
" 'required': ['a', 'b']}"
]
},
"execution_count": 3,
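The two sides of the schema hunk above differ only in key ordering and where `title`/`type` sit (pydantic v2 emits its JSON schema with differently ordered keys than `pydantic_v1`); the schemas themselves are identical. Since Python dict equality ignores insertion order, a quick stdlib check confirms this — the dicts below are transcribed from the hunk:

```python
# Schema as printed under pydantic v2 (keys alphabetized).
schema_v2 = {
    "description": "Multiply a by the maximum of b.",
    "properties": {
        "a": {"description": "scale factor", "title": "A", "type": "string"},
        "b": {"description": "list of ints over which to take maximum",
              "items": {"type": "integer"}, "title": "B", "type": "array"},
    },
    "required": ["a", "b"],
    "title": "multiply_by_maxSchema",
    "type": "object",
}

# Same schema as printed under pydantic_v1 (title/type first).
schema_v1 = {
    "title": "multiply_by_maxSchema",
    "description": "Multiply a by the maximum of b.",
    "type": "object",
    "properties": {
        "a": {"title": "A", "description": "scale factor", "type": "string"},
        "b": {"title": "B",
              "description": "list of ints over which to take maximum",
              "type": "array", "items": {"type": "integer"}},
    },
    "required": ["a", "b"],
}

same = schema_v1 == schema_v2  # dict equality ignores key order
```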
@@ -184,14 +163,7 @@
"cell_type": "code",
"execution_count": 4,
"id": "9216d03a-f6ea-4216-b7e1-0661823a4c0b",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:25:53.042516Z",
"iopub.status.busy": "2024-09-10T20:25:53.042427Z",
"iopub.status.idle": "2024-09-10T20:25:53.045217Z",
"shell.execute_reply": "2024-09-10T20:25:53.045010Z"
}
},
"metadata": {},
"outputs": [
{
"name": "stdout",
@@ -199,13 +171,13 @@
"text": [
"multiplication-tool\n",
"Multiply two numbers.\n",
"{'a': {'description': 'first number', 'title': 'A', 'type': 'integer'}, 'b': {'description': 'second number', 'title': 'B', 'type': 'integer'}}\n",
"{'a': {'title': 'A', 'description': 'first number', 'type': 'integer'}, 'b': {'title': 'B', 'description': 'second number', 'type': 'integer'}}\n",
"True\n"
]
}
],
"source": [
"from pydantic import BaseModel, Field\n",
"from langchain.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"class CalculatorInput(BaseModel):\n",
@@ -246,26 +218,19 @@
"cell_type": "code",
"execution_count": 5,
"id": "336f5538-956e-47d5-9bde-b732559f9e61",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:25:53.046526Z",
"iopub.status.busy": "2024-09-10T20:25:53.046456Z",
"iopub.status.idle": "2024-09-10T20:25:53.050045Z",
"shell.execute_reply": "2024-09-10T20:25:53.049836Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'description': 'The foo.',\n",
" 'properties': {'bar': {'description': 'The bar.',\n",
" 'title': 'Bar',\n",
"{'title': 'fooSchema',\n",
" 'description': 'The foo.',\n",
" 'type': 'object',\n",
" 'properties': {'bar': {'title': 'Bar',\n",
" 'description': 'The bar.',\n",
" 'type': 'string'},\n",
" 'baz': {'description': 'The baz.', 'title': 'Baz', 'type': 'integer'}},\n",
" 'required': ['bar', 'baz'],\n",
" 'title': 'fooSchema',\n",
" 'type': 'object'}"
" 'baz': {'title': 'Baz', 'description': 'The baz.', 'type': 'integer'}},\n",
" 'required': ['bar', 'baz']}"
]
},
"execution_count": 5,
@@ -312,14 +277,7 @@
"cell_type": "code",
"execution_count": 6,
"id": "564fbe6f-11df-402d-b135-ef6ff25e1e63",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:25:53.051302Z",
"iopub.status.busy": "2024-09-10T20:25:53.051218Z",
"iopub.status.idle": "2024-09-10T20:25:53.059704Z",
"shell.execute_reply": "2024-09-10T20:25:53.059490Z"
}
},
"metadata": {},
"outputs": [
{
"name": "stdout",
@@ -362,14 +320,7 @@
"cell_type": "code",
"execution_count": 7,
"id": "6bc055d4-1fbe-4db5-8881-9c382eba6b1b",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:25:53.060971Z",
"iopub.status.busy": "2024-09-10T20:25:53.060883Z",
"iopub.status.idle": "2024-09-10T20:25:53.064615Z",
"shell.execute_reply": "2024-09-10T20:25:53.064408Z"
}
},
"metadata": {},
"outputs": [
{
"name": "stdout",
@@ -378,7 +329,7 @@
"6\n",
"Calculator\n",
"multiply numbers\n",
"{'a': {'description': 'first number', 'title': 'A', 'type': 'integer'}, 'b': {'description': 'second number', 'title': 'B', 'type': 'integer'}}\n"
"{'a': {'title': 'A', 'description': 'first number', 'type': 'integer'}, 'b': {'title': 'B', 'description': 'second number', 'type': 'integer'}}\n"
]
}
],
@@ -422,32 +373,17 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 9,
"id": "8ef593c5-cf72-4c10-bfc9-7d21874a0c24",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:25:53.065797Z",
"iopub.status.busy": "2024-09-10T20:25:53.065733Z",
"iopub.status.idle": "2024-09-10T20:25:53.130458Z",
"shell.execute_reply": "2024-09-10T20:25:53.130229Z"
}
},
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/var/folders/4j/2rz3865x6qg07tx43146py8h0000gn/T/ipykernel_95770/2548361071.py:14: LangChainBetaWarning: This API is in beta and may change in the future.\n",
" as_tool = chain.as_tool(\n"
]
},
{
"data": {
"text/plain": [
"{'answer_style': {'title': 'Answer Style', 'type': 'string'}}"
]
},
"execution_count": 8,
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
@@ -492,26 +428,19 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 10,
"id": "1dad8f8e",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:25:53.131904Z",
"iopub.status.busy": "2024-09-10T20:25:53.131803Z",
"iopub.status.idle": "2024-09-10T20:25:53.136797Z",
"shell.execute_reply": "2024-09-10T20:25:53.136563Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"from typing import Optional, Type\n",
"\n",
"from langchain.pydantic_v1 import BaseModel\n",
"from langchain_core.callbacks import (\n",
" AsyncCallbackManagerForToolRun,\n",
" CallbackManagerForToolRun,\n",
")\n",
"from langchain_core.tools import BaseTool\n",
"from pydantic import BaseModel\n",
"\n",
"\n",
"class CalculatorInput(BaseModel):\n",
@@ -519,11 +448,9 @@
" b: int = Field(description=\"second number\")\n",
"\n",
"\n",
"# Note: It's important that every field has type hints. BaseTool is a\n",
"# Pydantic class and not having type hints can lead to unexpected behavior.\n",
"class CustomCalculatorTool(BaseTool):\n",
" name: str = \"Calculator\"\n",
" description: str = \"useful for when you need to answer questions about math\"\n",
" name = \"Calculator\"\n",
" description = \"useful for when you need to answer questions about math\"\n",
" args_schema: Type[BaseModel] = CalculatorInput\n",
" return_direct: bool = True\n",
"\n",
@@ -550,16 +477,9 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 11,
"id": "bb551c33",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:25:53.138074Z",
"iopub.status.busy": "2024-09-10T20:25:53.138007Z",
"iopub.status.idle": "2024-09-10T20:25:53.141360Z",
"shell.execute_reply": "2024-09-10T20:25:53.141158Z"
}
},
"metadata": {},
"outputs": [
{
"name": "stdout",
@@ -567,7 +487,7 @@
"text": [
"Calculator\n",
"useful for when you need to answer questions about math\n",
"{'a': {'description': 'first number', 'title': 'A', 'type': 'integer'}, 'b': {'description': 'second number', 'title': 'B', 'type': 'integer'}}\n",
"{'a': {'title': 'A', 'description': 'first number', 'type': 'integer'}, 'b': {'title': 'B', 'description': 'second number', 'type': 'integer'}}\n",
"True\n",
"6\n",
"6\n"
@@ -608,16 +528,9 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 12,
"id": "6615cb77-fd4c-4676-8965-f92cc71d4944",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:25:53.142587Z",
"iopub.status.busy": "2024-09-10T20:25:53.142504Z",
"iopub.status.idle": "2024-09-10T20:25:53.147205Z",
"shell.execute_reply": "2024-09-10T20:25:53.146995Z"
}
},
"metadata": {},
"outputs": [
{
"name": "stdout",
@@ -647,16 +560,9 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 13,
"id": "bb2af583-eadd-41f4-a645-bf8748bd3dcd",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:25:53.148383Z",
"iopub.status.busy": "2024-09-10T20:25:53.148307Z",
"iopub.status.idle": "2024-09-10T20:25:53.152684Z",
"shell.execute_reply": "2024-09-10T20:25:53.152486Z"
}
},
"metadata": {},
"outputs": [
{
"name": "stdout",
@@ -699,16 +605,9 @@
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 14,
"id": "4ad0932c-8610-4278-8c57-f9218f654c8a",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:25:53.153849Z",
"iopub.status.busy": "2024-09-10T20:25:53.153773Z",
"iopub.status.idle": "2024-09-10T20:25:53.158312Z",
"shell.execute_reply": "2024-09-10T20:25:53.158130Z"
}
},
"metadata": {},
"outputs": [
{
"name": "stdout",
@@ -751,16 +650,9 @@
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": 15,
"id": "7094c0e8-6192-4870-a942-aad5b5ae48fd",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:25:53.159440Z",
"iopub.status.busy": "2024-09-10T20:25:53.159364Z",
"iopub.status.idle": "2024-09-10T20:25:53.160922Z",
"shell.execute_reply": "2024-09-10T20:25:53.160712Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.tools import ToolException\n",
@@ -781,16 +673,9 @@
},
{
"cell_type": "code",
"execution_count": 15,
"execution_count": 16,
"id": "b4d22022-b105-4ccc-a15b-412cb9ea3097",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:25:53.162046Z",
"iopub.status.busy": "2024-09-10T20:25:53.161968Z",
"iopub.status.idle": "2024-09-10T20:25:53.165236Z",
"shell.execute_reply": "2024-09-10T20:25:53.165052Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
@@ -798,7 +683,7 @@
"'Error: There is no city by the name of foobar.'"
]
},
"execution_count": 15,
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
@@ -822,16 +707,9 @@
},
{
"cell_type": "code",
"execution_count": 16,
"execution_count": 17,
"id": "3fad1728-d367-4e1b-9b54-3172981271cf",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:25:53.166372Z",
"iopub.status.busy": "2024-09-10T20:25:53.166294Z",
"iopub.status.idle": "2024-09-10T20:25:53.169739Z",
"shell.execute_reply": "2024-09-10T20:25:53.169553Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
@@ -839,7 +717,7 @@
"\"There is no such city, but it's probably above 0K there!\""
]
},
"execution_count": 16,
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
@@ -863,16 +741,9 @@
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": 18,
"id": "ebfe7c1f-318d-4e58-99e1-f31e69473c46",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:25:53.170937Z",
"iopub.status.busy": "2024-09-10T20:25:53.170859Z",
"iopub.status.idle": "2024-09-10T20:25:53.174498Z",
"shell.execute_reply": "2024-09-10T20:25:53.174304Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
@@ -880,7 +751,7 @@
"'The following errors occurred during tool execution: `Error: There is no city by the name of foobar.`'"
]
},
"execution_count": 17,
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
@@ -920,16 +791,9 @@
},
{
"cell_type": "code",
"execution_count": 18,
"execution_count": 1,
"id": "14905425-0334-43a0-9de9-5bcf622ede0e",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:25:53.175683Z",
"iopub.status.busy": "2024-09-10T20:25:53.175605Z",
"iopub.status.idle": "2024-09-10T20:25:53.178798Z",
"shell.execute_reply": "2024-09-10T20:25:53.178601Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"import random\n",
@@ -956,16 +820,9 @@
},
{
"cell_type": "code",
"execution_count": 19,
"execution_count": 9,
"id": "0f2e1528-404b-46e6-b87c-f0957c4b9217",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:25:53.179881Z",
"iopub.status.busy": "2024-09-10T20:25:53.179807Z",
"iopub.status.idle": "2024-09-10T20:25:53.182100Z",
"shell.execute_reply": "2024-09-10T20:25:53.181940Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
@@ -973,7 +830,7 @@
"'Successfully generated array of 10 random ints in [0, 9].'"
]
},
"execution_count": 19,
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
@@ -992,24 +849,17 @@
},
{
"cell_type": "code",
"execution_count": 20,
"execution_count": 3,
"id": "cc197777-26eb-46b3-a83b-c2ce116c6311",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:25:53.183238Z",
"iopub.status.busy": "2024-09-10T20:25:53.183170Z",
"iopub.status.idle": "2024-09-10T20:25:53.185752Z",
"shell.execute_reply": "2024-09-10T20:25:53.185567Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ToolMessage(content='Successfully generated array of 10 random ints in [0, 9].', name='generate_random_ints', tool_call_id='123', artifact=[4, 8, 2, 4, 1, 0, 9, 5, 8, 1])"
"ToolMessage(content='Successfully generated array of 10 random ints in [0, 9].', name='generate_random_ints', tool_call_id='123', artifact=[1, 4, 2, 5, 3, 9, 0, 4, 7, 7])"
]
},
"execution_count": 20,
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
@@ -1035,16 +885,9 @@
},
{
"cell_type": "code",
"execution_count": 21,
"execution_count": 6,
"id": "fe1a09d1-378b-4b91-bb5e-0697c3d7eb92",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:25:53.186884Z",
"iopub.status.busy": "2024-09-10T20:25:53.186803Z",
"iopub.status.idle": "2024-09-10T20:25:53.190718Z",
"shell.execute_reply": "2024-09-10T20:25:53.190494Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.tools import BaseTool\n",
@@ -1074,24 +917,17 @@
},
{
"cell_type": "code",
"execution_count": 22,
"execution_count": 8,
"id": "8c3d16f6-1c4a-48ab-b05a-38547c592e79",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:25:53.191872Z",
"iopub.status.busy": "2024-09-10T20:25:53.191794Z",
"iopub.status.idle": "2024-09-10T20:25:53.194396Z",
"shell.execute_reply": "2024-09-10T20:25:53.194184Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ToolMessage(content='Generated 3 floats in [0.1, 3.3333], rounded to 4 decimals.', name='generate_random_floats', tool_call_id='123', artifact=[1.5566, 0.5134, 2.7914])"
"ToolMessage(content='Generated 3 floats in [0.1, 3.3333], rounded to 4 decimals.', name='generate_random_floats', tool_call_id='123', artifact=[1.4277, 0.7578, 2.4871])"
]
},
"execution_count": 22,
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}


@@ -90,8 +90,7 @@
"import getpass\n",
"import os\n",
"\n",
"if \"OPENAI_API_KEY\" not in os.environ:\n",
" os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"OpenAI API Key:\")"
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"OpenAI API Key:\")"
]
},
{


@@ -29,16 +29,9 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 2,
"id": "89579144-bcb3-490a-8036-86a0a6bcd56b",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:26:41.780410Z",
"iopub.status.busy": "2024-09-10T20:26:41.780102Z",
"iopub.status.idle": "2024-09-10T20:26:42.147112Z",
"shell.execute_reply": "2024-09-10T20:26:42.146838Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
@@ -74,24 +67,17 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 3,
"id": "610c3025-ea63-4cd7-88bd-c8cbcb4d8a3f",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:26:42.148746Z",
"iopub.status.busy": "2024-09-10T20:26:42.148621Z",
"iopub.status.idle": "2024-09-10T20:26:42.162044Z",
"shell.execute_reply": "2024-09-10T20:26:42.161794Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ChatPromptValue(messages=[SystemMessage(content=\"You are an expert extraction algorithm. Only extract relevant information from the text. If you do not know the value of an attribute asked to extract, return null for the attribute's value.\", additional_kwargs={}, response_metadata={}), HumanMessage(content='testing 1 2 3', additional_kwargs={}, response_metadata={}), HumanMessage(content='this is some text', additional_kwargs={}, response_metadata={})])"
"ChatPromptValue(messages=[SystemMessage(content=\"You are an expert extraction algorithm. Only extract relevant information from the text. If you do not know the value of an attribute asked to extract, return null for the attribute's value.\"), HumanMessage(content='testing 1 2 3'), HumanMessage(content='this is some text')])"
]
},
"execution_count": 2,
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
@@ -118,22 +104,15 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 4,
"id": "d875a49a-d2cb-4b9e-b5bf-41073bc3905c",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:26:42.163477Z",
"iopub.status.busy": "2024-09-10T20:26:42.163391Z",
"iopub.status.idle": "2024-09-10T20:26:42.324449Z",
"shell.execute_reply": "2024-09-10T20:26:42.324206Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"from typing import List, Optional\n",
"\n",
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"from langchain_openai import ChatOpenAI\n",
"from pydantic import BaseModel, Field\n",
"\n",
"\n",
"class Person(BaseModel):\n",
@@ -183,16 +162,9 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 5,
"id": "08356810-77ce-4e68-99d9-faa0326f2cee",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:26:42.326100Z",
"iopub.status.busy": "2024-09-10T20:26:42.326016Z",
"iopub.status.idle": "2024-09-10T20:26:42.329260Z",
"shell.execute_reply": "2024-09-10T20:26:42.329014Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"import uuid\n",
@@ -205,7 +177,7 @@
" SystemMessage,\n",
" ToolMessage,\n",
")\n",
"from pydantic import BaseModel, Field\n",
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"class Example(TypedDict):\n",
@@ -266,16 +238,9 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 6,
"id": "7f59a745-5c81-4011-a4c5-a33ec1eca7ef",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:26:42.330580Z",
"iopub.status.busy": "2024-09-10T20:26:42.330488Z",
"iopub.status.idle": "2024-09-10T20:26:42.332813Z",
"shell.execute_reply": "2024-09-10T20:26:42.332598Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"examples = [\n",
@@ -308,29 +273,22 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 7,
"id": "976bb7b8-09c4-4a3e-80df-49a483705c08",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:26:42.333955Z",
"iopub.status.busy": "2024-09-10T20:26:42.333876Z",
"iopub.status.idle": "2024-09-10T20:26:42.336841Z",
"shell.execute_reply": "2024-09-10T20:26:42.336635Z"
}
},
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"system: content=\"You are an expert extraction algorithm. Only extract relevant information from the text. If you do not know the value of an attribute asked to extract, return null for the attribute's value.\" additional_kwargs={} response_metadata={}\n",
"human: content=\"The ocean is vast and blue. It's more than 20,000 feet deep. There are many fish in it.\" additional_kwargs={} response_metadata={}\n",
"ai: content='' additional_kwargs={} response_metadata={} tool_calls=[{'name': 'Data', 'args': {'people': []}, 'id': '240159b1-1405-4107-a07c-3c6b91b3d5b7', 'type': 'tool_call'}]\n",
"tool: content='You have correctly called this tool.' tool_call_id='240159b1-1405-4107-a07c-3c6b91b3d5b7'\n",
"human: content='Fiona traveled far from France to Spain.' additional_kwargs={} response_metadata={}\n",
"ai: content='' additional_kwargs={} response_metadata={} tool_calls=[{'name': 'Data', 'args': {'people': [{'name': 'Fiona', 'hair_color': None, 'height_in_meters': None}]}, 'id': '3fc521e4-d1d2-4c20-bf40-e3d72f1068da', 'type': 'tool_call'}]\n",
"tool: content='You have correctly called this tool.' tool_call_id='3fc521e4-d1d2-4c20-bf40-e3d72f1068da'\n",
"human: content='this is some text' additional_kwargs={} response_metadata={}\n"
"system: content=\"You are an expert extraction algorithm. Only extract relevant information from the text. If you do not know the value of an attribute asked to extract, return null for the attribute's value.\"\n",
"human: content=\"The ocean is vast and blue. It's more than 20,000 feet deep. There are many fish in it.\"\n",
"ai: content='' tool_calls=[{'name': 'Person', 'args': {'name': None, 'hair_color': None, 'height_in_meters': None}, 'id': 'b843ba77-4c9c-48ef-92a4-54e534f24521'}]\n",
"tool: content='You have correctly called this tool.' tool_call_id='b843ba77-4c9c-48ef-92a4-54e534f24521'\n",
"human: content='Fiona traveled far from France to Spain.'\n",
"ai: content='' tool_calls=[{'name': 'Person', 'args': {'name': 'Fiona', 'hair_color': None, 'height_in_meters': None}, 'id': '46f00d6b-50e5-4482-9406-b07bb10340f6'}]\n",
"tool: content='You have correctly called this tool.' tool_call_id='46f00d6b-50e5-4482-9406-b07bb10340f6'\n",
"human: content='this is some text'\n"
]
}
],
@@ -362,16 +320,9 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 8,
"id": "df2e1ee1-69e8-4c4d-b349-95f2e320317b",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:26:42.338001Z",
"iopub.status.busy": "2024-09-10T20:26:42.337915Z",
"iopub.status.idle": "2024-09-10T20:26:42.349121Z",
"shell.execute_reply": "2024-09-10T20:26:42.348908Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"# | output: false\n",
@@ -392,16 +343,9 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 9,
"id": "dbfea43d-769b-42e9-a76f-ce722f7d6f93",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:26:42.350335Z",
"iopub.status.busy": "2024-09-10T20:26:42.350264Z",
"iopub.status.idle": "2024-09-10T20:26:42.424894Z",
"shell.execute_reply": "2024-09-10T20:26:42.424623Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"runnable = prompt | llm.with_structured_output(\n",
@@ -423,49 +367,18 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 10,
"id": "66545cab-af2a-40a4-9dc9-b4110458b7d3",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:26:42.426258Z",
"iopub.status.busy": "2024-09-10T20:26:42.426187Z",
"iopub.status.idle": "2024-09-10T20:26:46.151633Z",
"shell.execute_reply": "2024-09-10T20:26:46.150690Z"
}
},
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"people=[]\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"people=[]\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"people=[]\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"people=[]\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"people=[Person(name='earth', hair_color='null', height_in_meters='null')]\n",
"people=[Person(name='earth', hair_color='null', height_in_meters='null')]\n",
"people=[]\n",
"people=[Person(name='earth', hair_color='null', height_in_meters='null')]\n",
"people=[]\n"
]
}
@@ -488,49 +401,18 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 11,
"id": "1c09d805-ec16-4123-aef9-6a5b59499b5c",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:26:46.155346Z",
"iopub.status.busy": "2024-09-10T20:26:46.155110Z",
"iopub.status.idle": "2024-09-10T20:26:51.810359Z",
"shell.execute_reply": "2024-09-10T20:26:51.809636Z"
}
},
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"people=[]\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"people=[]\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"people=[]\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"people=[]\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"people=[]\n",
"people=[]\n",
"people=[]\n",
"people=[]\n",
"people=[]\n"
]
}
@@ -553,16 +435,9 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 12,
"id": "a9b7a762-1b75-4f9f-b9d9-6732dd05802c",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:26:51.813309Z",
"iopub.status.busy": "2024-09-10T20:26:51.813150Z",
"iopub.status.idle": "2024-09-10T20:26:53.474153Z",
"shell.execute_reply": "2024-09-10T20:26:53.473522Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
@@ -570,7 +445,7 @@
"Data(people=[Person(name='Harrison', hair_color='black', height_in_meters=None)])"
]
},
"execution_count": 11,
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
@@ -601,7 +476,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.10.4"
}
},
"nbformat": 4,


@@ -23,56 +23,16 @@
"id": "57969139-ad0a-487e-97d8-cb30e2af9742",
"metadata": {},
"source": [
"## Setup\n",
"## Set up\n",
"\n",
"First we'll install the dependencies needed for this guide:"
"We need some example data! Let's download an article about [cars from wikipedia](https://en.wikipedia.org/wiki/Car) and load it as a LangChain [Document](https://python.langchain.com/v0.2/api_reference/core/documents/langchain_core.documents.base.Document.html)."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "a3b4d838-5be4-4207-8a4a-9ef5624c48f2",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:35:19.850767Z",
"iopub.status.busy": "2024-09-10T20:35:19.850427Z",
"iopub.status.idle": "2024-09-10T20:35:21.432233Z",
"shell.execute_reply": "2024-09-10T20:35:21.431606Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install -qU langchain-community lxml faiss-cpu langchain-openai"
]
},
{
"cell_type": "markdown",
"id": "ac000b03-33fc-414f-8f2c-3850df621a35",
"metadata": {},
"source": [
"Now we need some example data! Let's download an article about [cars from wikipedia](https://en.wikipedia.org/wiki/Car) and load it as a LangChain [Document](https://python.langchain.com/v0.2/api_reference/core/documents/langchain_core.documents.base.Document.html)."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "84460db2-36e1-4037-bfa6-2a11883c2ba5",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:35:21.434882Z",
"iopub.status.busy": "2024-09-10T20:35:21.434571Z",
"iopub.status.idle": "2024-09-10T20:35:22.214545Z",
"shell.execute_reply": "2024-09-10T20:35:22.214253Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"import re\n",
@@ -95,22 +55,15 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 2,
"id": "fcb6917b-123d-4630-a0ce-ed8b293d482d",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:35:22.216143Z",
"iopub.status.busy": "2024-09-10T20:35:22.216039Z",
"iopub.status.idle": "2024-09-10T20:35:22.218117Z",
"shell.execute_reply": "2024-09-10T20:35:22.217854Z"
}
},
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"80427\n"
"79174\n"
]
}
],
@@ -134,20 +87,13 @@
"cell_type": "code",
"execution_count": 4,
"id": "a3b288ed-87a6-4af0-aac8-20921dc370d4",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:35:22.219468Z",
"iopub.status.busy": "2024-09-10T20:35:22.219395Z",
"iopub.status.idle": "2024-09-10T20:35:22.340594Z",
"shell.execute_reply": "2024-09-10T20:35:22.340319Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"from typing import List, Optional\n",
"\n",
"from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
"from pydantic import BaseModel, Field\n",
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"class KeyDevelopment(BaseModel):\n",
@@ -210,14 +156,7 @@
"cell_type": "code",
"execution_count": 5,
"id": "109f4f05-d0ff-431d-93d9-8f5aa34979a6",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:35:22.342277Z",
"iopub.status.busy": "2024-09-10T20:35:22.342171Z",
"iopub.status.idle": "2024-09-10T20:35:22.532302Z",
"shell.execute_reply": "2024-09-10T20:35:22.532034Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"# | output: false\n",
@@ -232,14 +171,7 @@
"cell_type": "code",
"execution_count": 6,
"id": "aa4ae224-6d3d-4fe2-b210-7db19a9fe580",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:35:22.533795Z",
"iopub.status.busy": "2024-09-10T20:35:22.533708Z",
"iopub.status.idle": "2024-09-10T20:35:22.610573Z",
"shell.execute_reply": "2024-09-10T20:35:22.610307Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"extractor = prompt | llm.with_structured_output(\n",
@@ -262,14 +194,7 @@
"cell_type": "code",
"execution_count": 7,
"id": "27b8a373-14b3-45ea-8bf5-9749122ad927",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:35:22.612123Z",
"iopub.status.busy": "2024-09-10T20:35:22.612052Z",
"iopub.status.idle": "2024-09-10T20:35:22.753493Z",
"shell.execute_reply": "2024-09-10T20:35:22.753179Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"from langchain_text_splitters import TokenTextSplitter\n",
@@ -302,14 +227,7 @@
"cell_type": "code",
"execution_count": 8,
"id": "6ba766b5-8d6c-48e6-8d69-f391a66b65d2",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:35:22.755067Z",
"iopub.status.busy": "2024-09-10T20:35:22.754987Z",
"iopub.status.idle": "2024-09-10T20:35:36.691130Z",
"shell.execute_reply": "2024-09-10T20:35:36.690500Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"# Limit just to the first 3 chunks\n",
@@ -336,27 +254,21 @@
"cell_type": "code",
"execution_count": 9,
"id": "c3f77470-ce6c-477f-8957-650913218632",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:35:36.694799Z",
"iopub.status.busy": "2024-09-10T20:35:36.694458Z",
"iopub.status.idle": "2024-09-10T20:35:36.701416Z",
"shell.execute_reply": "2024-09-10T20:35:36.700993Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[KeyDevelopment(year=1769, description='Nicolas-Joseph Cugnot built the first full-scale, self-propelled mechanical vehicle, a steam-powered tricycle.', evidence='Nicolas-Joseph Cugnot is widely credited with building the first full-scale, self-propelled mechanical vehicle in about 1769; he created a steam-powered tricycle.'),\n",
" KeyDevelopment(year=1807, description=\"Nicéphore Niépce and his brother Claude created what was probably the world's first internal combustion engine.\", evidence=\"In 1807, Nicéphore Niépce and his brother Claude created what was probably the world's first internal combustion engine (which they called a Pyréolophore), but installed it in a boat on the river Saone in France.\"),\n",
" KeyDevelopment(year=1886, description='Carl Benz patented the Benz Patent-Motorwagen, marking the birth of the modern car.', evidence='In November 1881, French inventor Gustave Trouvé demonstrated a three-wheeled car powered by electricity at the International Exposition of Electricity. Although several other German engineers (including Gottlieb Daimler, Wilhelm Maybach, and Siegfried Marcus) were working on cars at about the same time, the year 1886 is regarded as the birth year of the modern car—a practical, marketable automobile for everyday use—when the German Carl Benz patented his Benz Patent-Motorwagen; he is generally acknowledged as the inventor of the car.'),\n",
" KeyDevelopment(year=1886, description='Carl Benz began promotion of his vehicle, marking the introduction of the first commercially available automobile.', evidence='Benz began promotion of the vehicle on 3 July 1886.'),\n",
" KeyDevelopment(year=1888, description=\"Bertha Benz undertook the first road trip by car to prove the road-worthiness of her husband's invention.\", evidence=\"In August 1888, Bertha Benz, the wife and business partner of Carl Benz, undertook the first road trip by car, to prove the road-worthiness of her husband's invention.\"),\n",
"[KeyDevelopment(year=1966, description='The Toyota Corolla began production, becoming the best-selling series of automobile in history.', evidence='The Toyota Corolla, which has been in production since 1966, is the best-selling series of automobile in history.'),\n",
" KeyDevelopment(year=1769, description='Nicolas-Joseph Cugnot built the first steam-powered road vehicle.', evidence='The French inventor Nicolas-Joseph Cugnot built the first steam-powered road vehicle in 1769.'),\n",
" KeyDevelopment(year=1808, description='François Isaac de Rivaz designed and constructed the first internal combustion-powered automobile.', evidence='the Swiss inventor François Isaac de Rivaz designed and constructed the first internal combustion-powered automobile in 1808.'),\n",
" KeyDevelopment(year=1886, description='Carl Benz patented his Benz Patent-Motorwagen, inventing the modern car.', evidence='The modern car—a practical, marketable automobile for everyday use—was invented in 1886, when the German inventor Carl Benz patented his Benz Patent-Motorwagen.'),\n",
" KeyDevelopment(year=1908, description='Ford Model T, one of the first cars affordable by the masses, began production.', evidence='One of the first cars affordable by the masses was the Ford Model T, begun in 1908, an American car manufactured by the Ford Motor Company.'),\n",
" KeyDevelopment(year=1888, description=\"Bertha Benz undertook the first road trip by car to prove the road-worthiness of her husband's invention.\", evidence=\"In August 1888, Bertha Benz, the wife of Carl Benz, undertook the first road trip by car, to prove the road-worthiness of her husband's invention.\"),\n",
" KeyDevelopment(year=1896, description='Benz designed and patented the first internal-combustion flat engine, called boxermotor.', evidence='In 1896, Benz designed and patented the first internal-combustion flat engine, called boxermotor.'),\n",
" KeyDevelopment(year=1897, description='The first motor car in central Europe and one of the first factory-made cars in the world, the Präsident automobil, was produced by Nesselsdorfer Wagenbau.', evidence='The first motor car in central Europe and one of the first factory-made cars in the world, was produced by Czech company Nesselsdorfer Wagenbau (later renamed to Tatra) in 1897, the Präsident automobil.'),\n",
" KeyDevelopment(year=1901, description='Ransom Olds started large-scale, production-line manufacturing of affordable cars at his Oldsmobile factory in Lansing, Michigan.', evidence='Large-scale, production-line manufacturing of affordable cars was started by Ransom Olds in 1901 at his Oldsmobile factory in Lansing, Michigan.'),\n",
" KeyDevelopment(year=1913, description=\"Henry Ford introduced the world's first moving assembly line for cars at the Highland Park Ford Plant.\", evidence=\"This concept was greatly expanded by Henry Ford, beginning in 1913 with the world's first moving assembly line for cars at the Highland Park Ford Plant.\")]"
" KeyDevelopment(year=1897, description='Nesselsdorfer Wagenbau produced the Präsident automobil, one of the first factory-made cars in the world.', evidence='The first motor car in central Europe and one of the first factory-made cars in the world, was produced by Czech company Nesselsdorfer Wagenbau (later renamed to Tatra) in 1897, the Präsident automobil.'),\n",
" KeyDevelopment(year=1890, description='Daimler Motoren Gesellschaft (DMG) was founded by Daimler and Maybach in Cannstatt.', evidence='Daimler and Maybach founded Daimler Motoren Gesellschaft (DMG) in Cannstatt in 1890.'),\n",
" KeyDevelopment(year=1891, description='Auguste Doriot and Louis Rigoulot completed the longest trip by a petrol-driven vehicle with a Daimler powered Peugeot Type 3.', evidence='In 1891, Auguste Doriot and his Peugeot colleague Louis Rigoulot completed the longest trip by a petrol-driven vehicle when their self-designed and built Daimler powered Peugeot Type 3 completed 2,100 kilometres (1,300 mi) from Valentigney to Paris and Brest and back again.')]"
]
},
"execution_count": 9,
@@ -403,14 +315,7 @@
"cell_type": "code",
"execution_count": 10,
"id": "aaf37c82-625b-4fa1-8e88-73303f08ac16",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:35:36.703897Z",
"iopub.status.busy": "2024-09-10T20:35:36.703718Z",
"iopub.status.idle": "2024-09-10T20:35:38.451523Z",
"shell.execute_reply": "2024-09-10T20:35:38.450925Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.vectorstores import FAISS\n",
@@ -439,14 +344,7 @@
"cell_type": "code",
"execution_count": 11,
"id": "47aad00b-7013-4f7f-a1b0-02ef269093bf",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:35:38.455094Z",
"iopub.status.busy": "2024-09-10T20:35:38.454851Z",
"iopub.status.idle": "2024-09-10T20:35:38.458315Z",
"shell.execute_reply": "2024-09-10T20:35:38.457940Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"rag_extractor = {\n",
@@ -458,14 +356,7 @@
"cell_type": "code",
"execution_count": 12,
"id": "68f2de01-0cd8-456e-a959-db236189d41b",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:35:38.460115Z",
"iopub.status.busy": "2024-09-10T20:35:38.459949Z",
"iopub.status.idle": "2024-09-10T20:35:43.195532Z",
"shell.execute_reply": "2024-09-10T20:35:43.194254Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"results = rag_extractor.invoke(\"Key developments associated with cars\")"
@@ -475,21 +366,15 @@
"cell_type": "code",
"execution_count": 13,
"id": "1788e2d6-77bb-417f-827c-eb96c035164e",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:35:43.200497Z",
"iopub.status.busy": "2024-09-10T20:35:43.200037Z",
"iopub.status.idle": "2024-09-10T20:35:43.206773Z",
"shell.execute_reply": "2024-09-10T20:35:43.205426Z"
}
},
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"year=2006 description='Car-sharing services in the US experienced double-digit growth in revenue and membership.' evidence='in the US, some car-sharing services have experienced double-digit growth in revenue and membership growth between 2006 and 2007.'\n",
"year=2020 description='56 million cars were manufactured worldwide, with China producing the most.' evidence='In 2020, there were 56 million cars manufactured worldwide, down from 67 million the previous year. The automotive industry in China produces by far the most (20 million in 2020).'\n"
"year=1869 description='Mary Ward became one of the first documented car fatalities in Parsonstown, Ireland.' evidence='Mary Ward became one of the first documented car fatalities in 1869 in Parsonstown, Ireland,'\n",
"year=1899 description=\"Henry Bliss one of the US's first pedestrian car casualties in New York City.\" evidence=\"Henry Bliss one of the US's first pedestrian car casualties in 1899 in New York City.\"\n",
"year=2030 description='All fossil fuel vehicles will be banned in Amsterdam.' evidence='all fossil fuel vehicles will be banned in Amsterdam from 2030.'\n"
]
}
],
@@ -531,7 +416,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.10.4"
}
},
"nbformat": 4,


@@ -27,16 +27,9 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 2,
"id": "25487939-8713-4ec7-b774-e4a761ac8298",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:35:44.442501Z",
"iopub.status.busy": "2024-09-10T20:35:44.442044Z",
"iopub.status.idle": "2024-09-10T20:35:44.872217Z",
"shell.execute_reply": "2024-09-10T20:35:44.871897Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"# | output: false\n",
@@ -69,23 +62,16 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 3,
"id": "497eb023-c043-443d-ac62-2d4ea85fe1b0",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:35:44.873979Z",
"iopub.status.busy": "2024-09-10T20:35:44.873840Z",
"iopub.status.idle": "2024-09-10T20:35:44.878966Z",
"shell.execute_reply": "2024-09-10T20:35:44.878718Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"from typing import List, Optional\n",
"\n",
"from langchain_core.output_parsers import PydanticOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from pydantic import BaseModel, Field, validator\n",
"from langchain_core.pydantic_v1 import BaseModel, Field, validator\n",
"\n",
"\n",
"class Person(BaseModel):\n",
@@ -128,16 +114,9 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 4,
"id": "20b99ffb-a114-49a9-a7be-154c525f8ada",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:35:44.880355Z",
"iopub.status.busy": "2024-09-10T20:35:44.880277Z",
"iopub.status.idle": "2024-09-10T20:35:44.881834Z",
"shell.execute_reply": "2024-09-10T20:35:44.881601Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"query = \"Anna is 23 years old and she is 6 feet tall\""
@@ -145,16 +124,9 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 5,
"id": "4f3a66ce-de19-4571-9e54-67504ae3fba7",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:35:44.883138Z",
"iopub.status.busy": "2024-09-10T20:35:44.883049Z",
"iopub.status.idle": "2024-09-10T20:35:44.885139Z",
"shell.execute_reply": "2024-09-10T20:35:44.884801Z"
}
},
"metadata": {},
"outputs": [
{
"name": "stdout",
@@ -168,7 +140,7 @@
"\n",
"Here is the output schema:\n",
"```\n",
"{\"$defs\": {\"Person\": {\"description\": \"Information about a person.\", \"properties\": {\"name\": {\"description\": \"The name of the person\", \"title\": \"Name\", \"type\": \"string\"}, \"height_in_meters\": {\"description\": \"The height of the person expressed in meters.\", \"title\": \"Height In Meters\", \"type\": \"number\"}}, \"required\": [\"name\", \"height_in_meters\"], \"title\": \"Person\", \"type\": \"object\"}}, \"description\": \"Identifying information about all people in a text.\", \"properties\": {\"people\": {\"items\": {\"$ref\": \"#/$defs/Person\"}, \"title\": \"People\", \"type\": \"array\"}}, \"required\": [\"people\"]}\n",
"{\"description\": \"Identifying information about all people in a text.\", \"properties\": {\"people\": {\"title\": \"People\", \"type\": \"array\", \"items\": {\"$ref\": \"#/definitions/Person\"}}}, \"required\": [\"people\"], \"definitions\": {\"Person\": {\"title\": \"Person\", \"description\": \"Information about a person.\", \"type\": \"object\", \"properties\": {\"name\": {\"title\": \"Name\", \"description\": \"The name of the person\", \"type\": \"string\"}, \"height_in_meters\": {\"title\": \"Height In Meters\", \"description\": \"The height of the person expressed in meters.\", \"type\": \"number\"}}, \"required\": [\"name\", \"height_in_meters\"]}}}\n",
"```\n",
"Human: Anna is 23 years old and she is 6 feet tall\n"
]
@@ -188,16 +160,9 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 6,
"id": "7e0041eb-37dc-4384-9fe3-6dd8c356371e",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:35:44.886765Z",
"iopub.status.busy": "2024-09-10T20:35:44.886675Z",
"iopub.status.idle": "2024-09-10T20:35:46.835960Z",
"shell.execute_reply": "2024-09-10T20:35:46.835282Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
@@ -205,7 +170,7 @@
"People(people=[Person(name='Anna', height_in_meters=1.83)])"
]
},
"execution_count": 5,
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
@@ -244,16 +209,9 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 7,
"id": "b1f11912-c1bb-4a2a-a482-79bf3996961f",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:35:46.839577Z",
"iopub.status.busy": "2024-09-10T20:35:46.839233Z",
"iopub.status.idle": "2024-09-10T20:35:46.849663Z",
"shell.execute_reply": "2024-09-10T20:35:46.849177Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"import json\n",
@@ -263,7 +221,7 @@
"from langchain_anthropic.chat_models import ChatAnthropic\n",
"from langchain_core.messages import AIMessage\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from pydantic import BaseModel, Field, validator\n",
"from langchain_core.pydantic_v1 import BaseModel, Field, validator\n",
"\n",
"\n",
"class Person(BaseModel):\n",
@@ -321,23 +279,16 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 8,
"id": "9260d5e8-3b6c-4639-9f3b-fb2f90239e4b",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:35:46.851870Z",
"iopub.status.busy": "2024-09-10T20:35:46.851698Z",
"iopub.status.idle": "2024-09-10T20:35:46.854786Z",
"shell.execute_reply": "2024-09-10T20:35:46.854424Z"
}
},
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"System: Answer the user query. Output your answer as JSON that matches the given schema: ```json\n",
"{'$defs': {'Person': {'description': 'Information about a person.', 'properties': {'name': {'description': 'The name of the person', 'title': 'Name', 'type': 'string'}, 'height_in_meters': {'description': 'The height of the person expressed in meters.', 'title': 'Height In Meters', 'type': 'number'}}, 'required': ['name', 'height_in_meters'], 'title': 'Person', 'type': 'object'}}, 'description': 'Identifying information about all people in a text.', 'properties': {'people': {'items': {'$ref': '#/$defs/Person'}, 'title': 'People', 'type': 'array'}}, 'required': ['people'], 'title': 'People', 'type': 'object'}\n",
"{'title': 'People', 'description': 'Identifying information about all people in a text.', 'type': 'object', 'properties': {'people': {'title': 'People', 'type': 'array', 'items': {'$ref': '#/definitions/Person'}}}, 'required': ['people'], 'definitions': {'Person': {'title': 'Person', 'description': 'Information about a person.', 'type': 'object', 'properties': {'name': {'title': 'Name', 'description': 'The name of the person', 'type': 'string'}, 'height_in_meters': {'title': 'Height In Meters', 'description': 'The height of the person expressed in meters.', 'type': 'number'}}, 'required': ['name', 'height_in_meters']}}}\n",
"```. Make sure to wrap the answer in ```json and ``` tags\n",
"Human: Anna is 23 years old and she is 6 feet tall\n"
]
@@ -350,32 +301,17 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 9,
"id": "c523301d-ae0e-45e3-b195-7fd28c67a5c4",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-10T20:35:46.856945Z",
"iopub.status.busy": "2024-09-10T20:35:46.856769Z",
"iopub.status.idle": "2024-09-10T20:35:48.373728Z",
"shell.execute_reply": "2024-09-10T20:35:48.373079Z"
}
},
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/Users/bagatur/langchain/.venv/lib/python3.11/site-packages/pydantic/_internal/_fields.py:201: UserWarning: Field name \"schema\" in \"PromptInput\" shadows an attribute in parent \"BaseModel\"\n",
" warnings.warn(\n"
]
},
{
"data": {
"text/plain": [
"[{'people': [{'name': 'Anna', 'height_in_meters': 1.83}]}]"
]
},
"execution_count": 8,
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
@@ -413,7 +349,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.10.4"
}
},
"nbformat": 4,


@@ -90,7 +90,7 @@
"outputs": [],
"source": [
"# Note that we set max_retries = 0 to avoid retrying on RateLimits, etc\n",
"openai_llm = ChatOpenAI(model=\"gpt-4o-mini\", max_retries=0)\n",
"openai_llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", max_retries=0)\n",
"anthropic_llm = ChatAnthropic(model=\"claude-3-haiku-20240307\")\n",
"llm = openai_llm.with_fallbacks([anthropic_llm])"
]


@@ -66,8 +66,7 @@
"import os\n",
"from getpass import getpass\n",
"\n",
"if \"OPENAI_API_KEY\" not in os.environ:\n",
" os.environ[\"OPENAI_API_KEY\"] = getpass()"
"os.environ[\"OPENAI_API_KEY\"] = getpass()"
]
},
{
@@ -87,7 +86,7 @@
{
"data": {
"text/plain": [
"AIMessage(content='The expression \"2 🦜 9\" is not a standard mathematical operation or equation. It appears to be a combination of the number 2 and the parrot emoji 🦜 followed by the number 9. It does not have a specific mathematical meaning.', response_metadata={'token_usage': {'completion_tokens': 54, 'prompt_tokens': 17, 'total_tokens': 71}, 'model_name': 'gpt-4o-mini', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-aad12dda-5c47-4a1e-9949-6fe94e03242a-0', usage_metadata={'input_tokens': 17, 'output_tokens': 54, 'total_tokens': 71})"
"AIMessage(content='The expression \"2 🦜 9\" is not a standard mathematical operation or equation. It appears to be a combination of the number 2 and the parrot emoji 🦜 followed by the number 9. It does not have a specific mathematical meaning.', response_metadata={'token_usage': {'completion_tokens': 54, 'prompt_tokens': 17, 'total_tokens': 71}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-aad12dda-5c47-4a1e-9949-6fe94e03242a-0', usage_metadata={'input_tokens': 17, 'output_tokens': 54, 'total_tokens': 71})"
]
},
"execution_count": 4,
@@ -98,7 +97,7 @@
"source": [
"from langchain_openai import ChatOpenAI\n",
"\n",
"model = ChatOpenAI(model=\"gpt-4o-mini\", temperature=0.0)\n",
"model = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0.0)\n",
"\n",
"model.invoke(\"What is 2 🦜 9?\")"
]
@@ -213,7 +212,7 @@
{
"data": {
"text/plain": [
"AIMessage(content='11', response_metadata={'token_usage': {'completion_tokens': 1, 'prompt_tokens': 60, 'total_tokens': 61}, 'model_name': 'gpt-4o-mini', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-5ec4e051-262f-408e-ad00-3f2ebeb561c3-0', usage_metadata={'input_tokens': 60, 'output_tokens': 1, 'total_tokens': 61})"
"AIMessage(content='11', response_metadata={'token_usage': {'completion_tokens': 1, 'prompt_tokens': 60, 'total_tokens': 61}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-5ec4e051-262f-408e-ad00-3f2ebeb561c3-0', usage_metadata={'input_tokens': 60, 'output_tokens': 1, 'total_tokens': 61})"
]
},
"execution_count": 8,
@@ -419,7 +418,7 @@
{
"data": {
"text/plain": [
"AIMessage(content='6', response_metadata={'token_usage': {'completion_tokens': 1, 'prompt_tokens': 60, 'total_tokens': 61}, 'model_name': 'gpt-4o-mini', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-d1863e5e-17cd-4e9d-bf7a-b9f118747a65-0', usage_metadata={'input_tokens': 60, 'output_tokens': 1, 'total_tokens': 61})"
"AIMessage(content='6', response_metadata={'token_usage': {'completion_tokens': 1, 'prompt_tokens': 60, 'total_tokens': 61}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-d1863e5e-17cd-4e9d-bf7a-b9f118747a65-0', usage_metadata={'input_tokens': 60, 'output_tokens': 1, 'total_tokens': 61})"
]
},
"execution_count": 13,
@@ -428,7 +427,7 @@
}
],
"source": [
"chain = final_prompt | ChatOpenAI(model=\"gpt-4o-mini\", temperature=0.0)\n",
"chain = final_prompt | ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0.0)\n",
"\n",
"chain.invoke({\"input\": \"What's 3 🦜 3?\"})"
]


@@ -136,7 +136,7 @@
"metadata": {},
"outputs": [],
"source": [
"from pydantic import BaseModel, Field\n",
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"# Note that the docstrings here are crucial, as they will be passed along\n",
@@ -191,7 +191,7 @@
"\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-4o-mini\", temperature=0)"
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)"
]
},
{
@@ -696,7 +696,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.9.1"
}
},
"nbformat": 4,


@@ -54,8 +54,7 @@
"import os\n",
"from getpass import getpass\n",
"\n",
"if \"OPENAI_API_KEY\" not in os.environ:\n",
" os.environ[\"OPENAI_API_KEY\"] = getpass()"
"os.environ[\"OPENAI_API_KEY\"] = getpass()"
]
},
{


@@ -163,8 +163,8 @@
"from typing import List, Optional\n",
"\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"from langchain_openai import ChatOpenAI\n",
"from pydantic import BaseModel, Field\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo\", temperature=0)\n",
"\n",


@@ -177,15 +177,14 @@
"source": [
"from typing import Optional, Type\n",
"\n",
"# Import things that are needed generically\n",
"from langchain.pydantic_v1 import BaseModel, Field\n",
"from langchain_core.callbacks import (\n",
" AsyncCallbackManagerForToolRun,\n",
" CallbackManagerForToolRun,\n",
")\n",
"from langchain_core.tools import BaseTool\n",
"\n",
"# Import things that are needed generically\n",
"from pydantic import BaseModel, Field\n",
"\n",
"description_query = \"\"\"\n",
"MATCH (m:Movie|Person)\n",
"WHERE m.title CONTAINS $candidate OR m.name CONTAINS $candidate\n",
@@ -227,15 +226,14 @@
"source": [
"from typing import Optional, Type\n",
"\n",
"# Import things that are needed generically\n",
"from langchain.pydantic_v1 import BaseModel, Field\n",
"from langchain_core.callbacks import (\n",
" AsyncCallbackManagerForToolRun,\n",
" CallbackManagerForToolRun,\n",
")\n",
"from langchain_core.tools import BaseTool\n",
"\n",
"# Import things that are needed generically\n",
"from pydantic import BaseModel, Field\n",
"\n",
"\n",
"class InformationInput(BaseModel):\n",
" entity: str = Field(description=\"movie or a person mentioned in the question\")\n",


@@ -25,8 +25,7 @@
"import os\n",
"from getpass import getpass\n",
"\n",
"if \"OPENAI_API_KEY\" not in os.environ:\n",
" os.environ[\"OPENAI_API_KEY\"] = getpass()\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass()\n",
"# Please manually enter OpenAI Key"
]
},


@@ -94,7 +94,7 @@
"source": [
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-4o-mini\").bind(logprobs=True)\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\").bind(logprobs=True)\n",
"\n",
"msg = llm.invoke((\"human\", \"how are you today\"))\n",
"\n",


@@ -11,30 +11,12 @@
"\n",
"The `merge_message_runs` utility makes it easy to merge consecutive messages of the same type.\n",
"\n",
"### Setup"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "198ce37f-4466-45a2-8878-d75cd01a5d23",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-core langchain-anthropic"
]
},
{
"cell_type": "markdown",
"id": "b5c3ca6e-e5b3-4151-8307-9101713a20ae",
"metadata": {},
"source": [
"## Basic usage"
]
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 1,
"id": "1a215bbb-c05c-40b0-a6fd-d94884d517df",
"metadata": {},
"outputs": [
@@ -42,11 +24,11 @@
"name": "stdout",
"output_type": "stream",
"text": [
"SystemMessage(content=\"you're a good assistant.\\nyou always respond with a joke.\", additional_kwargs={}, response_metadata={})\n",
"SystemMessage(content=\"you're a good assistant.\\nyou always respond with a joke.\")\n",
"\n",
"HumanMessage(content=[{'type': 'text', 'text': \"i wonder why it's called langchain\"}, 'and who is harrison chasing anyways'], additional_kwargs={}, response_metadata={})\n",
"HumanMessage(content=[{'type': 'text', 'text': \"i wonder why it's called langchain\"}, 'and who is harrison chasing anyways'])\n",
"\n",
"AIMessage(content='Well, I guess they thought \"WordRope\" and \"SentenceString\" just didn\\'t have the same ring to it!\\nWhy, he\\'s probably chasing after the last cup of coffee in the office!', additional_kwargs={}, response_metadata={})\n"
"AIMessage(content='Well, I guess they thought \"WordRope\" and \"SentenceString\" just didn\\'t have the same ring to it!\\nWhy, he\\'s probably chasing after the last cup of coffee in the office!')\n"
]
}
],
@@ -81,6 +63,38 @@
"Notice that if the contents of one of the messages to merge is a list of content blocks then the merged message will have a list of content blocks. And if both messages to merge have string contents then those are concatenated with a newline character."
]
},
{
"cell_type": "markdown",
"id": "11f7e8d3",
"metadata": {},
"source": [
"The `merge_message_runs` utility also works with messages composed together using the overloaded `+` operation:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b51855c5",
"metadata": {},
"outputs": [],
"source": [
"messages = (\n",
" SystemMessage(\"you're a good assistant.\")\n",
" + SystemMessage(\"you always respond with a joke.\")\n",
" + HumanMessage([{\"type\": \"text\", \"text\": \"i wonder why it's called langchain\"}])\n",
" + HumanMessage(\"and who is harrison chasing anyways\")\n",
" + AIMessage(\n",
" 'Well, I guess they thought \"WordRope\" and \"SentenceString\" just didn\\'t have the same ring to it!'\n",
" )\n",
" + AIMessage(\n",
" \"Why, he's probably chasing after the last cup of coffee in the office!\"\n",
" )\n",
")\n",
"\n",
"merged = merge_message_runs(messages)\n",
"print(\"\\n\\n\".join([repr(x) for x in merged]))"
]
},
{
"cell_type": "markdown",
"id": "1b2eee74-71c8-4168-b968-bca580c25d18",
@@ -93,30 +107,23 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 3,
"id": "6d5a0283-11f8-435b-b27b-7b18f7693592",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
},
{
"data": {
"text/plain": [
"AIMessage(content=[], additional_kwargs={}, response_metadata={'id': 'msg_01KNGUMTuzBVfwNouLDpUMwf', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 84, 'output_tokens': 3}}, id='run-b908b198-9c24-450b-9749-9d4a8182937b-0', usage_metadata={'input_tokens': 84, 'output_tokens': 3, 'total_tokens': 87})"
"AIMessage(content=[], response_metadata={'id': 'msg_01D6R8Naum57q8qBau9vLBUX', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 84, 'output_tokens': 3}}, id='run-ac0c465b-b54f-4b8b-9295-e5951250d653-0', usage_metadata={'input_tokens': 84, 'output_tokens': 3, 'total_tokens': 87})"
]
},
"execution_count": 9,
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"%pip install -qU langchain-anthropic\n",
"# pip install -U langchain-anthropic\n",
"from langchain_anthropic import ChatAnthropic\n",
"\n",
"llm = ChatAnthropic(model=\"claude-3-sonnet-20240229\", temperature=0)\n",
@@ -139,19 +146,19 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 4,
"id": "460817a6-c327-429d-958e-181a8c46059c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[SystemMessage(content=\"you're a good assistant.\\nyou always respond with a joke.\", additional_kwargs={}, response_metadata={}),\n",
" HumanMessage(content=[{'type': 'text', 'text': \"i wonder why it's called langchain\"}, 'and who is harrison chasing anyways'], additional_kwargs={}, response_metadata={}),\n",
" AIMessage(content='Well, I guess they thought \"WordRope\" and \"SentenceString\" just didn\\'t have the same ring to it!\\nWhy, he\\'s probably chasing after the last cup of coffee in the office!', additional_kwargs={}, response_metadata={})]"
"[SystemMessage(content=\"you're a good assistant.\\nyou always respond with a joke.\"),\n",
" HumanMessage(content=[{'type': 'text', 'text': \"i wonder why it's called langchain\"}, 'and who is harrison chasing anyways']),\n",
" AIMessage(content='Well, I guess they thought \"WordRope\" and \"SentenceString\" just didn\\'t have the same ring to it!\\nWhy, he\\'s probably chasing after the last cup of coffee in the office!')]"
]
},
"execution_count": 10,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
@@ -160,53 +167,6 @@
"merger.invoke(messages)"
]
},
{
"cell_type": "markdown",
"id": "4178837d-b155-492d-9404-d567accc1fa0",
"metadata": {},
"source": [
"`merge_message_runs` can also be placed after a prompt:"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "620530ab-ed05-4899-b984-bfa4cd738465",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='A convergent series is an infinite series whose partial sums approach a finite value as more terms are added. In other words, the sequence of partial sums has a limit.\\n\\nMore formally, an infinite series Σ an (where an are the terms of the series) is said to be convergent if the sequence of partial sums:\\n\\nS1 = a1\\nS2 = a1 + a2 \\nS3 = a1 + a2 + a3\\n...\\nSn = a1 + a2 + a3 + ... + an\\n...\\n\\nconverges to some finite number S as n goes to infinity. We write:\\n\\nlim n→∞ Sn = S\\n\\nThe finite number S is called the sum of the convergent infinite series.\\n\\nIf the sequence of partial sums does not approach any finite limit, the infinite series is said to be divergent.\\n\\nSome key properties:\\n- A series converges if and only if the sequence of its partial sums is a Cauchy sequence.\\n- Absolute/conditional convergence criteria help determine if a given series converges.\\n- Convergent series have many important applications in mathematics, physics, engineering etc.', additional_kwargs={}, response_metadata={'id': 'msg_01MfV6y2hep7ZNvDz24A36U4', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 29, 'output_tokens': 267}}, id='run-9d925f58-021e-4bd0-94fc-f8f5e91010a4-0', usage_metadata={'input_tokens': 29, 'output_tokens': 267, 'total_tokens': 296})"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate(\n",
" [\n",
" (\"system\", \"You're great a {skill}\"),\n",
" (\"system\", \"You're also great at explaining things\"),\n",
" (\"human\", \"{query}\"),\n",
" ]\n",
")\n",
"chain = prompt | merger | llm\n",
"chain.invoke({\"skill\": \"math\", \"query\": \"what's the definition of a convergent series\"})"
]
},
{
"cell_type": "markdown",
"id": "51ba533a-43c7-4e5f-bd91-a4ec23ceeb34",
"metadata": {},
"source": [
"LangSmith Trace: https://smith.langchain.com/public/432150b6-9909-40a7-8ae7-944b7e657438/r/f4ad5fb2-4d38-42a6-b780-25f62617d53f"
]
},
{
"cell_type": "markdown",
"id": "4548d916-ce21-4dc6-8f19-eedb8003ace6",
@@ -234,7 +194,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.9.1"
}
},
"nbformat": 4,


@@ -65,11 +65,9 @@
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if \"OPENAI_API_KEY\" not in os.environ:\n",
" os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"OpenAI API key:\\n\")"
"os.environ[\"OPENAI_API_KEY\"] = \"sk-...\""
]
},
{
@@ -1327,7 +1325,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.11.2"
}
},
"nbformat": 4,


@@ -440,7 +440,7 @@
"source": [
"from typing import List\n",
"\n",
"from pydantic import BaseModel, Field\n",
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"class HypotheticalQuestions(BaseModel):\n",


@@ -24,8 +24,8 @@
"from typing import List\n",
"\n",
"from langchain_core.output_parsers import PydanticOutputParser\n",
"from langchain_openai import ChatOpenAI\n",
"from pydantic import BaseModel, Field"
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"from langchain_openai import ChatOpenAI"
]
},
{


@@ -47,8 +47,7 @@
"import os\n",
"from getpass import getpass\n",
"\n",
"if \"OPENAI_API_KEY\" not in os.environ:\n",
" os.environ[\"OPENAI_API_KEY\"] = getpass()"
"os.environ[\"OPENAI_API_KEY\"] = getpass()"
]
},
{
@@ -72,8 +71,8 @@
"source": [
"from langchain_core.output_parsers import JsonOutputParser\n",
"from langchain_core.prompts import PromptTemplate\n",
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"from langchain_openai import ChatOpenAI\n",
"from pydantic import BaseModel, Field\n",
"\n",
"model = ChatOpenAI(temperature=0)\n",
"\n",


@@ -20,8 +20,8 @@
"from langchain.output_parsers import OutputFixingParser\n",
"from langchain_core.output_parsers import PydanticOutputParser\n",
"from langchain_core.prompts import PromptTemplate\n",
"from langchain_openai import ChatOpenAI, OpenAI\n",
"from pydantic import BaseModel, Field"
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"from langchain_openai import ChatOpenAI, OpenAI"
]
},
{


@@ -35,17 +35,17 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 6,
"id": "1594b2bf-2a6f-47bb-9a81-38930f8e606b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Joke(setup='Why did the tomato turn red?', punchline='Because it saw the salad dressing!')"
"Joke(setup='Why did the chicken cross the road?', punchline='To get to the other side!')"
]
},
"execution_count": 1,
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
@@ -53,8 +53,8 @@
"source": [
"from langchain_core.output_parsers import PydanticOutputParser\n",
"from langchain_core.prompts import PromptTemplate\n",
"from langchain_core.pydantic_v1 import BaseModel, Field, validator\n",
"from langchain_openai import OpenAI\n",
"from pydantic import BaseModel, Field, model_validator\n",
"\n",
"model = OpenAI(model_name=\"gpt-3.5-turbo-instruct\", temperature=0.0)\n",
"\n",
@@ -65,13 +65,11 @@
" punchline: str = Field(description=\"answer to resolve the joke\")\n",
"\n",
" # You can add custom validation logic easily with Pydantic.\n",
" @model_validator(mode=\"before\")\n",
" @classmethod\n",
" def question_ends_with_question_mark(cls, values: dict) -> dict:\n",
" setup = values[\"setup\"]\n",
" if setup[-1] != \"?\":\n",
" @validator(\"setup\")\n",
" def question_ends_with_question_mark(cls, field):\n",
" if field[-1] != \"?\":\n",
" raise ValueError(\"Badly formed question!\")\n",
" return values\n",
" return field\n",
"\n",
"\n",
"# Set up a parser + inject instructions into the prompt template.\n",
@@ -241,9 +239,9 @@
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv-311",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "poetry-venv-311"
"name": "python3"
},
"language_info": {
"codemirror_mode": {
@@ -255,7 +253,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.10.1"
}
},
"nbformat": 4,


@@ -41,8 +41,7 @@
"import os\n",
"from getpass import getpass\n",
"\n",
"if \"ANTHROPIC_API_KEY\" not in os.environ:\n",
" os.environ[\"ANTHROPIC_API_KEY\"] = getpass()"
"os.environ[\"ANTHROPIC_API_KEY\"] = getpass()"
]
},
{


@@ -39,8 +39,7 @@
"import os\n",
"from getpass import getpass\n",
"\n",
"if \"OPENAI_API_KEY\" not in os.environ:\n",
" os.environ[\"OPENAI_API_KEY\"] = getpass()"
"os.environ[\"OPENAI_API_KEY\"] = getpass()"
]
},
{
@@ -71,8 +70,8 @@
"source": [
"from langchain.output_parsers import YamlOutputParser\n",
"from langchain_core.prompts import PromptTemplate\n",
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"from langchain_openai import ChatOpenAI\n",
"from pydantic import BaseModel, Field\n",
"\n",
"\n",
"# Define your desired data structure.\n",

View File

@@ -60,8 +60,7 @@
"import os\n",
"from getpass import getpass\n",
"\n",
"if \"OPENAI_API_KEY\" not in os.environ:\n",
" os.environ[\"OPENAI_API_KEY\"] = getpass()"
"os.environ[\"OPENAI_API_KEY\"] = getpass()"
]
},
{

View File

@@ -46,8 +46,7 @@
"import os\n",
"from getpass import getpass\n",
"\n",
"if \"OPENAI_API_KEY\" not in os.environ:\n",
" os.environ[\"OPENAI_API_KEY\"] = getpass()"
"os.environ[\"OPENAI_API_KEY\"] = getpass()"
]
},
{

View File

@@ -269,7 +269,7 @@
"metadata": {},
"outputs": [],
"source": [
"from pydantic import BaseModel, Field\n",
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"class CitedAnswer(BaseModel):\n",

View File

@@ -24,16 +24,9 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 13,
"id": "8ca446a0",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:32:35.834087Z",
"iopub.status.busy": "2024-09-11T02:32:35.833763Z",
"iopub.status.idle": "2024-09-11T02:32:36.588973Z",
"shell.execute_reply": "2024-09-11T02:32:36.588677Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"from typing import Optional\n",
@@ -47,7 +40,7 @@
")\n",
"from langchain_community.query_constructors.chroma import ChromaTranslator\n",
"from langchain_community.query_constructors.elasticsearch import ElasticsearchTranslator\n",
"from pydantic import BaseModel"
"from langchain_core.pydantic_v1 import BaseModel"
]
},
{
@@ -60,16 +53,9 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 11,
"id": "64055006",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:32:36.590665Z",
"iopub.status.busy": "2024-09-11T02:32:36.590527Z",
"iopub.status.idle": "2024-09-11T02:32:36.592985Z",
"shell.execute_reply": "2024-09-11T02:32:36.592763Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"class Search(BaseModel):\n",
@@ -80,16 +66,9 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 12,
"id": "44eb6d98",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:32:36.594147Z",
"iopub.status.busy": "2024-09-11T02:32:36.594072Z",
"iopub.status.idle": "2024-09-11T02:32:36.595777Z",
"shell.execute_reply": "2024-09-11T02:32:36.595563Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"search_query = Search(query=\"RAG\", start_year=2022, author=\"LangChain\")"
@@ -97,16 +76,9 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 15,
"id": "e8ba6705",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:32:36.596902Z",
"iopub.status.busy": "2024-09-11T02:32:36.596824Z",
"iopub.status.idle": "2024-09-11T02:32:36.598805Z",
"shell.execute_reply": "2024-09-11T02:32:36.598629Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"def construct_comparisons(query: Search):\n",
@@ -132,16 +104,9 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 16,
"id": "6a79c9da",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:32:36.599989Z",
"iopub.status.busy": "2024-09-11T02:32:36.599909Z",
"iopub.status.idle": "2024-09-11T02:32:36.601521Z",
"shell.execute_reply": "2024-09-11T02:32:36.601306Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"comparisons = construct_comparisons(search_query)"
@@ -149,16 +114,9 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 17,
"id": "2d0e9689",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:32:36.602688Z",
"iopub.status.busy": "2024-09-11T02:32:36.602603Z",
"iopub.status.idle": "2024-09-11T02:32:36.604171Z",
"shell.execute_reply": "2024-09-11T02:32:36.603981Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"_filter = Operation(operator=Operator.AND, arguments=comparisons)"
@@ -166,16 +124,9 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 18,
"id": "e4c0b2ce",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:32:36.605267Z",
"iopub.status.busy": "2024-09-11T02:32:36.605190Z",
"iopub.status.idle": "2024-09-11T02:32:36.607993Z",
"shell.execute_reply": "2024-09-11T02:32:36.607796Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
@@ -184,7 +135,7 @@
" {'term': {'metadata.author.keyword': 'LangChain'}}]}}"
]
},
"execution_count": 7,
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
@@ -195,16 +146,9 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 19,
"id": "d75455ae",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:32:36.609091Z",
"iopub.status.busy": "2024-09-11T02:32:36.609012Z",
"iopub.status.idle": "2024-09-11T02:32:36.611075Z",
"shell.execute_reply": "2024-09-11T02:32:36.610869Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
@@ -212,7 +156,7 @@
"{'$and': [{'start_year': {'$gt': 2022}}, {'author': {'$eq': 'LangChain'}}]}"
]
},
"execution_count": 8,
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
@@ -238,7 +182,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.10.1"
}
},
"nbformat": 4,

View File
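The notebook above builds `Comparison`s from a `Search(query, start_year, author)` and hands them to `ElasticsearchTranslator`/`ChromaTranslator`. A dependency-free sketch of the Chroma-shaped result (field names from the diff, filter shape from its printed output):

```python
def construct_chroma_filter(start_year=None, author=None) -> dict:
    # Each populated field becomes one comparison -- GT for the year,
    # EQ for the author -- AND-ed together like the notebook's
    # Operation(operator=Operator.AND, arguments=comparisons).
    clauses = []
    if start_year is not None:
        clauses.append({"start_year": {"$gt": start_year}})
    if author is not None:
        clauses.append({"author": {"$eq": author}})
    return {"$and": clauses}
```

The Elasticsearch translator produces the analogous `{'bool': {'must': [...]}}` shape shown in the first output cell.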

@@ -35,14 +35,7 @@
"cell_type": "code",
"execution_count": 1,
"id": "e168ef5c-e54e-49a6-8552-5502854a6f01",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:33:48.329739Z",
"iopub.status.busy": "2024-09-11T02:33:48.329033Z",
"iopub.status.idle": "2024-09-11T02:33:48.334555Z",
"shell.execute_reply": "2024-09-11T02:33:48.334086Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"# %pip install -qU langchain-core langchain-openai"
@@ -60,23 +53,15 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 1,
"id": "40e2979e-a818-4b96-ac25-039336f94319",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:33:48.337140Z",
"iopub.status.busy": "2024-09-11T02:33:48.336958Z",
"iopub.status.idle": "2024-09-11T02:33:48.342671Z",
"shell.execute_reply": "2024-09-11T02:33:48.342281Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if \"OPENAI_API_KEY\" not in os.environ:\n",
" os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()\n",
"\n",
"# Optional, uncomment to trace runs with LangSmith. Sign up here: https://smith.langchain.com.\n",
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
@@ -95,21 +80,14 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 37,
"id": "0b51dd76-820d-41a4-98c8-893f6fe0d1ea",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:33:48.345004Z",
"iopub.status.busy": "2024-09-11T02:33:48.344838Z",
"iopub.status.idle": "2024-09-11T02:33:48.413166Z",
"shell.execute_reply": "2024-09-11T02:33:48.412908Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"from typing import List, Optional\n",
"\n",
"from pydantic import BaseModel, Field\n",
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"sub_queries_description = \"\"\"\\\n",
"If the original question contains multiple distinct sub-questions, \\\n",
@@ -143,16 +121,9 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 64,
"id": "783c03c3-8c72-4f88-9cf4-5829ce6745d6",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:33:48.414805Z",
"iopub.status.busy": "2024-09-11T02:33:48.414700Z",
"iopub.status.idle": "2024-09-11T02:33:49.023858Z",
"shell.execute_reply": "2024-09-11T02:33:49.023547Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
@@ -172,7 +143,7 @@
" (\"human\", \"{question}\"),\n",
" ]\n",
")\n",
"llm = ChatOpenAI(model=\"gpt-4o-mini\", temperature=0)\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)\n",
"structured_llm = llm.with_structured_output(Search)\n",
"query_analyzer = {\"question\": RunnablePassthrough()} | prompt | structured_llm"
]
@@ -187,24 +158,17 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 65,
"id": "0bcfce06-6f0c-4f9d-a1fc-dc29342d2aae",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:33:49.025536Z",
"iopub.status.busy": "2024-09-11T02:33:49.025437Z",
"iopub.status.idle": "2024-09-11T02:33:50.170550Z",
"shell.execute_reply": "2024-09-11T02:33:50.169835Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Search(query='difference between web voyager and reflection agents', sub_queries=['what is web voyager', 'what are reflection agents', 'do both web voyager and reflection agents use langgraph?'], publish_year=None)"
"Search(query='web voyager vs reflection agents', sub_queries=['difference between web voyager and reflection agents', 'do web voyager and reflection agents use langgraph'], publish_year=None)"
]
},
"execution_count": 5,
"execution_count": 65,
"metadata": {},
"output_type": "execute_result"
}
@@ -229,16 +193,9 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 53,
"id": "15b4923d-a08e-452d-8889-9a09a57d1095",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:33:50.180367Z",
"iopub.status.busy": "2024-09-11T02:33:50.173961Z",
"iopub.status.idle": "2024-09-11T02:33:50.186703Z",
"shell.execute_reply": "2024-09-11T02:33:50.186090Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"examples = []"
@@ -246,16 +203,9 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 54,
"id": "da5330e6-827a-40e5-982b-b23b6286b758",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:33:50.189822Z",
"iopub.status.busy": "2024-09-11T02:33:50.189617Z",
"iopub.status.idle": "2024-09-11T02:33:50.195116Z",
"shell.execute_reply": "2024-09-11T02:33:50.194617Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"question = \"What's chat langchain, is it a langchain template?\"\n",
@@ -268,16 +218,9 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 55,
"id": "580e857a-27df-4ecf-a19c-458dc9244ec8",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:33:50.198178Z",
"iopub.status.busy": "2024-09-11T02:33:50.198002Z",
"iopub.status.idle": "2024-09-11T02:33:50.204115Z",
"shell.execute_reply": "2024-09-11T02:33:50.202534Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"question = \"How to build multi-agent system and stream intermediate steps from it\"\n",
@@ -295,16 +238,9 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 56,
"id": "fa63310d-69e3-4701-825c-fbb01f8a5a16",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:33:50.207416Z",
"iopub.status.busy": "2024-09-11T02:33:50.207196Z",
"iopub.status.idle": "2024-09-11T02:33:50.212484Z",
"shell.execute_reply": "2024-09-11T02:33:50.211974Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"question = \"LangChain agents vs LangGraph?\"\n",
@@ -330,16 +266,9 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 57,
"id": "68b03709-9a60-4acf-b96c-cafe1056c6f3",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:33:50.215540Z",
"iopub.status.busy": "2024-09-11T02:33:50.215250Z",
"iopub.status.idle": "2024-09-11T02:33:50.224108Z",
"shell.execute_reply": "2024-09-11T02:33:50.223490Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"import uuid\n",
@@ -384,16 +313,9 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 58,
"id": "d9bf9f87-3e6b-4fc2-957b-949b077fab54",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:33:50.227215Z",
"iopub.status.busy": "2024-09-11T02:33:50.226993Z",
"iopub.status.idle": "2024-09-11T02:33:50.231333Z",
"shell.execute_reply": "2024-09-11T02:33:50.230742Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import MessagesPlaceholder\n",
@@ -407,24 +329,17 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 62,
"id": "e565ccb0-3530-4782-b56b-d1f6d0a8e559",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:33:50.233833Z",
"iopub.status.busy": "2024-09-11T02:33:50.233646Z",
"iopub.status.idle": "2024-09-11T02:33:51.318133Z",
"shell.execute_reply": "2024-09-11T02:33:51.317640Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Search(query=\"What's the difference between web voyager and reflection agents? Do both use langgraph?\", sub_queries=['What is web voyager', 'What are reflection agents', 'Do web voyager and reflection agents use langgraph?'], publish_year=None)"
"Search(query='Difference between web voyager and reflection agents, do they both use LangGraph?', sub_queries=['What is Web Voyager', 'What are Reflection agents', 'Do Web Voyager and Reflection agents use LangGraph'], publish_year=None)"
]
},
"execution_count": 12,
"execution_count": 62,
"metadata": {},
"output_type": "execute_result"
}
@@ -462,7 +377,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.10.1"
}
},
"nbformat": 4,

View File
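The notebook above decomposes a question into `sub_queries` and improves results by injecting worked tool-call examples through a `MessagesPlaceholder`. A rough stdlib sketch of turning one example into the human → AI-tool-call → tool-acknowledgement triple (the dict message shape and acknowledgement text are simplifications; the notebook uses LangChain message classes):

```python
import uuid

def example_to_messages(question: str, search_args: dict) -> list:
    # One few-shot example becomes three messages: the user question,
    # an AI message carrying the Search tool call, and a tool message
    # acknowledging it -- the pattern injected via
    # MessagesPlaceholder("examples") in the prompt.
    call_id = str(uuid.uuid4())
    return [
        {"role": "human", "content": question},
        {"role": "ai", "tool_calls": [
            {"id": call_id, "name": "Search", "args": search_args},
        ]},
        {"role": "tool", "tool_call_id": call_id,
         "content": "Tool call acknowledged."},
    ]
```

Linking the tool message back to the call via `tool_call_id` is what lets the model treat the pair as a completed example rather than a pending request.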

@@ -33,20 +33,12 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"id": "e168ef5c-e54e-49a6-8552-5502854a6f01",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"outputs": [],
"source": [
"%pip install -qU langchain langchain-community langchain-openai faker langchain-chroma"
"# %pip install -qU langchain langchain-community langchain-openai faker langchain-chroma"
]
},
{
@@ -61,23 +53,15 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 1,
"id": "40e2979e-a818-4b96-ac25-039336f94319",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:34:54.036110Z",
"iopub.status.busy": "2024-09-11T02:34:54.035829Z",
"iopub.status.idle": "2024-09-11T02:34:54.038746Z",
"shell.execute_reply": "2024-09-11T02:34:54.038430Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if \"OPENAI_API_KEY\" not in os.environ:\n",
" os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()\n",
"\n",
"# Optional, uncomment to trace runs with LangSmith. Sign up here: https://smith.langchain.com.\n",
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
@@ -96,16 +80,9 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 1,
"id": "e5ba65c2",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:34:54.040738Z",
"iopub.status.busy": "2024-09-11T02:34:54.040515Z",
"iopub.status.idle": "2024-09-11T02:34:54.622643Z",
"shell.execute_reply": "2024-09-11T02:34:54.622382Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"from faker import Faker\n",
@@ -125,24 +102,17 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 2,
"id": "c901ea97",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:34:54.624195Z",
"iopub.status.busy": "2024-09-11T02:34:54.624106Z",
"iopub.status.idle": "2024-09-11T02:34:54.627231Z",
"shell.execute_reply": "2024-09-11T02:34:54.626971Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Jacob Adams'"
"'Hayley Gonzalez'"
]
},
"execution_count": 4,
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
@@ -153,24 +123,17 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 3,
"id": "b0d42ae2",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:34:54.628545Z",
"iopub.status.busy": "2024-09-11T02:34:54.628460Z",
"iopub.status.idle": "2024-09-11T02:34:54.630474Z",
"shell.execute_reply": "2024-09-11T02:34:54.630282Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Eric Acevedo'"
"'Jesse Knight'"
]
},
"execution_count": 5,
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
@@ -191,33 +154,19 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 4,
"id": "0ae69afc",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:34:54.631758Z",
"iopub.status.busy": "2024-09-11T02:34:54.631678Z",
"iopub.status.idle": "2024-09-11T02:34:54.666448Z",
"shell.execute_reply": "2024-09-11T02:34:54.666216Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"from pydantic import BaseModel, Field, model_validator"
"from langchain_core.pydantic_v1 import BaseModel, Field"
]
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 5,
"id": "6c9485ce",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:34:54.667852Z",
"iopub.status.busy": "2024-09-11T02:34:54.667733Z",
"iopub.status.idle": "2024-09-11T02:34:54.700224Z",
"shell.execute_reply": "2024-09-11T02:34:54.700004Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"class Search(BaseModel):\n",
@@ -227,17 +176,19 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 6,
"id": "aebd704a",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:34:54.701556Z",
"iopub.status.busy": "2024-09-11T02:34:54.701465Z",
"iopub.status.idle": "2024-09-11T02:34:55.179986Z",
"shell.execute_reply": "2024-09-11T02:34:55.179640Z"
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/Users/harrisonchase/workplace/langchain/libs/core/langchain_core/_api/beta_decorator.py:86: LangChainBetaWarning: The function `with_structured_output` is in beta. It is actively being worked on, so the API may change.\n",
" warn_beta(\n"
]
}
},
"outputs": [],
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
@@ -250,7 +201,7 @@
" (\"human\", \"{question}\"),\n",
" ]\n",
")\n",
"llm = ChatOpenAI(model=\"gpt-4o-mini\", temperature=0)\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)\n",
"structured_llm = llm.with_structured_output(Search)\n",
"query_analyzer = {\"question\": RunnablePassthrough()} | prompt | structured_llm"
]
@@ -265,24 +216,17 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 33,
"id": "cc0d344b",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:34:55.181603Z",
"iopub.status.busy": "2024-09-11T02:34:55.181500Z",
"iopub.status.idle": "2024-09-11T02:34:55.778884Z",
"shell.execute_reply": "2024-09-11T02:34:55.778324Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Search(query='aliens', author='Jesse Knight')"
"Search(query='books about aliens', author='Jesse Knight')"
]
},
"execution_count": 9,
"execution_count": 33,
"metadata": {},
"output_type": "execute_result"
}
@@ -301,24 +245,17 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 34,
"id": "82b6b2ad",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:34:55.784266Z",
"iopub.status.busy": "2024-09-11T02:34:55.782603Z",
"iopub.status.idle": "2024-09-11T02:34:56.206779Z",
"shell.execute_reply": "2024-09-11T02:34:56.206068Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Search(query='aliens', author='Jess Knight')"
"Search(query='books about aliens', author='Jess Knight')"
]
},
"execution_count": 10,
"execution_count": 34,
"metadata": {},
"output_type": "execute_result"
}
@@ -339,16 +276,9 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 35,
"id": "98788a94",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:34:56.210043Z",
"iopub.status.busy": "2024-09-11T02:34:56.209657Z",
"iopub.status.idle": "2024-09-11T02:34:56.213962Z",
"shell.execute_reply": "2024-09-11T02:34:56.213413Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"system = \"\"\"Generate a relevant search query for a library system.\n",
@@ -369,16 +299,9 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 36,
"id": "e65412f5",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:34:56.216144Z",
"iopub.status.busy": "2024-09-11T02:34:56.216005Z",
"iopub.status.idle": "2024-09-11T02:34:56.218754Z",
"shell.execute_reply": "2024-09-11T02:34:56.218416Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"query_analyzer_all = {\"question\": RunnablePassthrough()} | prompt | structured_llm"
@@ -394,17 +317,18 @@
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 37,
"id": "696b000f",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:34:56.220827Z",
"iopub.status.busy": "2024-09-11T02:34:56.220680Z",
"iopub.status.idle": "2024-09-11T02:34:58.846872Z",
"shell.execute_reply": "2024-09-11T02:34:58.846273Z"
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Error code: 400 - {'error': {'message': \"This model's maximum context length is 16385 tokens. However, your messages resulted in 33885 tokens (33855 in the messages, 30 in the functions). Please reduce the length of the messages or functions.\", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}\n"
]
}
},
"outputs": [],
],
"source": [
"try:\n",
" res = query_analyzer_all.invoke(\"what are books about aliens by jess knight\")\n",
@@ -422,16 +346,9 @@
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": 38,
"id": "0f0d0757",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:34:58.850318Z",
"iopub.status.busy": "2024-09-11T02:34:58.850100Z",
"iopub.status.idle": "2024-09-11T02:34:58.873883Z",
"shell.execute_reply": "2024-09-11T02:34:58.873525Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"llm_long = ChatOpenAI(model=\"gpt-4-turbo-preview\", temperature=0)\n",
@@ -441,24 +358,17 @@
},
{
"cell_type": "code",
"execution_count": 15,
"execution_count": 39,
"id": "03e5b7b2",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:34:58.875940Z",
"iopub.status.busy": "2024-09-11T02:34:58.875811Z",
"iopub.status.idle": "2024-09-11T02:35:02.947273Z",
"shell.execute_reply": "2024-09-11T02:35:02.946220Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Search(query='aliens', author='jess knight')"
"Search(query='aliens', author='Kevin Knight')"
]
},
"execution_count": 15,
"execution_count": 39,
"metadata": {},
"output_type": "execute_result"
}
@@ -479,16 +389,9 @@
},
{
"cell_type": "code",
"execution_count": 16,
"execution_count": 25,
"id": "32b19e07",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:35:02.951939Z",
"iopub.status.busy": "2024-09-11T02:35:02.951583Z",
"iopub.status.idle": "2024-09-11T02:35:41.777839Z",
"shell.execute_reply": "2024-09-11T02:35:41.777392Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"from langchain_chroma import Chroma\n",
@@ -500,16 +403,9 @@
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": 51,
"id": "774cb7b0",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:35:41.780883Z",
"iopub.status.busy": "2024-09-11T02:35:41.780774Z",
"iopub.status.idle": "2024-09-11T02:35:41.782739Z",
"shell.execute_reply": "2024-09-11T02:35:41.782498Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"def select_names(question):\n",
@@ -520,16 +416,9 @@
},
{
"cell_type": "code",
"execution_count": 18,
"execution_count": 52,
"id": "1173159c",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:35:41.783992Z",
"iopub.status.busy": "2024-09-11T02:35:41.783913Z",
"iopub.status.idle": "2024-09-11T02:35:41.785911Z",
"shell.execute_reply": "2024-09-11T02:35:41.785632Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"create_prompt = {\n",
@@ -540,16 +429,9 @@
},
{
"cell_type": "code",
"execution_count": 19,
"execution_count": 53,
"id": "0a892607",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:35:41.787082Z",
"iopub.status.busy": "2024-09-11T02:35:41.787008Z",
"iopub.status.idle": "2024-09-11T02:35:41.788543Z",
"shell.execute_reply": "2024-09-11T02:35:41.788362Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"query_analyzer_select = create_prompt | structured_llm"
@@ -557,24 +439,17 @@
},
{
"cell_type": "code",
"execution_count": 20,
"execution_count": 54,
"id": "8195d7cd",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:35:41.789624Z",
"iopub.status.busy": "2024-09-11T02:35:41.789551Z",
"iopub.status.idle": "2024-09-11T02:35:42.099839Z",
"shell.execute_reply": "2024-09-11T02:35:42.099042Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ChatPromptValue(messages=[SystemMessage(content='Generate a relevant search query for a library system.\\n\\n`author` attribute MUST be one of:\\n\\nJennifer Knight, Jill Knight, John Knight, Dr. Jeffrey Knight, Christopher Knight, Andrea Knight, Brandy Knight, Jennifer Keller, Becky Chambers, Sarah Knapp\\n\\nDo NOT hallucinate author name!'), HumanMessage(content='what are books by jess knight')])"
"ChatPromptValue(messages=[SystemMessage(content='Generate a relevant search query for a library system.\\n\\n`author` attribute MUST be one of:\\n\\nJesse Knight, Kelly Knight, Scott Knight, Richard Knight, Andrew Knight, Katherine Knight, Erica Knight, Ashley Knight, Becky Knight, Kevin Knight\\n\\nDo NOT hallucinate author name!'), HumanMessage(content='what are books by jess knight')])"
]
},
"execution_count": 20,
"execution_count": 54,
"metadata": {},
"output_type": "execute_result"
}
@@ -585,24 +460,17 @@
},
{
"cell_type": "code",
"execution_count": 21,
"execution_count": 55,
"id": "d3228b4e",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:35:42.106571Z",
"iopub.status.busy": "2024-09-11T02:35:42.105861Z",
"iopub.status.idle": "2024-09-11T02:35:42.909738Z",
"shell.execute_reply": "2024-09-11T02:35:42.908875Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Search(query='books about aliens', author='Jennifer Knight')"
"Search(query='books about aliens', author='Jesse Knight')"
]
},
"execution_count": 21,
"execution_count": 55,
"metadata": {},
"output_type": "execute_result"
}
@@ -624,45 +492,28 @@
},
{
"cell_type": "code",
"execution_count": 22,
"execution_count": 47,
"id": "a2e8b434",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:35:42.915376Z",
"iopub.status.busy": "2024-09-11T02:35:42.914923Z",
"iopub.status.idle": "2024-09-11T02:35:42.923958Z",
"shell.execute_reply": "2024-09-11T02:35:42.922391Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.pydantic_v1 import validator\n",
"\n",
"\n",
"class Search(BaseModel):\n",
" query: str\n",
" author: str\n",
"\n",
" @model_validator(mode=\"before\")\n",
" @classmethod\n",
" def double(cls, values: dict) -> dict:\n",
" author = values[\"author\"]\n",
" closest_valid_author = vectorstore.similarity_search(author, k=1)[\n",
" 0\n",
" ].page_content\n",
" values[\"author\"] = closest_valid_author\n",
" return values"
" @validator(\"author\")\n",
" def double(cls, v: str) -> str:\n",
" return vectorstore.similarity_search(v, k=1)[0].page_content"
]
},
{
"cell_type": "code",
"execution_count": 23,
"execution_count": 48,
"id": "919c0601",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:35:42.927718Z",
"iopub.status.busy": "2024-09-11T02:35:42.927428Z",
"iopub.status.idle": "2024-09-11T02:35:42.933784Z",
"shell.execute_reply": "2024-09-11T02:35:42.933344Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"system = \"\"\"Generate a relevant search query for a library system\"\"\"\n",
@@ -680,24 +531,17 @@
},
{
"cell_type": "code",
"execution_count": 24,
"execution_count": 50,
"id": "6c4f3e9a",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:35:42.936506Z",
"iopub.status.busy": "2024-09-11T02:35:42.936186Z",
"iopub.status.idle": "2024-09-11T02:35:43.711754Z",
"shell.execute_reply": "2024-09-11T02:35:43.710695Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Search(query='aliens', author='John Knight')"
"Search(query='books about aliens', author='Jesse Knight')"
]
},
"execution_count": 24,
"execution_count": 50,
"metadata": {},
"output_type": "execute_result"
}
@@ -708,16 +552,9 @@
},
{
"cell_type": "code",
"execution_count": 25,
"execution_count": null,
"id": "a309cb11",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:35:43.717567Z",
"iopub.status.busy": "2024-09-11T02:35:43.717189Z",
"iopub.status.idle": "2024-09-11T02:35:43.722339Z",
"shell.execute_reply": "2024-09-11T02:35:43.720537Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"# TODO: show trigram similarity"
@@ -726,9 +563,9 @@
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv-311",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "poetry-venv-311"
"name": "python3"
},
"language_info": {
"codemirror_mode": {
@@ -740,7 +577,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.10.1"
}
},
"nbformat": 4,

View File
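The file above handles the high-cardinality `author` field by snapping the model's guess to the nearest valid name, either via `vectorstore.similarity_search(author, k=1)` inside a validator or by listing valid names in the prompt. `difflib` gives a dependency-free stand-in for that nearest-neighbour step (author names taken from the diff's printed output):

```python
import difflib

VALID_AUTHORS = ["Jesse Knight", "Kelly Knight", "Kevin Knight", "Erica Knight"]

def snap_author(name: str) -> str:
    # Stand-in for the notebook's similarity_search(author, k=1)[0]:
    # replace a possibly misspelled author with the closest valid one.
    # cutoff=0.0 guarantees a match is always returned.
    return difflib.get_close_matches(name, VALID_AUTHORS, n=1, cutoff=0.0)[0]
```

An embedding-based vectorstore also catches semantic near-misses that edit distance cannot, which is why the notebook reaches for it over string matching.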

@@ -33,25 +33,10 @@
"cell_type": "code",
"execution_count": 1,
"id": "e168ef5c-e54e-49a6-8552-5502854a6f01",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:41:53.160868Z",
"iopub.status.busy": "2024-09-11T02:41:53.160512Z",
"iopub.status.idle": "2024-09-11T02:41:57.605370Z",
"shell.execute_reply": "2024-09-11T02:41:57.604888Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain langchain-community langchain-openai langchain-chroma"
"# %pip install -qU langchain langchain-community langchain-openai langchain-chroma"
]
},
{
@@ -66,23 +51,15 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": null,
"id": "40e2979e-a818-4b96-ac25-039336f94319",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:41:57.607874Z",
"iopub.status.busy": "2024-09-11T02:41:57.607697Z",
"iopub.status.idle": "2024-09-11T02:41:57.610422Z",
"shell.execute_reply": "2024-09-11T02:41:57.610012Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if \"OPENAI_API_KEY\" not in os.environ:\n",
" os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()\n",
"\n",
"# Optional, uncomment to trace runs with LangSmith. Sign up here: https://smith.langchain.com.\n",
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
@@ -101,16 +78,9 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 1,
"id": "1f621694",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:41:57.612276Z",
"iopub.status.busy": "2024-09-11T02:41:57.612146Z",
"iopub.status.idle": "2024-09-11T02:41:59.074590Z",
"shell.execute_reply": "2024-09-11T02:41:59.074052Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"from langchain_chroma import Chroma\n",
@@ -138,21 +108,14 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 2,
"id": "0b51dd76-820d-41a4-98c8-893f6fe0d1ea",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:41:59.077712Z",
"iopub.status.busy": "2024-09-11T02:41:59.077514Z",
"iopub.status.idle": "2024-09-11T02:41:59.081509Z",
"shell.execute_reply": "2024-09-11T02:41:59.081112Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"from typing import List, Optional\n",
"\n",
"from pydantic import BaseModel, Field\n",
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"class Search(BaseModel):\n",
@@ -166,17 +129,19 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 3,
"id": "783c03c3-8c72-4f88-9cf4-5829ce6745d6",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:41:59.083613Z",
"iopub.status.busy": "2024-09-11T02:41:59.083492Z",
"iopub.status.idle": "2024-09-11T02:41:59.204636Z",
"shell.execute_reply": "2024-09-11T02:41:59.204377Z"
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/Users/harrisonchase/workplace/langchain/libs/core/langchain_core/_api/beta_decorator.py:86: LangChainBetaWarning: The function `with_structured_output` is in beta. It is actively being worked on, so the API may change.\n",
" warn_beta(\n"
]
}
},
"outputs": [],
],
"source": [
"from langchain_core.output_parsers.openai_tools import PydanticToolsParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
@@ -194,7 +159,7 @@
" (\"human\", \"{question}\"),\n",
" ]\n",
")\n",
"llm = ChatOpenAI(model=\"gpt-4o-mini\", temperature=0)\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)\n",
"structured_llm = llm.with_structured_output(Search)\n",
"query_analyzer = {\"question\": RunnablePassthrough()} | prompt | structured_llm"
]
@@ -209,24 +174,17 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 4,
"id": "bc1d3863",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:41:59.206178Z",
"iopub.status.busy": "2024-09-11T02:41:59.206101Z",
"iopub.status.idle": "2024-09-11T02:41:59.817758Z",
"shell.execute_reply": "2024-09-11T02:41:59.817310Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Search(queries=['Harrison Work', 'Harrison employment history'])"
"Search(queries=['Harrison work location'])"
]
},
"execution_count": 6,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
@@ -237,24 +195,17 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 5,
"id": "af62af17-4f90-4dbd-a8b4-dfff51f1db95",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:41:59.820168Z",
"iopub.status.busy": "2024-09-11T02:41:59.819990Z",
"iopub.status.idle": "2024-09-11T02:42:00.309034Z",
"shell.execute_reply": "2024-09-11T02:42:00.308578Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Search(queries=['Harrison work history', 'Ankush work history'])"
"Search(queries=['Harrison work place', 'Ankush work place'])"
]
},
"execution_count": 7,
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
@@ -275,16 +226,9 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 6,
"id": "1e047d87",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:42:00.311131Z",
"iopub.status.busy": "2024-09-11T02:42:00.310972Z",
"iopub.status.idle": "2024-09-11T02:42:00.313365Z",
"shell.execute_reply": "2024-09-11T02:42:00.313025Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.runnables import chain"
@@ -292,16 +236,9 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 31,
"id": "8dac7866",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:42:00.315138Z",
"iopub.status.busy": "2024-09-11T02:42:00.315016Z",
"iopub.status.idle": "2024-09-11T02:42:00.317427Z",
"shell.execute_reply": "2024-09-11T02:42:00.317088Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"@chain\n",
@@ -318,25 +255,17 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 33,
"id": "232ad8a7-7990-4066-9228-d35a555f7293",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:42:00.318951Z",
"iopub.status.busy": "2024-09-11T02:42:00.318829Z",
"iopub.status.idle": "2024-09-11T02:42:01.512855Z",
"shell.execute_reply": "2024-09-11T02:42:01.512321Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='Harrison worked at Kensho'),\n",
" Document(page_content='Harrison worked at Kensho')]"
"[Document(page_content='Harrison worked at Kensho')]"
]
},
"execution_count": 10,
"execution_count": 33,
"metadata": {},
"output_type": "execute_result"
}
@@ -347,16 +276,9 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 34,
"id": "28e14ba5",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:42:01.515743Z",
"iopub.status.busy": "2024-09-11T02:42:01.515400Z",
"iopub.status.idle": "2024-09-11T02:42:02.349930Z",
"shell.execute_reply": "2024-09-11T02:42:02.349382Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
@@ -365,7 +287,7 @@
" Document(page_content='Ankush worked at Facebook')]"
]
},
"execution_count": 11,
"execution_count": 34,
"metadata": {},
"output_type": "execute_result"
}
@@ -399,7 +321,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.10.1"
}
},
"nbformat": 4,


@@ -33,25 +33,10 @@
"cell_type": "code",
"execution_count": 1,
"id": "e168ef5c-e54e-49a6-8552-5502854a6f01",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:42:13.105266Z",
"iopub.status.busy": "2024-09-11T02:42:13.104556Z",
"iopub.status.idle": "2024-09-11T02:42:17.936922Z",
"shell.execute_reply": "2024-09-11T02:42:17.936478Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain langchain-community langchain-openai langchain-chroma"
"# %pip install -qU langchain langchain-community langchain-openai langchain-chroma"
]
},
{
@@ -66,23 +51,15 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": null,
"id": "40e2979e-a818-4b96-ac25-039336f94319",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:42:17.939072Z",
"iopub.status.busy": "2024-09-11T02:42:17.938929Z",
"iopub.status.idle": "2024-09-11T02:42:17.941266Z",
"shell.execute_reply": "2024-09-11T02:42:17.940968Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if \"OPENAI_API_KEY\" not in os.environ:\n",
" os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()\n",
"\n",
"# Optional, uncomment to trace runs with LangSmith. Sign up here: https://smith.langchain.com.\n",
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
@@ -101,16 +78,9 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 16,
"id": "1f621694",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:42:17.942794Z",
"iopub.status.busy": "2024-09-11T02:42:17.942674Z",
"iopub.status.idle": "2024-09-11T02:42:19.939459Z",
"shell.execute_reply": "2024-09-11T02:42:19.938842Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"from langchain_chroma import Chroma\n",
@@ -140,21 +110,14 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 17,
"id": "0b51dd76-820d-41a4-98c8-893f6fe0d1ea",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:42:19.942780Z",
"iopub.status.busy": "2024-09-11T02:42:19.942567Z",
"iopub.status.idle": "2024-09-11T02:42:19.947709Z",
"shell.execute_reply": "2024-09-11T02:42:19.947252Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"from typing import List, Optional\n",
"\n",
"from pydantic import BaseModel, Field\n",
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"class Search(BaseModel):\n",
@@ -172,16 +135,9 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 18,
"id": "783c03c3-8c72-4f88-9cf4-5829ce6745d6",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:42:19.949936Z",
"iopub.status.busy": "2024-09-11T02:42:19.949778Z",
"iopub.status.idle": "2024-09-11T02:42:20.073883Z",
"shell.execute_reply": "2024-09-11T02:42:20.073556Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers.openai_tools import PydanticToolsParser\n",
@@ -198,7 +154,7 @@
" (\"human\", \"{question}\"),\n",
" ]\n",
")\n",
"llm = ChatOpenAI(model=\"gpt-4o-mini\", temperature=0)\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)\n",
"structured_llm = llm.with_structured_output(Search)\n",
"query_analyzer = {\"question\": RunnablePassthrough()} | prompt | structured_llm"
]
@@ -213,24 +169,17 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 19,
"id": "bc1d3863",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:42:20.075511Z",
"iopub.status.busy": "2024-09-11T02:42:20.075428Z",
"iopub.status.idle": "2024-09-11T02:42:20.902011Z",
"shell.execute_reply": "2024-09-11T02:42:20.901558Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Search(query='work history', person='HARRISON')"
"Search(query='workplace', person='HARRISON')"
]
},
"execution_count": 6,
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
@@ -241,24 +190,17 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 20,
"id": "af62af17-4f90-4dbd-a8b4-dfff51f1db95",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:42:20.904384Z",
"iopub.status.busy": "2024-09-11T02:42:20.904195Z",
"iopub.status.idle": "2024-09-11T02:42:21.468172Z",
"shell.execute_reply": "2024-09-11T02:42:21.467639Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Search(query='work history', person='ANKUSH')"
"Search(query='workplace', person='ANKUSH')"
]
},
"execution_count": 7,
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
@@ -279,16 +221,9 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 21,
"id": "1e047d87",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:42:21.470953Z",
"iopub.status.busy": "2024-09-11T02:42:21.470736Z",
"iopub.status.idle": "2024-09-11T02:42:21.473544Z",
"shell.execute_reply": "2024-09-11T02:42:21.473064Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.runnables import chain"
@@ -296,16 +231,9 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 22,
"id": "4ec0c7fe",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:42:21.476024Z",
"iopub.status.busy": "2024-09-11T02:42:21.475835Z",
"iopub.status.idle": "2024-09-11T02:42:21.478359Z",
"shell.execute_reply": "2024-09-11T02:42:21.477932Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"retrievers = {\n",
@@ -316,16 +244,9 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 23,
"id": "8dac7866",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:42:21.480247Z",
"iopub.status.busy": "2024-09-11T02:42:21.480084Z",
"iopub.status.idle": "2024-09-11T02:42:21.482732Z",
"shell.execute_reply": "2024-09-11T02:42:21.482382Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"@chain\n",
@@ -337,16 +258,9 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 24,
"id": "232ad8a7-7990-4066-9228-d35a555f7293",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:42:21.484480Z",
"iopub.status.busy": "2024-09-11T02:42:21.484361Z",
"iopub.status.idle": "2024-09-11T02:42:22.136704Z",
"shell.execute_reply": "2024-09-11T02:42:22.136244Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
@@ -354,7 +268,7 @@
"[Document(page_content='Harrison worked at Kensho')]"
]
},
"execution_count": 11,
"execution_count": 24,
"metadata": {},
"output_type": "execute_result"
}
@@ -365,16 +279,9 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 25,
"id": "28e14ba5",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:42:22.139305Z",
"iopub.status.busy": "2024-09-11T02:42:22.139106Z",
"iopub.status.idle": "2024-09-11T02:42:23.479739Z",
"shell.execute_reply": "2024-09-11T02:42:23.479170Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
@@ -382,7 +289,7 @@
"[Document(page_content='Ankush worked at Facebook')]"
]
},
"execution_count": 12,
"execution_count": 25,
"metadata": {},
"output_type": "execute_result"
}
@@ -416,7 +323,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.10.1"
}
},
"nbformat": 4,


@@ -35,25 +35,10 @@
"cell_type": "code",
"execution_count": 1,
"id": "e168ef5c-e54e-49a6-8552-5502854a6f01",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:42:33.121714Z",
"iopub.status.busy": "2024-09-11T02:42:33.121392Z",
"iopub.status.idle": "2024-09-11T02:42:36.998607Z",
"shell.execute_reply": "2024-09-11T02:42:36.998126Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain langchain-community langchain-openai langchain-chroma"
"# %pip install -qU langchain langchain-community langchain-openai langchain-chroma"
]
},
{
@@ -68,23 +53,15 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": null,
"id": "40e2979e-a818-4b96-ac25-039336f94319",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:42:37.001017Z",
"iopub.status.busy": "2024-09-11T02:42:37.000859Z",
"iopub.status.idle": "2024-09-11T02:42:37.003704Z",
"shell.execute_reply": "2024-09-11T02:42:37.003335Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if \"OPENAI_API_KEY\" not in os.environ:\n",
" os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()\n",
"\n",
"# Optional, uncomment to trace runs with LangSmith. Sign up here: https://smith.langchain.com.\n",
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
@@ -103,16 +80,9 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 1,
"id": "1f621694",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:42:37.005644Z",
"iopub.status.busy": "2024-09-11T02:42:37.005493Z",
"iopub.status.idle": "2024-09-11T02:42:38.288481Z",
"shell.execute_reply": "2024-09-11T02:42:38.287904Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"from langchain_chroma import Chroma\n",
@@ -140,21 +110,14 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 2,
"id": "0b51dd76-820d-41a4-98c8-893f6fe0d1ea",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:42:38.291700Z",
"iopub.status.busy": "2024-09-11T02:42:38.291468Z",
"iopub.status.idle": "2024-09-11T02:42:38.295796Z",
"shell.execute_reply": "2024-09-11T02:42:38.295205Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"from typing import Optional\n",
"\n",
"from pydantic import BaseModel, Field\n",
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"class Search(BaseModel):\n",
@@ -168,16 +131,9 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 3,
"id": "783c03c3-8c72-4f88-9cf4-5829ce6745d6",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:42:38.297840Z",
"iopub.status.busy": "2024-09-11T02:42:38.297712Z",
"iopub.status.idle": "2024-09-11T02:42:38.420456Z",
"shell.execute_reply": "2024-09-11T02:42:38.420140Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
@@ -193,7 +149,7 @@
" (\"human\", \"{question}\"),\n",
" ]\n",
")\n",
"llm = ChatOpenAI(model=\"gpt-4o-mini\", temperature=0)\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)\n",
"structured_llm = llm.bind_tools([Search])\n",
"query_analyzer = {\"question\": RunnablePassthrough()} | prompt | structured_llm"
]
@@ -208,24 +164,17 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 4,
"id": "bc1d3863",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:42:38.421934Z",
"iopub.status.busy": "2024-09-11T02:42:38.421831Z",
"iopub.status.idle": "2024-09-11T02:42:39.048915Z",
"shell.execute_reply": "2024-09-11T02:42:39.048519Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_korLZrh08PTRL94f4L7rFqdj', 'function': {'arguments': '{\"query\":\"Harrison\"}', 'name': 'Search'}, 'type': 'function'}], 'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 14, 'prompt_tokens': 95, 'total_tokens': 109}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_483d39d857', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-ea94d376-37bf-4f80-abe6-e3b42b767ea0-0', tool_calls=[{'name': 'Search', 'args': {'query': 'Harrison'}, 'id': 'call_korLZrh08PTRL94f4L7rFqdj', 'type': 'tool_call'}], usage_metadata={'input_tokens': 95, 'output_tokens': 14, 'total_tokens': 109})"
"AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_ZnoVX4j9Mn8wgChaORyd1cvq', 'function': {'arguments': '{\"query\":\"Harrison\"}', 'name': 'Search'}, 'type': 'function'}]})"
]
},
"execution_count": 6,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
@@ -236,24 +185,17 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 5,
"id": "af62af17-4f90-4dbd-a8b4-dfff51f1db95",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:42:39.050923Z",
"iopub.status.busy": "2024-09-11T02:42:39.050785Z",
"iopub.status.idle": "2024-09-11T02:42:40.090421Z",
"shell.execute_reply": "2024-09-11T02:42:40.089454Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Hello! How can I assist you today?', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 10, 'prompt_tokens': 93, 'total_tokens': 103}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_483d39d857', 'finish_reason': 'stop', 'logprobs': None}, id='run-ebdfc44a-455a-4ca6-be85-84559886b1e1-0', usage_metadata={'input_tokens': 93, 'output_tokens': 10, 'total_tokens': 103})"
"AIMessage(content='Hello! How can I assist you today?')"
]
},
"execution_count": 7,
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
@@ -274,16 +216,9 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 6,
"id": "1e047d87",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:42:40.093716Z",
"iopub.status.busy": "2024-09-11T02:42:40.093472Z",
"iopub.status.idle": "2024-09-11T02:42:40.097732Z",
"shell.execute_reply": "2024-09-11T02:42:40.097274Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers.openai_tools import PydanticToolsParser\n",
@@ -294,16 +229,9 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 7,
"id": "8dac7866",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:42:40.100028Z",
"iopub.status.busy": "2024-09-11T02:42:40.099882Z",
"iopub.status.idle": "2024-09-11T02:42:40.103105Z",
"shell.execute_reply": "2024-09-11T02:42:40.102734Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"@chain\n",
@@ -320,16 +248,9 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 8,
"id": "232ad8a7-7990-4066-9228-d35a555f7293",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:42:40.105092Z",
"iopub.status.busy": "2024-09-11T02:42:40.104917Z",
"iopub.status.idle": "2024-09-11T02:42:41.341967Z",
"shell.execute_reply": "2024-09-11T02:42:41.341455Z"
}
},
"metadata": {},
"outputs": [
{
"name": "stderr",
@@ -344,7 +265,7 @@
"[Document(page_content='Harrison worked at Kensho')]"
]
},
"execution_count": 10,
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
@@ -355,24 +276,17 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 9,
"id": "28e14ba5",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:42:41.344639Z",
"iopub.status.busy": "2024-09-11T02:42:41.344411Z",
"iopub.status.idle": "2024-09-11T02:42:41.798332Z",
"shell.execute_reply": "2024-09-11T02:42:41.798054Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Hello! How can I assist you today?', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 10, 'prompt_tokens': 93, 'total_tokens': 103}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_483d39d857', 'finish_reason': 'stop', 'logprobs': None}, id='run-e87f058d-30c0-4075-8a89-a01b982d557e-0', usage_metadata={'input_tokens': 93, 'output_tokens': 10, 'total_tokens': 103})"
"AIMessage(content='Hello! How can I assist you today?')"
]
},
"execution_count": 11,
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
@@ -406,7 +320,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.10.1"
}
},
"nbformat": 4,


@@ -62,8 +62,7 @@
"\n",
"from langchain_anthropic import ChatAnthropic\n",
"\n",
"if \"ANTHROPIC_API_KEY\" not in os.environ:\n",
" os.environ[\"ANTHROPIC_API_KEY\"] = getpass()\n",
"os.environ[\"ANTHROPIC_API_KEY\"] = getpass()\n",
"\n",
"model = ChatAnthropic(model=\"claude-3-sonnet-20240229\", temperature=0)"
]


@@ -44,7 +44,7 @@
" ],\n",
")\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-4o-mini\", api_key=\"llm-api-key\")\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", api_key=\"llm-api-key\")\n",
"\n",
"chain = prompt | llm"
]


@@ -136,7 +136,7 @@
"source": [
"from langchain_core.output_parsers.openai_tools import PydanticToolsParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from pydantic import BaseModel, Field\n",
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"class Table(BaseModel):\n",


@@ -101,7 +101,7 @@
"source": [
"from typing import Optional\n",
"\n",
"from pydantic import BaseModel, Field\n",
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"# Pydantic\n",
@@ -257,17 +257,17 @@
},
{
"cell_type": "code",
"execution_count": 19,
"execution_count": 4,
"id": "9194bcf2",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"FinalResponse(final_output=Joke(setup='Why was the cat sitting on the computer?', punchline='Because it wanted to keep an eye on the mouse!', rating=7))"
"Response(output=Joke(setup='Why was the cat sitting on the computer?', punchline='To keep an eye on the mouse!', rating=8))"
]
},
"execution_count": 19,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
@@ -293,28 +293,28 @@
" response: str = Field(description=\"A conversational response to the user's query\")\n",
"\n",
"\n",
"class FinalResponse(BaseModel):\n",
" final_output: Union[Joke, ConversationalResponse]\n",
"class Response(BaseModel):\n",
" output: Union[Joke, ConversationalResponse]\n",
"\n",
"\n",
"structured_llm = llm.with_structured_output(FinalResponse)\n",
"structured_llm = llm.with_structured_output(Response)\n",
"\n",
"structured_llm.invoke(\"Tell me a joke about cats\")"
]
},
{
"cell_type": "code",
"execution_count": 20,
"execution_count": 5,
"id": "84d86132",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"FinalResponse(final_output=ConversationalResponse(response=\"I'm just a bunch of code, so I don't have feelings, but I'm here and ready to help you! How can I assist you today?\"))"
"Response(output=ConversationalResponse(response=\"I'm just a digital assistant, so I don't have feelings, but I'm here and ready to help you. How can I assist you today?\"))"
]
},
"execution_count": 20,
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
@@ -667,7 +667,7 @@
"\n",
"from langchain_core.output_parsers import PydanticOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from pydantic import BaseModel, Field\n",
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"class Person(BaseModel):\n",
@@ -794,7 +794,7 @@
"\n",
"from langchain_core.messages import AIMessage\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from pydantic import BaseModel, Field\n",
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"class Person(BaseModel):\n",
@@ -915,9 +915,9 @@
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv-2",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "poetry-venv-2"
"name": "python3"
},
"language_info": {
"codemirror_mode": {


@@ -64,14 +64,7 @@
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:43:40.609832Z",
"iopub.status.busy": "2024-09-11T02:43:40.609565Z",
"iopub.status.idle": "2024-09-11T02:43:40.617860Z",
"shell.execute_reply": "2024-09-11T02:43:40.617391Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"# The function name, type hints, and docstring are all part of the tool\n",
@@ -116,17 +109,10 @@
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:43:40.620257Z",
"iopub.status.busy": "2024-09-11T02:43:40.620084Z",
"iopub.status.idle": "2024-09-11T02:43:40.689214Z",
"shell.execute_reply": "2024-09-11T02:43:40.688938Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"from pydantic import BaseModel, Field\n",
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"class add(BaseModel):\n",
@@ -158,14 +144,7 @@
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:43:40.690850Z",
"iopub.status.busy": "2024-09-11T02:43:40.690739Z",
"iopub.status.idle": "2024-09-11T02:43:40.693436Z",
"shell.execute_reply": "2024-09-11T02:43:40.693199Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"from typing_extensions import Annotated, TypedDict\n",
@@ -222,8 +201,7 @@
"\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"if \"OPENAI_API_KEY\" not in os.environ:\n",
" os.environ[\"OPENAI_API_KEY\"] = getpass()\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass()\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-4o-mini\", temperature=0)"
]
@@ -231,19 +209,12 @@
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:43:42.447839Z",
"iopub.status.busy": "2024-09-11T02:43:42.447760Z",
"iopub.status.idle": "2024-09-11T02:43:43.181171Z",
"shell.execute_reply": "2024-09-11T02:43:43.180680Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_iXj4DiW1p7WLjTAQMRO0jxMs', 'function': {'arguments': '{\"a\":3,\"b\":12}', 'name': 'multiply'}, 'type': 'function'}], 'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 17, 'prompt_tokens': 80, 'total_tokens': 97}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_483d39d857', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-0b620986-3f62-4df7-9ba3-4595089f9ad4-0', tool_calls=[{'name': 'multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_iXj4DiW1p7WLjTAQMRO0jxMs', 'type': 'tool_call'}], usage_metadata={'input_tokens': 80, 'output_tokens': 17, 'total_tokens': 97})"
"AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_BwYJ4UgU5pRVCBOUmiu7NhF9', 'function': {'arguments': '{\"a\":3,\"b\":12}', 'name': 'multiply'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 17, 'prompt_tokens': 80, 'total_tokens': 97}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_ba606877f9', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-7f05e19e-4561-40e2-a2d0-8f4e28e9a00f-0', tool_calls=[{'name': 'multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_BwYJ4UgU5pRVCBOUmiu7NhF9', 'type': 'tool_call'}], usage_metadata={'input_tokens': 80, 'output_tokens': 17, 'total_tokens': 97})"
]
},
"execution_count": 5,
@@ -288,25 +259,18 @@
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:43:43.184004Z",
"iopub.status.busy": "2024-09-11T02:43:43.183777Z",
"iopub.status.idle": "2024-09-11T02:43:43.743024Z",
"shell.execute_reply": "2024-09-11T02:43:43.742171Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'name': 'multiply',\n",
" 'args': {'a': 3, 'b': 12},\n",
" 'id': 'call_1fyhJAbJHuKQe6n0PacubGsL',\n",
" 'id': 'call_rcdMie7E89Xx06lEKKxJyB5N',\n",
" 'type': 'tool_call'},\n",
" {'name': 'add',\n",
" 'args': {'a': 11, 'b': 49},\n",
" 'id': 'call_fc2jVkKzwuPWyU7kS9qn1hyG',\n",
" 'id': 'call_nheGN8yfvSJsnIuGZaXihou3',\n",
" 'type': 'tool_call'}]"
]
},
@@ -342,14 +306,7 @@
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:43:43.746273Z",
"iopub.status.busy": "2024-09-11T02:43:43.746020Z",
"iopub.status.idle": "2024-09-11T02:43:44.586236Z",
"shell.execute_reply": "2024-09-11T02:43:44.585619Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
@@ -364,7 +321,7 @@
],
"source": [
"from langchain_core.output_parsers import PydanticToolsParser\n",
"from pydantic import BaseModel, Field\n",
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"class add(BaseModel):\n",


@@ -57,10 +57,9 @@
"\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"if \"OPENAI_API_KEY\" not in os.environ:\n",
" os.environ[\"OPENAI_API_KEY\"] = getpass()\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass()\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-4o-mini\", temperature=0)"
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)"
]
},
{


@@ -57,10 +57,9 @@
"\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"if \"OPENAI_API_KEY\" not in os.environ:\n",
" os.environ[\"OPENAI_API_KEY\"] = getpass()\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass()\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-4o-mini\", temperature=0)"
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)"
]
},
{
@@ -78,7 +77,7 @@
{
"data": {
"text/plain": [
"AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_9cViskmLvPnHjXk9tbVla5HA', 'function': {'arguments': '{\"a\":2,\"b\":4}', 'name': 'Multiply'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 9, 'prompt_tokens': 103, 'total_tokens': 112}, 'model_name': 'gpt-4o-mini', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-095b827e-2bdd-43bb-8897-c843f4504883-0', tool_calls=[{'name': 'Multiply', 'args': {'a': 2, 'b': 4}, 'id': 'call_9cViskmLvPnHjXk9tbVla5HA'}], usage_metadata={'input_tokens': 103, 'output_tokens': 9, 'total_tokens': 112})"
"AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_9cViskmLvPnHjXk9tbVla5HA', 'function': {'arguments': '{\"a\":2,\"b\":4}', 'name': 'Multiply'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 9, 'prompt_tokens': 103, 'total_tokens': 112}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-095b827e-2bdd-43bb-8897-c843f4504883-0', tool_calls=[{'name': 'Multiply', 'args': {'a': 2, 'b': 4}, 'id': 'call_9cViskmLvPnHjXk9tbVla5HA'}], usage_metadata={'input_tokens': 103, 'output_tokens': 9, 'total_tokens': 112})"
]
},
"metadata": {},
@@ -112,7 +111,7 @@
{
"data": {
"text/plain": [
"AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_mCSiJntCwHJUBfaHZVUB2D8W', 'function': {'arguments': '{\"a\":1,\"b\":2}', 'name': 'Add'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 15, 'prompt_tokens': 94, 'total_tokens': 109}, 'model_name': 'gpt-4o-mini', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-28f75260-9900-4bed-8cd3-f1579abb65e5-0', tool_calls=[{'name': 'Add', 'args': {'a': 1, 'b': 2}, 'id': 'call_mCSiJntCwHJUBfaHZVUB2D8W'}], usage_metadata={'input_tokens': 94, 'output_tokens': 15, 'total_tokens': 109})"
"AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_mCSiJntCwHJUBfaHZVUB2D8W', 'function': {'arguments': '{\"a\":1,\"b\":2}', 'name': 'Add'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 15, 'prompt_tokens': 94, 'total_tokens': 109}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-28f75260-9900-4bed-8cd3-f1579abb65e5-0', tool_calls=[{'name': 'Add', 'args': {'a': 1, 'b': 2}, 'id': 'call_mCSiJntCwHJUBfaHZVUB2D8W'}], usage_metadata={'input_tokens': 94, 'output_tokens': 15, 'total_tokens': 109})"
]
},
"metadata": {},


@@ -53,8 +53,7 @@
"\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"if \"OPENAI_API_KEY\" not in os.environ:\n",
" os.environ[\"OPENAI_API_KEY\"] = getpass()\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass()\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-4o-mini\", temperature=0)"
]
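The hunk above toggles between a guarded and an unconditional API-key prompt. A standalone sketch of the guarded pattern (`ensure_api_key` is a hypothetical helper name, not from the notebooks):

```python
import os
from getpass import getpass


def ensure_api_key(var: str = "OPENAI_API_KEY") -> None:
    # Prompt only when the variable is missing, so non-interactive
    # runs (CI, docs builds) with the key already exported never block.
    if var not in os.environ:
        os.environ[var] = getpass(f"{var}: ")
```

The unconditional form re-prompts on every execution, which is why the guarded variant is the one used in executed docs builds.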

View File

@@ -55,20 +55,13 @@
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:52:51.802901Z",
"iopub.status.busy": "2024-09-11T02:52:51.802682Z",
"iopub.status.idle": "2024-09-11T02:52:52.398167Z",
"shell.execute_reply": "2024-09-11T02:52:52.397911Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"# | output: false\n",
"# | echo: false\n",
"\n",
"%pip install -qU langchain langchain_openai\n",
"# %pip install -qU langchain langchain_openai\n",
"\n",
"import os\n",
"from getpass import getpass\n",
@@ -78,7 +71,7 @@
"if \"OPENAI_API_KEY\" not in os.environ:\n",
" os.environ[\"OPENAI_API_KEY\"] = getpass()\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-4o-mini\", temperature=0)"
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)"
]
},
{
@@ -93,14 +86,7 @@
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:52:52.399922Z",
"iopub.status.busy": "2024-09-11T02:52:52.399796Z",
"iopub.status.idle": "2024-09-11T02:52:52.406349Z",
"shell.execute_reply": "2024-09-11T02:52:52.406077Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"from typing import List\n",
@@ -155,29 +141,22 @@
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:52:52.407763Z",
"iopub.status.busy": "2024-09-11T02:52:52.407691Z",
"iopub.status.idle": "2024-09-11T02:52:52.411761Z",
"shell.execute_reply": "2024-09-11T02:52:52.411512Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'description': 'Add the list of favorite pets.',\n",
" 'properties': {'pets': {'description': 'List of favorite pets to set.',\n",
" 'items': {'type': 'string'},\n",
" 'title': 'Pets',\n",
" 'type': 'array'},\n",
" 'user_id': {'description': \"User's ID.\",\n",
" 'title': 'User Id',\n",
"{'title': 'update_favorite_petsSchema',\n",
" 'description': 'Add the list of favorite pets.',\n",
" 'type': 'object',\n",
" 'properties': {'pets': {'title': 'Pets',\n",
" 'description': 'List of favorite pets to set.',\n",
" 'type': 'array',\n",
" 'items': {'type': 'string'}},\n",
" 'user_id': {'title': 'User Id',\n",
" 'description': \"User's ID.\",\n",
" 'type': 'string'}},\n",
" 'required': ['pets', 'user_id'],\n",
" 'title': 'update_favorite_petsSchema',\n",
" 'type': 'object'}"
" 'required': ['pets', 'user_id']}"
]
},
"execution_count": 3,
@@ -199,26 +178,19 @@
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:52:52.427826Z",
"iopub.status.busy": "2024-09-11T02:52:52.427691Z",
"iopub.status.idle": "2024-09-11T02:52:52.431791Z",
"shell.execute_reply": "2024-09-11T02:52:52.431574Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'description': 'Add the list of favorite pets.',\n",
" 'properties': {'pets': {'description': 'List of favorite pets to set.',\n",
" 'items': {'type': 'string'},\n",
" 'title': 'Pets',\n",
" 'type': 'array'}},\n",
" 'required': ['pets'],\n",
" 'title': 'update_favorite_pets',\n",
" 'type': 'object'}"
"{'title': 'update_favorite_pets',\n",
" 'description': 'Add the list of favorite pets.',\n",
" 'type': 'object',\n",
" 'properties': {'pets': {'title': 'Pets',\n",
" 'description': 'List of favorite pets to set.',\n",
" 'type': 'array',\n",
" 'items': {'type': 'string'}}},\n",
" 'required': ['pets']}"
]
},
"execution_count": 4,
@@ -240,14 +212,7 @@
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:52:52.433096Z",
"iopub.status.busy": "2024-09-11T02:52:52.433014Z",
"iopub.status.idle": "2024-09-11T02:52:52.437499Z",
"shell.execute_reply": "2024-09-11T02:52:52.437239Z"
}
},
"metadata": {},
"outputs": [
{
"name": "stdout",
@@ -275,21 +240,14 @@
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:52:52.439148Z",
"iopub.status.busy": "2024-09-11T02:52:52.438742Z",
"iopub.status.idle": "2024-09-11T02:52:53.394524Z",
"shell.execute_reply": "2024-09-11T02:52:53.394005Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'name': 'update_favorite_pets',\n",
" 'args': {'pets': ['cats', 'parrots']},\n",
" 'id': 'call_pZ6XVREGh1L0BBSsiGIf1xVm',\n",
" 'id': 'call_W3cn4lZmJlyk8PCrKN4PRwqB',\n",
" 'type': 'tool_call'}]"
]
},
@@ -326,21 +284,14 @@
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:52:53.397134Z",
"iopub.status.busy": "2024-09-11T02:52:53.396972Z",
"iopub.status.idle": "2024-09-11T02:52:53.403332Z",
"shell.execute_reply": "2024-09-11T02:52:53.402787Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'name': 'update_favorite_pets',\n",
" 'args': {'pets': ['cats', 'parrots'], 'user_id': '123'},\n",
" 'id': 'call_pZ6XVREGh1L0BBSsiGIf1xVm',\n",
" 'id': 'call_W3cn4lZmJlyk8PCrKN4PRwqB',\n",
" 'type': 'tool_call'}]"
]
},
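The two tool-call outputs above differ only in a `user_id` injected at runtime, after the model produced the call. A minimal stdlib sketch of that injection; `inject_user_id` is a hypothetical helper, while the notebook achieves the same effect through the tool's schema handling rather than this exact function:

```python
# Hypothetical helper: return copies of the model's tool calls with a
# runtime-only user_id argument added, leaving the originals untouched.
def inject_user_id(tool_calls: list[dict], user_id: str) -> list[dict]:
    return [
        {**tc, "args": {**tc["args"], "user_id": user_id}}
        for tc in tool_calls
    ]


calls = [{
    "name": "update_favorite_pets",
    "args": {"pets": ["cats", "parrots"]},
    "id": "call_example",  # placeholder id
    "type": "tool_call",
}]
```

This keeps `user_id` out of the schema the LLM sees while still supplying it to the tool at execution time.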
@@ -378,19 +329,12 @@
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:52:53.405183Z",
"iopub.status.busy": "2024-09-11T02:52:53.405048Z",
"iopub.status.idle": "2024-09-11T02:52:54.248576Z",
"shell.execute_reply": "2024-09-11T02:52:54.248107Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[ToolMessage(content='null', name='update_favorite_pets', tool_call_id='call_oYCD0THSedHTbwNAY3NW6uUj')]"
"[ToolMessage(content='null', name='update_favorite_pets', tool_call_id='call_HUyF6AihqANzEYxQnTUKxkXj')]"
]
},
"execution_count": 8,
@@ -421,14 +365,7 @@
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:52:54.251169Z",
"iopub.status.busy": "2024-09-11T02:52:54.250948Z",
"iopub.status.idle": "2024-09-11T02:52:54.254279Z",
"shell.execute_reply": "2024-09-11T02:52:54.253889Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
@@ -457,29 +394,22 @@
{
"cell_type": "code",
"execution_count": 10,
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:52:54.256425Z",
"iopub.status.busy": "2024-09-11T02:52:54.256279Z",
"iopub.status.idle": "2024-09-11T02:52:54.262533Z",
"shell.execute_reply": "2024-09-11T02:52:54.262228Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'description': 'Update list of favorite pets',\n",
" 'properties': {'pets': {'description': 'List of favorite pets to set.',\n",
" 'items': {'type': 'string'},\n",
" 'title': 'Pets',\n",
" 'type': 'array'},\n",
" 'user_id': {'description': \"User's ID.\",\n",
" 'title': 'User Id',\n",
"{'title': 'UpdateFavoritePetsSchema',\n",
" 'description': 'Update list of favorite pets',\n",
" 'type': 'object',\n",
" 'properties': {'pets': {'title': 'Pets',\n",
" 'description': 'List of favorite pets to set.',\n",
" 'type': 'array',\n",
" 'items': {'type': 'string'}},\n",
" 'user_id': {'title': 'User Id',\n",
" 'description': \"User's ID.\",\n",
" 'type': 'string'}},\n",
" 'required': ['pets', 'user_id'],\n",
" 'title': 'UpdateFavoritePetsSchema',\n",
" 'type': 'object'}"
" 'required': ['pets', 'user_id']}"
]
},
"execution_count": 10,
@@ -488,8 +418,8 @@
}
],
"source": [
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"from langchain_core.tools import BaseTool\n",
"from pydantic import BaseModel, Field\n",
"\n",
"\n",
"class UpdateFavoritePetsSchema(BaseModel):\n",
@@ -510,26 +440,19 @@
{
"cell_type": "code",
"execution_count": 11,
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:52:54.264192Z",
"iopub.status.busy": "2024-09-11T02:52:54.264074Z",
"iopub.status.idle": "2024-09-11T02:52:54.267400Z",
"shell.execute_reply": "2024-09-11T02:52:54.267113Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'description': 'Update list of favorite pets',\n",
" 'properties': {'pets': {'description': 'List of favorite pets to set.',\n",
" 'items': {'type': 'string'},\n",
" 'title': 'Pets',\n",
" 'type': 'array'}},\n",
" 'required': ['pets'],\n",
" 'title': 'update_favorite_pets',\n",
" 'type': 'object'}"
"{'title': 'update_favorite_pets',\n",
" 'description': 'Update list of favorite pets',\n",
" 'type': 'object',\n",
" 'properties': {'pets': {'title': 'Pets',\n",
" 'description': 'List of favorite pets to set.',\n",
" 'type': 'array',\n",
" 'items': {'type': 'string'}}},\n",
" 'required': ['pets']}"
]
},
"execution_count": 11,
@@ -543,33 +466,26 @@
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:52:54.269027Z",
"iopub.status.busy": "2024-09-11T02:52:54.268905Z",
"iopub.status.idle": "2024-09-11T02:52:54.276123Z",
"shell.execute_reply": "2024-09-11T02:52:54.275876Z"
}
},
"execution_count": 22,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'description': 'Update list of favorite pets',\n",
" 'properties': {'pets': {'description': 'List of favorite pets to set.',\n",
" 'items': {'type': 'string'},\n",
" 'title': 'Pets',\n",
" 'type': 'array'},\n",
" 'user_id': {'description': \"User's ID.\",\n",
" 'title': 'User Id',\n",
"{'title': 'UpdateFavoritePetsSchema',\n",
" 'description': 'Update list of favorite pets',\n",
" 'type': 'object',\n",
" 'properties': {'pets': {'title': 'Pets',\n",
" 'description': 'List of favorite pets to set.',\n",
" 'type': 'array',\n",
" 'items': {'type': 'string'}},\n",
" 'user_id': {'title': 'User Id',\n",
" 'description': \"User's ID.\",\n",
" 'type': 'string'}},\n",
" 'required': ['pets', 'user_id'],\n",
" 'title': 'UpdateFavoritePetsSchema',\n",
" 'type': 'object'}"
" 'required': ['pets', 'user_id']}"
]
},
"execution_count": 12,
"execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}
@@ -592,30 +508,23 @@
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:52:54.277497Z",
"iopub.status.busy": "2024-09-11T02:52:54.277400Z",
"iopub.status.idle": "2024-09-11T02:52:54.280323Z",
"shell.execute_reply": "2024-09-11T02:52:54.280072Z"
}
},
"execution_count": 23,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'description': 'Update list of favorite pets',\n",
" 'properties': {'pets': {'description': 'List of favorite pets to set.',\n",
" 'items': {'type': 'string'},\n",
" 'title': 'Pets',\n",
" 'type': 'array'}},\n",
" 'required': ['pets'],\n",
" 'title': 'update_favorite_pets',\n",
" 'type': 'object'}"
"{'title': 'update_favorite_pets',\n",
" 'description': 'Update list of favorite pets',\n",
" 'type': 'object',\n",
" 'properties': {'pets': {'title': 'Pets',\n",
" 'description': 'List of favorite pets to set.',\n",
" 'type': 'array',\n",
" 'items': {'type': 'string'}}},\n",
" 'required': ['pets']}"
]
},
"execution_count": 13,
"execution_count": 23,
"metadata": {},
"output_type": "execute_result"
}
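The two schema dumps in this hunk carry the same content in different key orders (one emitter sorts keys alphabetically, the other preserves insertion order). A stdlib check, using the dicts from the hunk, that the two orderings describe the same mapping:

```python
import json

# Alphabetically ordered rendering of the schema, as in the first dump.
schema_a = {
    "description": "Update list of favorite pets",
    "properties": {"pets": {"description": "List of favorite pets to set.",
                            "items": {"type": "string"},
                            "title": "Pets",
                            "type": "array"}},
    "required": ["pets"],
    "title": "update_favorite_pets",
    "type": "object",
}
# Insertion-ordered rendering, as in the second dump.
schema_b = {
    "title": "update_favorite_pets",
    "description": "Update list of favorite pets",
    "type": "object",
    "properties": {"pets": {"title": "Pets",
                            "description": "List of favorite pets to set.",
                            "type": "array",
                            "items": {"type": "string"}}},
    "required": ["pets"],
}
# Key order is presentational; canonical JSON shows the schemas are equal.
assert json.dumps(schema_a, sort_keys=True) == json.dumps(schema_b, sort_keys=True)
```

So the diffs here reflect a change in how schemas are serialized, not a change in the tool's interface.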
@@ -626,30 +535,23 @@
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:52:54.281741Z",
"iopub.status.busy": "2024-09-11T02:52:54.281642Z",
"iopub.status.idle": "2024-09-11T02:52:54.288857Z",
"shell.execute_reply": "2024-09-11T02:52:54.288632Z"
}
},
"execution_count": 24,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'description': 'Use the tool.\\n\\nAdd run_manager: Optional[CallbackManagerForToolRun] = None\\nto child implementations to enable tracing.',\n",
" 'properties': {'pets': {'items': {'type': 'string'},\n",
" 'title': 'Pets',\n",
" 'type': 'array'},\n",
"{'title': 'update_favorite_petsSchema',\n",
" 'description': 'Use the tool.\\n\\nAdd run_manager: Optional[CallbackManagerForToolRun] = None\\nto child implementations to enable tracing.',\n",
" 'type': 'object',\n",
" 'properties': {'pets': {'title': 'Pets',\n",
" 'type': 'array',\n",
" 'items': {'type': 'string'}},\n",
" 'user_id': {'title': 'User Id', 'type': 'string'}},\n",
" 'required': ['pets', 'user_id'],\n",
" 'title': 'update_favorite_petsSchema',\n",
" 'type': 'object'}"
" 'required': ['pets', 'user_id']}"
]
},
"execution_count": 14,
"execution_count": 24,
"metadata": {},
"output_type": "execute_result"
}
@@ -668,29 +570,22 @@
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T02:52:54.290237Z",
"iopub.status.busy": "2024-09-11T02:52:54.290145Z",
"iopub.status.idle": "2024-09-11T02:52:54.294273Z",
"shell.execute_reply": "2024-09-11T02:52:54.294053Z"
}
},
"execution_count": 26,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'description': 'Update list of favorite pets',\n",
" 'properties': {'pets': {'items': {'type': 'string'},\n",
" 'title': 'Pets',\n",
" 'type': 'array'}},\n",
" 'required': ['pets'],\n",
" 'title': 'update_favorite_pets',\n",
" 'type': 'object'}"
"{'title': 'update_favorite_pets',\n",
" 'description': 'Update list of favorite pets',\n",
" 'type': 'object',\n",
" 'properties': {'pets': {'title': 'Pets',\n",
" 'type': 'array',\n",
" 'items': {'type': 'string'}}},\n",
" 'required': ['pets']}"
]
},
"execution_count": 15,
"execution_count": 26,
"metadata": {},
"output_type": "execute_result"
}
@@ -702,7 +597,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
@@ -716,7 +611,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.10.5"
}
},
"nbformat": 4,

View File

@@ -58,10 +58,9 @@
"\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"if \"OPENAI_API_KEY\" not in os.environ:\n",
" os.environ[\"OPENAI_API_KEY\"] = getpass()\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass()\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-4o-mini\", temperature=0)\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)\n",
"llm_with_tools = llm.bind_tools(tools)"
]
},

View File

@@ -46,33 +46,19 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"id": "84f70856-b865-4658-9930-7577fb4712ce",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T03:01:11.847104Z",
"iopub.status.busy": "2024-09-11T03:01:11.846727Z",
"iopub.status.idle": "2024-09-11T03:01:13.200038Z",
"shell.execute_reply": "2024-09-11T03:01:13.199355Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"!pip install -qU langchain-community wikipedia"
"!pip install -qU wikipedia"
]
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 51,
"id": "b4eaed85-c5a6-4ba9-b401-40258b0131c2",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T03:01:13.203356Z",
"iopub.status.busy": "2024-09-11T03:01:13.202996Z",
"iopub.status.idle": "2024-09-11T03:01:14.740686Z",
"shell.execute_reply": "2024-09-11T03:01:14.739748Z"
}
},
"metadata": {},
"outputs": [
{
"name": "stdout",
@@ -103,25 +89,18 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 55,
"id": "7f094f01-2e98-4947-acc4-0846963a96e0",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T03:01:14.745018Z",
"iopub.status.busy": "2024-09-11T03:01:14.744347Z",
"iopub.status.idle": "2024-09-11T03:01:14.752527Z",
"shell.execute_reply": "2024-09-11T03:01:14.752112Z"
}
},
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Name: wikipedia\n",
"Description: A wrapper around Wikipedia. Useful for when you need to answer general questions about people, places, companies, facts, historical events, or other subjects. Input should be a search query.\n",
"args schema: {'query': {'description': 'query to look up on wikipedia', 'title': 'Query', 'type': 'string'}}\n",
"returns directly?: False\n"
"Name: wiki-tool\n",
"Description: look up things in wikipedia\n",
"args schema: {'query': {'title': 'Query', 'description': 'query to look up in Wikipedia, should be 3 or less words', 'type': 'string'}}\n",
"returns directly?: True\n"
]
}
],
@@ -145,16 +124,9 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 56,
"id": "1365784c-e666-41c8-a1bb-e50f822b5936",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T03:01:14.755274Z",
"iopub.status.busy": "2024-09-11T03:01:14.755068Z",
"iopub.status.idle": "2024-09-11T03:01:15.375704Z",
"shell.execute_reply": "2024-09-11T03:01:15.374841Z"
}
},
"metadata": {},
"outputs": [
{
"name": "stdout",
@@ -168,7 +140,7 @@
"source": [
"from langchain_community.tools import WikipediaQueryRun\n",
"from langchain_community.utilities import WikipediaAPIWrapper\n",
"from pydantic import BaseModel, Field\n",
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"class WikiInputs(BaseModel):\n",
@@ -192,16 +164,9 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 57,
"id": "6e8850d6-6840-443e-a2be-adf64b30975c",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T03:01:15.378598Z",
"iopub.status.busy": "2024-09-11T03:01:15.378414Z",
"iopub.status.idle": "2024-09-11T03:01:15.382248Z",
"shell.execute_reply": "2024-09-11T03:01:15.381801Z"
}
},
"metadata": {},
"outputs": [
{
"name": "stdout",
@@ -209,7 +174,7 @@
"text": [
"Name: wiki-tool\n",
"Description: look up things in wikipedia\n",
"args schema: {'query': {'description': 'query to look up in Wikipedia, should be 3 or less words', 'title': 'Query', 'type': 'string'}}\n",
"args schema: {'query': {'title': 'Query', 'description': 'query to look up in Wikipedia, should be 3 or less words', 'type': 'string'}}\n",
"returns directly?: True\n"
]
}
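The `WikiInputs` schema above constrains queries via a field description ("should be 3 or less words") that is shown to the model rather than enforced in code. A hypothetical stdlib sketch of enforcing the same rule programmatically, for illustration only:

```python
def validate_query(query: str, max_words: int = 3) -> str:
    # Reject queries longer than the word budget the schema describes.
    if len(query.split()) > max_words:
        raise ValueError(f"query must be {max_words} words or fewer")
    return query
```

In practice a description-only constraint relies on the model cooperating; a validator like this would catch violations at call time instead.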
@@ -247,9 +212,9 @@
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv-311",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "poetry-venv-311"
"name": "python3"
},
"language_info": {
"codemirror_mode": {

View File

@@ -166,7 +166,7 @@
"\n",
"from langchain_openai.chat_models import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-4o-mini\", temperature=0)"
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)"
]
},
{

View File

@@ -53,14 +53,7 @@
"cell_type": "code",
"execution_count": 2,
"id": "08785b6d-722d-4620-b6ec-36deb3842c69",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T03:10:25.005243Z",
"iopub.status.busy": "2024-09-11T03:10:25.005074Z",
"iopub.status.idle": "2024-09-11T03:10:25.007679Z",
"shell.execute_reply": "2024-09-11T03:10:25.007361Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
@@ -88,16 +81,9 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 4,
"id": "86258950-5e61-4340-81b9-84a5d26e8773",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T03:10:25.009496Z",
"iopub.status.busy": "2024-09-11T03:10:25.009371Z",
"iopub.status.idle": "2024-09-11T03:10:25.552917Z",
"shell.execute_reply": "2024-09-11T03:10:25.552592Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"# | echo: false\n",
@@ -105,24 +91,16 @@
"\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"if \"OPENAI_API_KEY\" not in os.environ:\n",
" os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-4o-mini\", temperature=0)"
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 5,
"id": "1d20604e-c4d1-4d21-841b-23e4f61aec36",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T03:10:25.554543Z",
"iopub.status.busy": "2024-09-11T03:10:25.554439Z",
"iopub.status.idle": "2024-09-11T03:10:25.631610Z",
"shell.execute_reply": "2024-09-11T03:10:25.631346Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"# Define tool\n",
@@ -153,33 +131,26 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 6,
"id": "d354664c-ac44-4967-a35f-8912b3ad9477",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T03:10:25.633050Z",
"iopub.status.busy": "2024-09-11T03:10:25.632978Z",
"iopub.status.idle": "2024-09-11T03:10:26.556508Z",
"shell.execute_reply": "2024-09-11T03:10:26.556233Z"
}
},
"metadata": {},
"outputs": [
{
"ename": "ValidationError",
"evalue": "1 validation error for complex_toolSchema\ndict_arg\n Field required [type=missing, input_value={'int_arg': 5, 'float_arg': 2.1}, input_type=dict]\n For further information visit https://errors.pydantic.dev/2.8/v/missing",
"evalue": "1 validation error for complex_toolSchema\ndict_arg\n field required (type=value_error.missing)",
"output_type": "error",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mValidationError\u001b[0m Traceback (most recent call last)",
"Cell \u001b[0;32mIn[5], line 1\u001b[0m\n\u001b[0;32m----> 1\u001b[0m \u001b[43mchain\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43minvoke\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m 2\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43muse complex tool. the args are 5, 2.1, empty dictionary. don\u001b[39;49m\u001b[38;5;124;43m'\u001b[39;49m\u001b[38;5;124;43mt forget dict_arg\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\n\u001b[1;32m 3\u001b[0m \u001b[43m)\u001b[49m\n",
"File \u001b[0;32m~/langchain/.venv/lib/python3.11/site-packages/langchain_core/runnables/base.py:2998\u001b[0m, in \u001b[0;36mRunnableSequence.invoke\u001b[0;34m(self, input, config, **kwargs)\u001b[0m\n\u001b[1;32m 2996\u001b[0m \u001b[38;5;28minput\u001b[39m \u001b[38;5;241m=\u001b[39m context\u001b[38;5;241m.\u001b[39mrun(step\u001b[38;5;241m.\u001b[39minvoke, \u001b[38;5;28minput\u001b[39m, config, \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39mkwargs)\n\u001b[1;32m 2997\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m:\n\u001b[0;32m-> 2998\u001b[0m \u001b[38;5;28minput\u001b[39m \u001b[38;5;241m=\u001b[39m \u001b[43mcontext\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mrun\u001b[49m\u001b[43m(\u001b[49m\u001b[43mstep\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43minvoke\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mconfig\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 2999\u001b[0m \u001b[38;5;66;03m# finish the root run\u001b[39;00m\n\u001b[1;32m 3000\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m \u001b[38;5;167;01mBaseException\u001b[39;00m \u001b[38;5;28;01mas\u001b[39;00m e:\n",
"File \u001b[0;32m~/langchain/.venv/lib/python3.11/site-packages/langchain_core/tools/base.py:456\u001b[0m, in \u001b[0;36mBaseTool.invoke\u001b[0;34m(self, input, config, **kwargs)\u001b[0m\n\u001b[1;32m 449\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21minvoke\u001b[39m(\n\u001b[1;32m 450\u001b[0m \u001b[38;5;28mself\u001b[39m,\n\u001b[1;32m 451\u001b[0m \u001b[38;5;28minput\u001b[39m: Union[\u001b[38;5;28mstr\u001b[39m, Dict, ToolCall],\n\u001b[1;32m 452\u001b[0m config: Optional[RunnableConfig] \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;01mNone\u001b[39;00m,\n\u001b[1;32m 453\u001b[0m \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39mkwargs: Any,\n\u001b[1;32m 454\u001b[0m ) \u001b[38;5;241m-\u001b[39m\u001b[38;5;241m>\u001b[39m Any:\n\u001b[1;32m 455\u001b[0m tool_input, kwargs \u001b[38;5;241m=\u001b[39m _prep_run_args(\u001b[38;5;28minput\u001b[39m, config, \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39mkwargs)\n\u001b[0;32m--> 456\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mrun\u001b[49m\u001b[43m(\u001b[49m\u001b[43mtool_input\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n",
"File \u001b[0;32m~/langchain/.venv/lib/python3.11/site-packages/langchain_core/tools/base.py:659\u001b[0m, in \u001b[0;36mBaseTool.run\u001b[0;34m(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, run_id, config, tool_call_id, **kwargs)\u001b[0m\n\u001b[1;32m 657\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m error_to_raise:\n\u001b[1;32m 658\u001b[0m run_manager\u001b[38;5;241m.\u001b[39mon_tool_error(error_to_raise)\n\u001b[0;32m--> 659\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m error_to_raise\n\u001b[1;32m 660\u001b[0m output \u001b[38;5;241m=\u001b[39m _format_output(content, artifact, tool_call_id, \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mname, status)\n\u001b[1;32m 661\u001b[0m run_manager\u001b[38;5;241m.\u001b[39mon_tool_end(output, color\u001b[38;5;241m=\u001b[39mcolor, name\u001b[38;5;241m=\u001b[39m\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mname, \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39mkwargs)\n",
"File \u001b[0;32m~/langchain/.venv/lib/python3.11/site-packages/langchain_core/tools/base.py:622\u001b[0m, in \u001b[0;36mBaseTool.run\u001b[0;34m(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, run_id, config, tool_call_id, **kwargs)\u001b[0m\n\u001b[1;32m 620\u001b[0m context \u001b[38;5;241m=\u001b[39m copy_context()\n\u001b[1;32m 621\u001b[0m context\u001b[38;5;241m.\u001b[39mrun(_set_config_context, child_config)\n\u001b[0;32m--> 622\u001b[0m tool_args, tool_kwargs \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_to_args_and_kwargs\u001b[49m\u001b[43m(\u001b[49m\u001b[43mtool_input\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 623\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m signature(\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_run)\u001b[38;5;241m.\u001b[39mparameters\u001b[38;5;241m.\u001b[39mget(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mrun_manager\u001b[39m\u001b[38;5;124m\"\u001b[39m):\n\u001b[1;32m 624\u001b[0m tool_kwargs[\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mrun_manager\u001b[39m\u001b[38;5;124m\"\u001b[39m] \u001b[38;5;241m=\u001b[39m run_manager\n",
"File \u001b[0;32m~/langchain/.venv/lib/python3.11/site-packages/langchain_core/tools/base.py:545\u001b[0m, in \u001b[0;36mBaseTool._to_args_and_kwargs\u001b[0;34m(self, tool_input)\u001b[0m\n\u001b[1;32m 544\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21m_to_args_and_kwargs\u001b[39m(\u001b[38;5;28mself\u001b[39m, tool_input: Union[\u001b[38;5;28mstr\u001b[39m, Dict]) \u001b[38;5;241m-\u001b[39m\u001b[38;5;241m>\u001b[39m Tuple[Tuple, Dict]:\n\u001b[0;32m--> 545\u001b[0m tool_input \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_parse_input\u001b[49m\u001b[43m(\u001b[49m\u001b[43mtool_input\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 546\u001b[0m \u001b[38;5;66;03m# For backwards compatibility, if run_input is a string,\u001b[39;00m\n\u001b[1;32m 547\u001b[0m \u001b[38;5;66;03m# pass as a positional argument.\u001b[39;00m\n\u001b[1;32m 548\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28misinstance\u001b[39m(tool_input, \u001b[38;5;28mstr\u001b[39m):\n",
"File \u001b[0;32m~/langchain/.venv/lib/python3.11/site-packages/langchain_core/tools/base.py:487\u001b[0m, in \u001b[0;36mBaseTool._parse_input\u001b[0;34m(self, tool_input)\u001b[0m\n\u001b[1;32m 485\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m input_args \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m:\n\u001b[1;32m 486\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28missubclass\u001b[39m(input_args, BaseModel):\n\u001b[0;32m--> 487\u001b[0m result \u001b[38;5;241m=\u001b[39m \u001b[43minput_args\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mmodel_validate\u001b[49m\u001b[43m(\u001b[49m\u001b[43mtool_input\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 488\u001b[0m result_dict \u001b[38;5;241m=\u001b[39m result\u001b[38;5;241m.\u001b[39mmodel_dump()\n\u001b[1;32m 489\u001b[0m \u001b[38;5;28;01melif\u001b[39;00m \u001b[38;5;28missubclass\u001b[39m(input_args, BaseModelV1):\n",
"File \u001b[0;32m~/langchain/.venv/lib/python3.11/site-packages/pydantic/main.py:568\u001b[0m, in \u001b[0;36mBaseModel.model_validate\u001b[0;34m(cls, obj, strict, from_attributes, context)\u001b[0m\n\u001b[1;32m 566\u001b[0m \u001b[38;5;66;03m# `__tracebackhide__` tells pytest and some other tools to omit this function from tracebacks\u001b[39;00m\n\u001b[1;32m 567\u001b[0m __tracebackhide__ \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;01mTrue\u001b[39;00m\n\u001b[0;32m--> 568\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43mcls\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m__pydantic_validator__\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mvalidate_python\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m 569\u001b[0m \u001b[43m \u001b[49m\u001b[43mobj\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mstrict\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mstrict\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mfrom_attributes\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mfrom_attributes\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mcontext\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mcontext\u001b[49m\n\u001b[1;32m 570\u001b[0m \u001b[43m\u001b[49m\u001b[43m)\u001b[49m\n",
"\u001b[0;31mValidationError\u001b[0m: 1 validation error for complex_toolSchema\ndict_arg\n Field required [type=missing, input_value={'int_arg': 5, 'float_arg': 2.1}, input_type=dict]\n For further information visit https://errors.pydantic.dev/2.8/v/missing"
"Cell \u001b[0;32mIn[6], line 1\u001b[0m\n\u001b[0;32m----> 1\u001b[0m \u001b[43mchain\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43minvoke\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m 2\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43muse complex tool. the args are 5, 2.1, empty dictionary. don\u001b[39;49m\u001b[38;5;124;43m'\u001b[39;49m\u001b[38;5;124;43mt forget dict_arg\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\n\u001b[1;32m 3\u001b[0m \u001b[43m)\u001b[49m\n",
"File \u001b[0;32m~/.pyenv/versions/3.10.5/lib/python3.10/site-packages/langchain_core/runnables/base.py:2572\u001b[0m, in \u001b[0;36mRunnableSequence.invoke\u001b[0;34m(self, input, config, **kwargs)\u001b[0m\n\u001b[1;32m 2570\u001b[0m \u001b[38;5;28minput\u001b[39m \u001b[38;5;241m=\u001b[39m step\u001b[38;5;241m.\u001b[39minvoke(\u001b[38;5;28minput\u001b[39m, config, \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39mkwargs)\n\u001b[1;32m 2571\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m:\n\u001b[0;32m-> 2572\u001b[0m \u001b[38;5;28minput\u001b[39m \u001b[38;5;241m=\u001b[39m \u001b[43mstep\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43minvoke\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mconfig\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 2573\u001b[0m \u001b[38;5;66;03m# finish the root run\u001b[39;00m\n\u001b[1;32m 2574\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m \u001b[38;5;167;01mBaseException\u001b[39;00m \u001b[38;5;28;01mas\u001b[39;00m e:\n",
"File \u001b[0;32m~/.pyenv/versions/3.10.5/lib/python3.10/site-packages/langchain_core/tools.py:380\u001b[0m, in \u001b[0;36mBaseTool.invoke\u001b[0;34m(self, input, config, **kwargs)\u001b[0m\n\u001b[1;32m 373\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21minvoke\u001b[39m(\n\u001b[1;32m 374\u001b[0m \u001b[38;5;28mself\u001b[39m,\n\u001b[1;32m 375\u001b[0m \u001b[38;5;28minput\u001b[39m: Union[\u001b[38;5;28mstr\u001b[39m, Dict],\n\u001b[1;32m 376\u001b[0m config: Optional[RunnableConfig] \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;01mNone\u001b[39;00m,\n\u001b[1;32m 377\u001b[0m \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39mkwargs: Any,\n\u001b[1;32m 378\u001b[0m ) \u001b[38;5;241m-\u001b[39m\u001b[38;5;241m>\u001b[39m Any:\n\u001b[1;32m 379\u001b[0m config \u001b[38;5;241m=\u001b[39m ensure_config(config)\n\u001b[0;32m--> 380\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mrun\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m 381\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;28;43minput\u001b[39;49m\u001b[43m,\u001b[49m\n\u001b[1;32m 382\u001b[0m \u001b[43m \u001b[49m\u001b[43mcallbacks\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mconfig\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mget\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mcallbacks\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m)\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 383\u001b[0m \u001b[43m \u001b[49m\u001b[43mtags\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mconfig\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mget\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mtags\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m)\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 384\u001b[0m \u001b[43m \u001b[49m\u001b[43mmetadata\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mconfig\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mget\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mmetadata\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m)\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 385\u001b[0m \u001b[43m \u001b[49m\u001b[43mrun_name\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mconfig\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mget\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mrun_name\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m)\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 386\u001b[0m \u001b[43m \u001b[49m\u001b[43mrun_id\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mconfig\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mpop\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mrun_id\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43;01mNone\u001b[39;49;00m\u001b[43m)\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 387\u001b[0m \u001b[43m \u001b[49m\u001b[43mconfig\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mconfig\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 388\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 389\u001b[0m \u001b[43m \u001b[49m\u001b[43m)\u001b[49m\n",
"File \u001b[0;32m~/.pyenv/versions/3.10.5/lib/python3.10/site-packages/langchain_core/tools.py:537\u001b[0m, in \u001b[0;36mBaseTool.run\u001b[0;34m(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, run_id, config, **kwargs)\u001b[0m\n\u001b[1;32m 535\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m ValidationError \u001b[38;5;28;01mas\u001b[39;00m e:\n\u001b[1;32m 536\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mhandle_validation_error:\n\u001b[0;32m--> 537\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m e\n\u001b[1;32m 538\u001b[0m \u001b[38;5;28;01melif\u001b[39;00m \u001b[38;5;28misinstance\u001b[39m(\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mhandle_validation_error, \u001b[38;5;28mbool\u001b[39m):\n\u001b[1;32m 539\u001b[0m observation \u001b[38;5;241m=\u001b[39m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mTool input validation error\u001b[39m\u001b[38;5;124m\"\u001b[39m\n",
"File \u001b[0;32m~/.pyenv/versions/3.10.5/lib/python3.10/site-packages/langchain_core/tools.py:526\u001b[0m, in \u001b[0;36mBaseTool.run\u001b[0;34m(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, run_id, config, **kwargs)\u001b[0m\n\u001b[1;32m 524\u001b[0m context \u001b[38;5;241m=\u001b[39m copy_context()\n\u001b[1;32m 525\u001b[0m context\u001b[38;5;241m.\u001b[39mrun(_set_config_context, child_config)\n\u001b[0;32m--> 526\u001b[0m parsed_input \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_parse_input\u001b[49m\u001b[43m(\u001b[49m\u001b[43mtool_input\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 527\u001b[0m tool_args, tool_kwargs \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_to_args_and_kwargs(parsed_input)\n\u001b[1;32m 528\u001b[0m observation \u001b[38;5;241m=\u001b[39m (\n\u001b[1;32m 529\u001b[0m context\u001b[38;5;241m.\u001b[39mrun(\n\u001b[1;32m 530\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_run, \u001b[38;5;241m*\u001b[39mtool_args, run_manager\u001b[38;5;241m=\u001b[39mrun_manager, \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39mtool_kwargs\n\u001b[0;32m (...)\u001b[0m\n\u001b[1;32m 533\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m context\u001b[38;5;241m.\u001b[39mrun(\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_run, \u001b[38;5;241m*\u001b[39mtool_args, \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39mtool_kwargs)\n\u001b[1;32m 534\u001b[0m )\n",
"File \u001b[0;32m~/.pyenv/versions/3.10.5/lib/python3.10/site-packages/langchain_core/tools.py:424\u001b[0m, in \u001b[0;36mBaseTool._parse_input\u001b[0;34m(self, tool_input)\u001b[0m\n\u001b[1;32m 422\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m:\n\u001b[1;32m 423\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m input_args \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m:\n\u001b[0;32m--> 424\u001b[0m result \u001b[38;5;241m=\u001b[39m \u001b[43minput_args\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mparse_obj\u001b[49m\u001b[43m(\u001b[49m\u001b[43mtool_input\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 425\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m {\n\u001b[1;32m 426\u001b[0m k: \u001b[38;5;28mgetattr\u001b[39m(result, k)\n\u001b[1;32m 427\u001b[0m \u001b[38;5;28;01mfor\u001b[39;00m k, v \u001b[38;5;129;01min\u001b[39;00m result\u001b[38;5;241m.\u001b[39mdict()\u001b[38;5;241m.\u001b[39mitems()\n\u001b[1;32m 428\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m k \u001b[38;5;129;01min\u001b[39;00m tool_input\n\u001b[1;32m 429\u001b[0m }\n\u001b[1;32m 430\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m tool_input\n",
"File \u001b[0;32m~/.pyenv/versions/3.10.5/lib/python3.10/site-packages/pydantic/main.py:526\u001b[0m, in \u001b[0;36mpydantic.main.BaseModel.parse_obj\u001b[0;34m()\u001b[0m\n",
"File \u001b[0;32m~/.pyenv/versions/3.10.5/lib/python3.10/site-packages/pydantic/main.py:341\u001b[0m, in \u001b[0;36mpydantic.main.BaseModel.__init__\u001b[0;34m()\u001b[0m\n",
"\u001b[0;31mValidationError\u001b[0m: 1 validation error for complex_toolSchema\ndict_arg\n field required (type=value_error.missing)"
]
}
],
@@ -201,16 +172,9 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 8,
"id": "8fedb550-683d-45ae-8876-ae7acb332019",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T03:10:26.558131Z",
"iopub.status.busy": "2024-09-11T03:10:26.558031Z",
"iopub.status.idle": "2024-09-11T03:10:27.399844Z",
"shell.execute_reply": "2024-09-11T03:10:27.399201Z"
}
},
"metadata": {},
"outputs": [
{
"name": "stdout",
@@ -222,10 +186,9 @@
"\n",
"raised the following error:\n",
"\n",
"<class 'pydantic_core._pydantic_core.ValidationError'>: 1 validation error for complex_toolSchema\n",
"<class 'pydantic.error_wrappers.ValidationError'>: 1 validation error for complex_toolSchema\n",
"dict_arg\n",
" Field required [type=missing, input_value={'int_arg': 5, 'float_arg': 2.1}, input_type=dict]\n",
" For further information visit https://errors.pydantic.dev/2.8/v/missing\n"
" field required (type=value_error.missing)\n"
]
}
],
@@ -263,16 +226,9 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 10,
"id": "02cc4223-35fa-4240-976a-012299ca703c",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T03:10:27.404122Z",
"iopub.status.busy": "2024-09-11T03:10:27.403539Z",
"iopub.status.idle": "2024-09-11T03:10:38.080547Z",
"shell.execute_reply": "2024-09-11T03:10:38.079955Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
@@ -280,7 +236,7 @@
"10.5"
]
},
"execution_count": 7,
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
@@ -321,16 +277,9 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 11,
"id": "b5659956-9454-468a-9753-a3ff9052b8f5",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T03:10:38.083810Z",
"iopub.status.busy": "2024-09-11T03:10:38.083623Z",
"iopub.status.idle": "2024-09-11T03:10:38.090089Z",
"shell.execute_reply": "2024-09-11T03:10:38.089682Z"
}
},
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.messages import AIMessage, HumanMessage, ToolCall, ToolMessage\n",
@@ -386,16 +335,9 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 12,
"id": "4c45f5bd-cbb4-47d5-b4b6-aec50673c750",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-11T03:10:38.092152Z",
"iopub.status.busy": "2024-09-11T03:10:38.092021Z",
"iopub.status.idle": "2024-09-11T03:10:39.592443Z",
"shell.execute_reply": "2024-09-11T03:10:39.591990Z"
}
},
"metadata": {},
"outputs": [
{
"data": {
@@ -403,7 +345,7 @@
"10.5"
]
},
"execution_count": 9,
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
@@ -459,7 +401,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.10.5"
}
},
"nbformat": 4,


@@ -46,10 +46,9 @@
"\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"if \"OPENAI_API_KEY\" not in os.environ:\n",
" os.environ[\"OPENAI_API_KEY\"] = getpass()\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass()\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-4o-mini\", temperature=0)\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)\n",
"llm_with_tools = llm.bind_tools(tools)"
]
},


@@ -50,19 +50,18 @@
},
{
"cell_type": "code",
"execution_count": null,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"import os\n",
"from getpass import getpass\n",
"\n",
"if \"AI21_API_KEY\" not in os.environ:\n",
" os.environ[\"AI21_API_KEY\"] = getpass()"
]
"os.environ[\"AI21_API_KEY\"] = getpass()"
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
@@ -74,14 +73,14 @@
},
{
"cell_type": "code",
"execution_count": null,
"id": "7c2e19d3-7c58-4470-9e1a-718b27a32056",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
]
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
@@ -116,15 +115,15 @@
},
{
"cell_type": "code",
"execution_count": null,
"id": "c40756fb-cbf8-4d44-a293-3989d707237e",
"metadata": {},
"outputs": [],
"source": [
"from langchain_ai21 import ChatAI21\n",
"\n",
"llm = ChatAI21(model=\"jamba-instruct\", temperature=0)"
]
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
@@ -136,10 +135,8 @@
},
{
"cell_type": "code",
"execution_count": null,
"id": "46b982dc-5d8a-46da-a711-81c03ccd6adc",
"metadata": {},
"outputs": [],
"source": [
"messages = [\n",
" (\n",
@@ -150,7 +147,9 @@
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
@@ -164,7 +163,6 @@
},
{
"cell_type": "code",
"execution_count": null,
"id": "39353473fce5dd2e",
"metadata": {
"collapsed": false,
@@ -172,7 +170,6 @@
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
@@ -194,26 +191,25 @@
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "39c0ccd229927eab",
"metadata": {},
"source": "# Tool Calls / Function Calling"
},
{
"cell_type": "markdown",
"id": "2bf6b40be07fe2d4",
"metadata": {},
"source": "This example shows how to use tool calling with AI21 models:"
},
{
"cell_type": "code",
"execution_count": null,
"id": "a181a28df77120fb",
"metadata": {},
],
"outputs": [],
"execution_count": null
},
{
"metadata": {},
"cell_type": "markdown",
"source": "# Tool Calls / Function Calling",
"id": "39c0ccd229927eab"
},
{
"metadata": {},
"cell_type": "markdown",
"source": "This example shows how to use tool calling with AI21 models:",
"id": "2bf6b40be07fe2d4"
},
{
"metadata": {},
"cell_type": "code",
"source": [
"import os\n",
"from getpass import getpass\n",
@@ -223,8 +219,7 @@
"from langchain_core.tools import tool\n",
"from langchain_core.utils.function_calling import convert_to_openai_tool\n",
"\n",
"if \"AI21_API_KEY\" not in os.environ:\n",
" os.environ[\"AI21_API_KEY\"] = getpass()\n",
"os.environ[\"AI21_API_KEY\"] = getpass()\n",
"\n",
"\n",
"@tool\n",
@@ -281,7 +276,10 @@
" print(f\"Assistant: {llm_answer.content}\")\n",
" else:\n",
" print(f\"Assistant: {response.content}\")"
]
],
"id": "a181a28df77120fb",
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",


@@ -59,8 +59,7 @@
"import getpass\n",
"import os\n",
"\n",
"if \"ANTHROPIC_API_KEY\" not in os.environ:\n",
" os.environ[\"ANTHROPIC_API_KEY\"] = getpass.getpass(\"Enter your Anthropic API key: \")"
"os.environ[\"ANTHROPIC_API_KEY\"] = getpass.getpass(\"Enter your Anthropic API key: \")"
]
},
{
@@ -275,7 +274,7 @@
}
],
"source": [
"from pydantic import BaseModel, Field\n",
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"class GetWeather(BaseModel):\n",


@@ -19,7 +19,7 @@
"\n",
"::: {.callout-warning}\n",
"\n",
"The Anthropic API officially supports tool-calling so this workaround is no longer needed. Please use [ChatAnthropic](/docs/integrations/chat/anthropic) with `langchain-anthropic>=0.1.5`.\n",
"The Anthropic API officially supports tool-calling so this workaround is no longer needed. Please use [ChatAnthropic](/docs/integrations/chat/anthropic) with `langchain-anthropic>=0.1.15`.\n",
"\n",
":::\n",
"\n",
@@ -69,7 +69,7 @@
}
],
"source": [
"from pydantic import BaseModel\n",
"from langchain_core.pydantic_v1 import BaseModel\n",
"\n",
"\n",
"class Person(BaseModel):\n",


@@ -54,8 +54,7 @@
"import os\n",
"from getpass import getpass\n",
"\n",
"if \"ANYSCALE_API_KEY\" not in os.environ:\n",
" os.environ[\"ANYSCALE_API_KEY\"] = getpass()"
"os.environ[\"ANYSCALE_API_KEY\"] = getpass()"
]
},
{


@@ -58,10 +58,7 @@
"import getpass\n",
"import os\n",
"\n",
"if \"AZURE_OPENAI_API_KEY\" not in os.environ:\n",
" os.environ[\"AZURE_OPENAI_API_KEY\"] = getpass.getpass(\n",
" \"Enter your AzureOpenAI API key: \"\n",
" )\n",
"os.environ[\"AZURE_OPENAI_API_KEY\"] = getpass.getpass(\"Enter your AzureOpenAI API key: \")\n",
"os.environ[\"AZURE_OPENAI_ENDPOINT\"] = \"https://YOUR-ENDPOINT.openai.azure.com/\""
]
},


@@ -76,8 +76,7 @@
"import getpass\n",
"import os\n",
"\n",
"if \"CEREBRAS_API_KEY\" not in os.environ:\n",
" os.environ[\"CEREBRAS_API_KEY\"] = getpass.getpass(\"Enter your Cerebras API key: \")"
"os.environ[\"CEREBRAS_API_KEY\"] = getpass.getpass(\"Enter your Cerebras API key: \")"
]
},
{


@@ -90,10 +90,7 @@
"import os\n",
"\n",
"os.environ[\"DATABRICKS_HOST\"] = \"https://your-workspace.cloud.databricks.com\"\n",
"if \"DATABRICKS_TOKEN\" not in os.environ:\n",
" os.environ[\"DATABRICKS_TOKEN\"] = getpass.getpass(\n",
" \"Enter your Databricks access token: \"\n",
" )"
"os.environ[\"DATABRICKS_TOKEN\"] = getpass.getpass(\"Enter your Databricks access token: \")"
]
},
{


@@ -123,8 +123,8 @@
"from dotenv import find_dotenv, load_dotenv\n",
"from langchain_community.chat_models import ChatDeepInfra\n",
"from langchain_core.messages import HumanMessage\n",
"from langchain_core.pydantic_v1 import BaseModel\n",
"from langchain_core.tools import tool\n",
"from pydantic import BaseModel\n",
"\n",
"model_name = \"meta-llama/Meta-Llama-3-70B-Instruct\"\n",
"\n",


@@ -264,7 +264,7 @@
"metadata": {},
"outputs": [],
"source": [
"from pydantic import BaseModel, Field\n",
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"llm = ChatEdenAI(provider=\"openai\", temperature=0.2, max_tokens=500)\n",
"\n",


@@ -47,8 +47,7 @@
"import os\n",
"from getpass import getpass\n",
"\n",
"if \"EVERLYAI_API_KEY\" not in os.environ:\n",
" os.environ[\"EVERLYAI_API_KEY\"] = getpass()"
"os.environ[\"EVERLYAI_API_KEY\"] = getpass()"
]
},
{


@@ -52,8 +52,7 @@
"import getpass\n",
"import os\n",
"\n",
"if \"FIREWORKS_API_KEY\" not in os.environ:\n",
" os.environ[\"FIREWORKS_API_KEY\"] = getpass.getpass(\"Enter your Fireworks API key: \")"
"os.environ[\"FIREWORKS_API_KEY\"] = getpass.getpass(\"Enter your Fireworks API key: \")"
]
},
{


@@ -44,8 +44,7 @@
"import getpass\n",
"import os\n",
"\n",
"if \"FRIENDLI_TOKEN\" not in os.environ:\n",
" os.environ[\"FRIENDLI_TOKEN\"] = getpass.getpass(\"Friendli Personal Access Token: \")"
"os.environ[\"FRIENDLI_TOKEN\"] = getpass.getpass(\"Friendli Personal Access Token: \")"
]
},
{


@@ -47,8 +47,7 @@
"import os\n",
"from getpass import getpass\n",
"\n",
"if \"GIGACHAT_CREDENTIALS\" not in os.environ:\n",
" os.environ[\"GIGACHAT_CREDENTIALS\"] = getpass()"
"os.environ[\"GIGACHAT_CREDENTIALS\"] = getpass()"
]
},
{


@@ -60,8 +60,7 @@
"import getpass\n",
"import os\n",
"\n",
"if \"GOOGLE_API_KEY\" not in os.environ:\n",
" os.environ[\"GOOGLE_API_KEY\"] = getpass.getpass(\"Enter your Google AI API key: \")"
"os.environ[\"GOOGLE_API_KEY\"] = getpass.getpass(\"Enter your Google AI API key: \")"
]
},
{


@@ -50,8 +50,7 @@
"import getpass\n",
"import os\n",
"\n",
"if \"GROQ_API_KEY\" not in os.environ:\n",
" os.environ[\"GROQ_API_KEY\"] = getpass.getpass(\"Enter your Groq API key: \")"
"os.environ[\"GROQ_API_KEY\"] = getpass.getpass(\"Enter your Groq API key: \")"
]
},
{


@@ -483,7 +483,7 @@
"metadata": {},
"outputs": [],
"source": [
"from pydantic import BaseModel, Field\n",
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"class GetWeather(BaseModel):\n",


@@ -226,8 +226,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"from langchain_core.tools import tool\n",
"from pydantic import BaseModel, Field\n",
"\n",
"\n",
"class WeatherInput(BaseModel):\n",
@@ -343,8 +343,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.pydantic_v1 import BaseModel\n",
"from langchain_core.utils.function_calling import convert_to_openai_tool\n",
"from pydantic import BaseModel\n",
"\n",
"\n",
"class Joke(BaseModel):\n",


@@ -52,8 +52,7 @@
"import getpass\n",
"import os\n",
"\n",
"if \"MISTRAL_API_KEY\" not in os.environ:\n",
" os.environ[\"MISTRAL_API_KEY\"] = getpass.getpass(\"Enter your Mistral API key: \")"
"os.environ[\"MISTRAL_API_KEY\"] = getpass.getpass(\"Enter your Mistral API key: \")"
]
},
{


@@ -658,8 +658,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.pydantic_v1 import Field\n",
"from langchain_core.tools import tool\n",
"from pydantic import Field\n",
"\n",
"\n",
"@tool\n",


@@ -426,8 +426,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"from langchain_core.tools import tool\n",
"from pydantic import BaseModel, Field\n",
"\n",
"\n",
"# Define the schema for function arguments\n",


@@ -41,8 +41,7 @@
"import getpass\n",
"import os\n",
"\n",
"if \"YI_API_KEY\" not in os.environ:\n",
" os.environ[\"YI_API_KEY\"] = getpass.getpass(\"Enter your Yi API key: \")"
"os.environ[\"YI_API_KEY\"] = getpass.getpass(\"Enter your Yi API key: \")"
]
},
{


@@ -72,7 +72,7 @@
"source": [
"from enum import Enum\n",
"\n",
"from pydantic import BaseModel, Field\n",
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"class Operation(Enum):\n",
@@ -135,8 +135,8 @@
"source": [
"from pprint import pprint\n",
"\n",
"from langchain_core.pydantic_v1 import BaseModel\n",
"from langchain_core.utils.function_calling import convert_pydantic_to_openai_function\n",
"from pydantic import BaseModel\n",
"\n",
"openai_function_def = convert_pydantic_to_openai_function(Calculator)\n",
"pprint(openai_function_def)"


@@ -39,10 +39,9 @@
"import getpass\n",
"import os\n",
"\n",
"if \"UNSTRUCTURED_API_KEY\" not in os.environ:\n",
" os.environ[\"UNSTRUCTURED_API_KEY\"] = getpass.getpass(\n",
" \"Enter your Unstructured API key: \"\n",
" )"
"os.environ[\"UNSTRUCTURED_API_KEY\"] = getpass.getpass(\n",
" \"Enter your Unstructured API key: \"\n",
")"
]
},
{


@@ -58,8 +58,7 @@
"import os\n",
"from getpass import getpass\n",
"\n",
"if \"AI21_API_KEY\" not in os.environ:\n",
" os.environ[\"AI21_API_KEY\"] = getpass()"
"os.environ[\"AI21_API_KEY\"] = getpass()"
]
},
{


@@ -44,8 +44,7 @@
"import getpass\n",
"import os\n",
"\n",
"if \"DASHSCOPE_API_KEY\" not in os.environ:\n",
" os.environ[\"DASHSCOPE_API_KEY\"] = getpass.getpass(\"DashScope API Key:\")"
"os.environ[\"DASHSCOPE_API_KEY\"] = getpass.getpass(\"DashScope API Key:\")"
]
},
{

Some files were not shown because too many files have changed in this diff.
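
Many of the hunks above toggle the same two-line pattern: prompting for an API key with `getpass` only when the corresponding environment variable is not already set. A minimal sketch of that guard, assuming a hypothetical `ensure_api_key` helper name (the notebooks inline the `if` directly):

```python
import os
from getpass import getpass


def ensure_api_key(var: str, prompt: str) -> None:
    """Prompt for a credential only if it is absent from the environment.

    Re-running a notebook cell, or running in CI with the key already
    exported, then skips the interactive prompt instead of blocking on stdin.
    """
    if var not in os.environ:
        os.environ[var] = getpass(prompt)


# Same shape as the guarded form in the diffs, e.g.:
# ensure_api_key("OPENAI_API_KEY", "Enter your OpenAI API key: ")
```

The guarded form is what the main branch converged on, since the unguarded `os.environ[...] = getpass()` re-prompts on every execution and hangs non-interactive runs.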