Commit Graph

905 Commits

Author SHA1 Message Date
Bagatur
dc42279eb5
core[patch]: fix Typing.cast import (#24313)
Fixes #24287
2024-07-16 16:53:48 +00:00
Leonid Ganeline
5fcf2ef7ca
core: docstrings documents (#23506)
Added missed docstrings. Formatted docstrings to the consistent form.
2024-07-16 10:43:54 -04:00
Shenhai Ran
5f2dea2b20
core[patch]: Add encoding options when creating a prompt template from a file (#24054)
- Uses default utf-8 encoding for loading prompt templates from a file

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-07-16 09:35:09 -04:00
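A quick usage sketch of the encoding option above (the file name is made up; `encoding` as a keyword on `from_file` is assumed from the PR title):
```python
from langchain_core.prompts import PromptTemplate

# "greeting_prompt.txt" is a hypothetical file containing "Hello {name}!".
# utf-8 stays the default; other encodings can be passed explicitly.
prompt = PromptTemplate.from_file("greeting_prompt.txt", encoding="utf-8")
print(prompt.format(name="Zoë"))
```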
Leonid Ganeline
198b85334f
core[patch]: docstrings langchain_core/ files update (#24285)
Added missed docstrings. Formatted docstrings to the consistent form.
2024-07-16 09:21:51 -04:00
Tibor Reiss
1c753d1e81
core[patch]: Update typing for template format to include jinja2 as a Literal (#24144)
Fixes #23929 via adjusting the typing
2024-07-16 09:09:42 -04:00
JP-Ellis
f77659463a
core[patch]: allow message utils to work with lcel (#23743)
The function `convert_to_messages` has had an expansion of the
arguments it can take:

1. Previously, it only could take a `Sequence` in order to iterate over
it. This has been broadened slightly to an `Iterable` (which should have
no other impact).
2. Support for `PromptValue` and `BaseChatPromptTemplate` has been
added. These are generated when combining messages using the overloaded
`+` operator.

Functions which rely on `convert_to_messages` (namely `filter_messages`,
`merge_message_runs` and `trim_messages`) have had the type of their
arguments similarly expanded.

Resolves #23706.

---------

Signed-off-by: JP-Ellis <josh@jpellis.me>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-07-15 08:58:05 -07:00
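A small sketch of the broadened inputs (example is mine, not from the PR): messages combined with `+` produce a chat prompt template, which `convert_to_messages` should now unpack, and plain iterables work as well.
```python
from langchain_core.messages import AIMessage, HumanMessage, convert_to_messages

# Combining messages with "+" yields a BaseChatPromptTemplate, now accepted directly.
combined = HumanMessage("hi") + AIMessage("hello")
print(convert_to_messages(combined))

# Any Iterable of message-like objects is accepted, not just a Sequence.
print(convert_to_messages(m for m in [("human", "hi"), ("ai", "hello")]))
```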
Harold Martin
ccdaf14eff
docs: Spell check fixes (#24217)
**Description:** Spell check fixes for docs, comments, and a couple of
strings. No code change e.g. variable names.
**Issue:** none
**Dependencies:** none
**Twitter handle:** hmartin
2024-07-15 15:51:43 +00:00
Leonid Ganeline
cacdf96f9c
core docstrings tracers update (#24211)
Added missed docstrings. Formatted docstrings to the consistent form.
2024-07-15 11:37:09 -04:00
Leonid Ganeline
36ee083753
core: docstrings utils update (#24213)
Added missed docstrings. Formatted docstrings to the consistent form.
2024-07-15 11:36:00 -04:00
Bagatur
620b118c70
core[patch]: Release 0.2.19 (#24272) 2024-07-15 07:51:30 -07:00
ccurme
888fbc07b5
core[patch]: support passing args_schema through as_tool (#24269)
Note: this allows the schema to be passed in positionally.

```python
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.runnables import RunnableLambda


class Add(BaseModel):
    """Add two integers together."""

    a: int = Field(..., description="First integer")
    b: int = Field(..., description="Second integer")


def add(input: dict) -> int:
    return input["a"] + input["b"]


runnable = RunnableLambda(add)
as_tool = runnable.as_tool(Add)
as_tool.args_schema.schema()
```
```
{'title': 'Add',
 'description': 'Add two integers together.',
 'type': 'object',
 'properties': {'a': {'title': 'A',
   'description': 'First integer',
   'type': 'integer'},
  'b': {'title': 'B', 'description': 'Second integer', 'type': 'integer'}},
 'required': ['a', 'b']}
```
2024-07-15 07:51:05 -07:00
Bagatur
0da5078cad
langchain[minor]: Generic configurable model (#23419)
alternative to
[23244](https://github.com/langchain-ai/langchain/pull/23244). allows
you to use chat model declarative methods

![Screenshot 2024-06-25 at 1 07 10
PM](https://github.com/langchain-ai/langchain/assets/22008038/910d1694-9b7b-46bc-bc2e-3792df9321d6)
2024-07-15 01:11:01 +00:00
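A rough sketch of what the configurable model enables (based on `init_chat_model`; the exact config keys and defaults are assumptions, and a provider package such as langchain-openai must be installed):
```python
from langchain.chat_models import init_chat_model

# No model is fixed up front; it is chosen per call via the config,
# while declarative methods like bind_tools remain available.
configurable_model = init_chat_model(temperature=0)

configurable_model.invoke(
    "what's your name",
    config={"configurable": {"model": "gpt-4o-mini"}},
)
```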
Bagatur
d0728b0ba0
core[patch]: add tool name to tool message (#24243)
Copying current ToolNode behavior
2024-07-15 00:42:40 +00:00
Bagatur
5c3e2612da
core[patch]: Release 0.2.18 (#24230) 2024-07-13 09:14:43 -07:00
Bagatur
65321bf975
core[patch]: fix ToolCall "type" when streaming (#24218) 2024-07-13 08:59:03 -07:00
Eugene Yurtsev
8d82a0d483
core[patch]: Mark GraphVectorStore as beta (#24195)
* This PR marks graph vectorstore as beta
2024-07-12 14:28:06 -04:00
Bagatur
0a1e475a30
core[patch]: Release 0.2.17 (#24189) 2024-07-12 17:08:29 +00:00
Bagatur
6166ea67a8
core[minor]: rename ToolMessage.raw_output -> artifact (#24185) 2024-07-12 09:52:44 -07:00
Leonid Ganeline
aa3e3cfa40
core[patch]: docstrings runnables update (#24161)
Added missed docstrings. Formatted docstrings to the consistent form.
2024-07-12 11:27:06 -04:00
Erick Friis
1132fb801b
core: release 0.2.16 (#24159) 2024-07-11 23:59:41 +00:00
Nuno Campos
1d37aa8403
core: Remove extra newline (#24157) 2024-07-11 23:55:36 +00:00
Bagatur
8d100c58de
core[patch]: Tool accept RunnableConfig (#24143)
Relies on #24038

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-11 22:13:17 +00:00
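A small sketch of a tool that receives the run's RunnableConfig (my own example; the `config` parameter is injected at invocation time and excluded from the tool's input schema):
```python
from langchain_core.runnables import RunnableConfig
from langchain_core.tools import tool


@tool
def show_tags(question: str, config: RunnableConfig) -> str:
    """Echo the question together with the config tags."""
    return f"{question} {config.get('tags', [])}"


print(show_tags.invoke({"question": "hi"}, config={"tags": ["demo"]}))
```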
Bagatur
5fd1e67808
core[minor], integrations...[patch]: Support ToolCall as Tool input and ToolMessage as Tool output (#24038)
Changes:
- ToolCall, InvalidToolCall and ToolCallChunk can all accept a "type"
parameter now
- LLM integration packages add "type" to all the above
- Tool supports ToolCall inputs that have "type" specified
- Tool outputs ToolMessage when a ToolCall is passed as input
- Tools can separately specify ToolMessage.content and
ToolMessage.raw_output
- Tools emit events for validation errors (using on_tool_error and
on_tool_end)

Example:
```python
@tool("structured_api", response_format="content_and_raw_output")
def _mock_structured_tool_with_raw_output(
    arg1: int, arg2: bool, arg3: Optional[dict] = None
) -> Tuple[str, dict]:
    """A Structured Tool"""
    return f"{arg1} {arg2}", {"arg1": arg1, "arg2": arg2, "arg3": arg3}


def test_tool_call_input_tool_message_with_raw_output() -> None:
    tool_call: Dict = {
        "name": "structured_api",
        "args": {"arg1": 1, "arg2": True, "arg3": {"img": "base64string..."}},
        "id": "123",
        "type": "tool_call",
    }
    expected = ToolMessage("1 True", raw_output=tool_call["args"], tool_call_id="123")
    tool = _mock_structured_tool_with_raw_output
    actual = tool.invoke(tool_call)
    assert actual == expected

    tool_call.pop("type")
    with pytest.raises(ValidationError):
        tool.invoke(tool_call)

    actual_content = tool.invoke(tool_call["args"])
    assert actual_content == expected.content
```

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-11 14:54:02 -07:00
Bagatur
eeb996034b
core[patch]: Release 0.2.15 (#24149) 2024-07-11 21:34:25 +00:00
Nuno Campos
03fba07d15
core[patch]: Update styles for mermaid graphs (#24147) 2024-07-11 14:19:36 -07:00
ccurme
8ee8ca7c83
core[patch]: propagate parse_docstring to tool decorator (#24123)
Disabled by default.

```python
from langchain_core.tools import tool

@tool(parse_docstring=True)
def foo(bar: str, baz: int) -> str:
    """The foo.

    Args:
        bar: this is the bar
        baz: this is the baz
    """
    return bar


foo.args_schema.schema()
```
```json
{
  "title": "fooSchema",
  "description": "The foo.",
  "type": "object",
  "properties": {
    "bar": {
      "title": "Bar",
      "description": "this is the bar",
      "type": "string"
    },
    "baz": {
      "title": "Baz",
      "description": "this is the baz",
      "type": "integer"
    }
  },
  "required": [
    "bar",
    "baz"
  ]
}
```
2024-07-11 20:11:45 +00:00
Eugene Yurtsev
4ba14adec6
core[patch]: Clean up indexing test code (#24139)
Refactor the code to use the existing InMemoryVectorStore.

This change is needed for another PR that moves some of the imports
around (and messes up the mock.patch in this file)
2024-07-11 18:54:46 +00:00
ccurme
122e80e04d
core[patch]: add versionadded to as_tool (#24138) 2024-07-11 18:08:08 +00:00
Erick Friis
c4417ea93c
core: release 0.2.14, remove poetry 1.7 incompatible flag from root (#24137) 2024-07-11 17:59:51 +00:00
Nuno Campos
2428984205
core: Add metadata to graph json repr (#24131)
2024-07-11 17:23:52 +00:00
Nuno Campos
3e454d7568
core: fix docstring (#24129) 2024-07-11 16:38:14 +00:00
Nuno Campos
ee3fe20af4
core: mermaid: Render metadata key-value pairs when drawing mermaid graph (#24103)
- if node is runnable binding with metadata attached
2024-07-11 16:22:23 +00:00
Eugene Yurtsev
dc131ac42a
core[minor]: Add dispatching for custom events (#24080)
This PR allows dispatching ad hoc events for a given run.

# Context

This PR allows users to send arbitrary data to the callback system and
to the astream events API from within a given runnable. This can be
extremely useful to surface custom information to end users about
progress etc.

Integration with the LangSmith tracer will be done separately since the data
cannot currently be visualized. It'll be accommodated using the events
attribute of the Run.

# Examples with astream events

```python
from langchain_core.callbacks import adispatch_custom_event
from langchain_core.tools import tool

@tool
async def foo(x: int) -> int:
    """Foo"""
    await adispatch_custom_event("event1", {"x": x})
    await adispatch_custom_event("event2", {"x": x})
    return x + 1

async for event in foo.astream_events({'x': 1}, version='v2'):
    print(event)
```

```python
{'event': 'on_tool_start', 'data': {'input': {'x': 1}}, 'name': 'foo', 'tags': [], 'run_id': 'fd6fb7a7-dd37-4191-962c-e43e245909f6', 'metadata': {}, 'parent_ids': []}
{'event': 'on_custom_event', 'run_id': 'fd6fb7a7-dd37-4191-962c-e43e245909f6', 'name': 'event1', 'tags': [], 'metadata': {}, 'data': {'x': 1}, 'parent_ids': []}
{'event': 'on_custom_event', 'run_id': 'fd6fb7a7-dd37-4191-962c-e43e245909f6', 'name': 'event2', 'tags': [], 'metadata': {}, 'data': {'x': 1}, 'parent_ids': []}
{'event': 'on_tool_end', 'data': {'output': 2}, 'run_id': 'fd6fb7a7-dd37-4191-962c-e43e245909f6', 'name': 'foo', 'tags': [], 'metadata': {}, 'parent_ids': []}
```

```python
from langchain_core.callbacks import adispatch_custom_event
from langchain_core.runnables import RunnableLambda

@RunnableLambda
async def foo(x: int) -> int:
    """Foo"""
    await adispatch_custom_event("event1", {"x": x})
    await adispatch_custom_event("event2", {"x": x})
    return x + 1

async for event in foo.astream_events(1, version='v2'):
    print(event)
```

```python
{'event': 'on_chain_start', 'data': {'input': 1}, 'name': 'foo', 'tags': [], 'run_id': 'ce2beef2-8608-49ea-8eba-537bdaafb8ec', 'metadata': {}, 'parent_ids': []}
{'event': 'on_custom_event', 'run_id': 'ce2beef2-8608-49ea-8eba-537bdaafb8ec', 'name': 'event1', 'tags': [], 'metadata': {}, 'data': {'x': 1}, 'parent_ids': []}
{'event': 'on_custom_event', 'run_id': 'ce2beef2-8608-49ea-8eba-537bdaafb8ec', 'name': 'event2', 'tags': [], 'metadata': {}, 'data': {'x': 1}, 'parent_ids': []}
{'event': 'on_chain_stream', 'run_id': 'ce2beef2-8608-49ea-8eba-537bdaafb8ec', 'name': 'foo', 'tags': [], 'metadata': {}, 'data': {'chunk': 2}, 'parent_ids': []}
{'event': 'on_chain_end', 'data': {'output': 2}, 'run_id': 'ce2beef2-8608-49ea-8eba-537bdaafb8ec', 'name': 'foo', 'tags': [], 'metadata': {}, 'parent_ids': []}
```

# Examples with handlers 

This is copy pasted from unit tests

```python
    class CustomCallbackManager(BaseCallbackHandler):
        def __init__(self) -> None:
            self.events: List[Any] = []

        def on_custom_event(
            self,
            name: str,
            data: Any,
            *,
            run_id: UUID,
            tags: Optional[List[str]] = None,
            metadata: Optional[Dict[str, Any]] = None,
            **kwargs: Any,
        ) -> None:
            assert kwargs == {}
            self.events.append(
                (
                    name,
                    data,
                    run_id,
                    tags,
                    metadata,
                )
            )

    callback = CustomCallbackManager()

    run_id = uuid.UUID(int=7)

    @RunnableLambda
    def foo(x: int, config: RunnableConfig) -> int:
        dispatch_custom_event("event1", {"x": x})
        dispatch_custom_event("event2", {"x": x}, config=config)
        return x

    foo.invoke(1, {"callbacks": [callback], "run_id": run_id})

    assert callback.events == [
        ("event1", {"x": 1}, UUID("00000000-0000-0000-0000-000000000007"), [], {}),
        ("event2", {"x": 1}, UUID("00000000-0000-0000-0000-000000000007"), [], {}),
    ]
```
2024-07-11 02:25:12 +00:00
Erick Friis
6ea6f9f7bc
core: release 0.2.13 (#24096) 2024-07-10 16:39:15 -07:00
ccurme
975b6129f6
core[patch]: support conversion of runnables to tools (#23992)
Open to other thoughts on UX.

string input:
```python
as_tool = retriever.as_tool()
as_tool.invoke("cat")  # [Document(...), ...]
```

typed dict input:
```python
class Args(TypedDict):
    key: int

def f(x: Args) -> str:
    return str(x["key"] * 2)

as_tool = RunnableLambda(f).as_tool(
    name="my tool",
    description="description",  # name, description are inferred if not supplied
)
as_tool.invoke({"key": 3})  # "6"
```

for untyped dict input, allow specification of parameters + types
```python
def g(x: Dict[str, Any]) -> str:
    return str(x["key"] * 2)

as_tool = RunnableLambda(g).as_tool(arg_types={"key": int})
result = as_tool.invoke({"key": 3})  # "6"
```

Passing the `arg_types` is slightly awkward but necessary to ensure tool
calls populate parameters correctly:
```python
from typing import Any, Dict

from langchain_core.runnables import RunnableLambda
from langchain_openai import ChatOpenAI


def f(x: Dict[str, Any]) -> str:
    return str(x["key"] * 2)

runnable = RunnableLambda(f)
as_tool = runnable.as_tool(arg_types={"key": int})

llm = ChatOpenAI().bind_tools([as_tool])

result = llm.invoke("Use the tool on 3.")
tool_call = result.tool_calls[0]
args = tool_call["args"]
assert args == {"key": 3}

as_tool.run(args)
```

Contrived (?) example with langgraph agent as a tool:
```python
from typing import List, Literal
from typing_extensions import TypedDict

from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent


llm = ChatOpenAI(temperature=0)


def magic_function(input: int) -> int:
    """Applies a magic function to an input."""
    return input + 2


agent_1 = create_react_agent(llm, [magic_function])


class Message(TypedDict):
    role: Literal["human"]
    content: str

agent_tool = agent_1.as_tool(
    arg_types={"messages": List[Message]},
    name="Jeeves",
    description="Ask Jeeves.",
)

agent_2 = create_react_agent(llm, [agent_tool])
```

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-10 19:29:59 -04:00
Bagatur
6928f4c438
core[minor]: Add ToolMessage.raw_output (#23994)
Decisions to discuss:
1.  is a new attr needed or could additional_kwargs be used for this
2. is raw_output a good name for this attr
3. should raw_output default to {} or None
4. should raw_output be included in serialization
5. do we need to update repr/str to  exclude raw_output
2024-07-10 20:11:10 +00:00
William FH
1e1fd30def
[Core] Fix fstring in logger warning (#24043) 2024-07-09 19:53:18 -07:00
Nuno Campos
859e434932
core: Speed up json parse for large strings (#24036)
for a large string:
- old: 4.657918874989264
- new: 0.023724667000351474 (roughly 196x faster)
2024-07-09 12:26:50 -07:00
Nuno Campos
160fc7f246
core: Move json parsing in base chat model / output parser to bg thread (#24031)
- add version of AIMessageChunk.__add__ that can add many chunks,
instead of only 2
- In agenerate_from_stream merge and parse chunks in bg thread
- In output parse base classes do more work in bg threads where
appropriate

---------

Co-authored-by: William FH <13333726+hinthornw@users.noreply.github.com>
2024-07-09 12:26:36 -07:00
Erick Friis
bedd893cd1
core: release 0.2.12 (#23991) 2024-07-08 21:29:29 +00:00
Eugene Yurtsev
f765e8fa9d
core[minor],community[patch],standard-tests[patch]: Move InMemoryImplementation to langchain-core (#23986)
This PR moves the in memory implementation to langchain-core.

* The implementation remains importable from langchain-community.
* Supporting utilities are marked as private for now.
2024-07-08 14:11:51 -07:00
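After this move the in-memory vectorstore should be importable from langchain-core directly; a quick sketch (the fake embedding class is used purely for illustration):
```python
from langchain_core.documents import Document
from langchain_core.embeddings import DeterministicFakeEmbedding
from langchain_core.vectorstores import InMemoryVectorStore

store = InMemoryVectorStore(embedding=DeterministicFakeEmbedding(size=8))
store.add_documents([Document(page_content="hello world")])
print(store.similarity_search("hello", k=1))
```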
Eugene Yurtsev
2c180d645e
core[minor],community[minor]: Upgrade all @root_validator() to @pre_init (#23841)
This PR introduces a @pre_init decorator that's a @root_validator(pre=True) but with all the defaults populated!
2024-07-08 16:09:29 -04:00
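A minimal sketch of the decorator's behavior as described above (field names are illustrative):
```python
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.utils import pre_init


class Config(BaseModel):
    timeout: int = 10
    retries: int = 0

    @pre_init
    def derive_retries(cls, values: dict) -> dict:
        # Unlike @root_validator(pre=True), defaults such as timeout=10
        # are already present in `values` here.
        values["retries"] = values["timeout"] // 5
        return values


print(Config())  # timeout=10 retries=2
```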
Eugene Yurtsev
9787552b00
core[patch]: Use InMemoryChatMessageHistory in unit tests (#23916)
Update unit test to use the existing implementation of chat message
history
2024-07-05 20:10:54 +00:00
Eugene Yurtsev
e0186df56b
core[patch]: Clarify upsert response semantics (#23921) 2024-07-05 15:59:47 -04:00
Eugene Yurtsev
5b7d5f7729
core[patch]: Add comment to clarify aadd_documents (#23920)
Add a comment to clarify how `add_documents` works
2024-07-05 15:20:16 -04:00
ccurme
74c7198906
core, anthropic[patch]: support streaming tool calls when function has no arguments (#23915)
resolves https://github.com/langchain-ai/langchain/issues/23911

When an AIMessageChunk is instantiated, we attempt to parse tool calls
off of the tool_call_chunks.

Here we add a special-case to this parsing, where `""` will be parsed as
`{}`.

This is a reaction to how Anthropic streams tool calls in the case where
a function has no arguments:
```
{'id': 'toolu_01J8CgKcuUVrMqfTQWPYh64r', 'input': {}, 'name': 'magic_function', 'type': 'tool_use', 'index': 1}
{'partial_json': '', 'type': 'tool_use', 'index': 1}
```
The `partial_json` does not accumulate to a valid JSON string; most
other providers tend to emit `"{}"` in this case.
2024-07-05 18:57:41 +00:00
Christophe Bornet
42d049f618
core[minor]: Add Graph Store component (#23092)
This PR introduces a GraphStore component. GraphStore extends
VectorStore with the concept of links between documents based on
document metadata. This allows linking documents based on a variety of
techniques, including common keywords, explicit links in the content,
and other patterns.

This works with existing Documents, so it’s easy to extend existing
VectorStores to be used as GraphStores. The interface can be implemented
for any Vector Store technology that supports metadata, not only graph
DBs.

When retrieving documents for a given query, the first level of search
is done using classical similarity search. Next, links may be followed
using various traversal strategies to get additional documents. This
allows documents to be retrieved that aren’t directly similar to the
query but contain relevant information.

Two retrieval methods are added on top of the VectorStore ones:
* traversal_search, which gets all linked documents up to a certain depth
* mmr_traversal_search, which selects linked documents using an MMR
algorithm to produce more diverse results.

If a depth of retrieval of 0 is used, GraphStore is effectively a
VectorStore. It enables an easy transition from a simple VectorStore to
GraphStore by adding links between documents as a second step.

An implementation for Apache Cassandra is also proposed.

See
https://github.com/datastax/ragstack-ai/blob/main/libs/knowledge-store/notebooks/astra_support.ipynb
for a notebook explaining how to use GraphStore and showing that it can
correctly answer questions that a simple VectorStore cannot.

**Twitter handle:** _cbornet
2024-07-05 12:24:10 -04:00
Leonid Ganeline
77f5fc3d55
core: docstrings load (#23787)
Added missed docstrings. Formatted docstrings to the consistent form.
2024-07-05 12:23:19 -04:00
Eugene Yurtsev
6f08e11d7c
core[minor]: add upsert, streaming_upsert, aupsert, astreaming_upsert methods to the VectorStore abstraction (#23774)
This PR rolls out part of the new proposed interface for vectorstores
(https://github.com/langchain-ai/langchain/pull/23544) to existing store
implementations.

The PR makes the following changes:

1. Adds standard upsert, streaming_upsert, aupsert, astreaming_upsert
methods to the vectorstore.
2. Updates `add_texts` and `aadd_texts` to be non-required, with a
default implementation that delegates to `upsert` and `aupsert` if those
have been implemented. The original `add_texts` and `aadd_texts` methods
are problematic as they spread object-specific information across
documents and **kwargs (e.g., ids are not part of the document).
3. Adds a default implementation to `add_documents` and `aadd_documents`
that delegates to `upsert` and `aupsert` respectively.
4. Adds standard unit tests to verify that a given vectorstore
implements a correct read/write API.

A downside of this implementation is that it creates `upsert` with a
very similar signature to `add_documents`.
The reason for introducing `upsert` is to:
* Remove any ambiguities about what information is allowed in `kwargs`.
Specifically kwargs should only be used for information common to all
indexed data. (e.g., indexing timeout).
* Allow inheriting from an anticipated generalized interface for indexing
that will allow indexing `BaseMedia` (i.e., allow making a vectorstore
for images/audio etc.)
 
`add_documents` can be deprecated in the future in favor of `upsert` to
make sure that users have a single correct way of indexing content.

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-07-05 12:21:40 -04:00
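A toy illustration of the delegation described in points 2 and 3 (not the actual langchain-core code): `add_documents` keeps its signature but routes through `upsert`, which owns the ID handling.
```python
import uuid
from typing import List, Optional, Sequence


class Document:
    def __init__(self, page_content: str, id: Optional[str] = None):
        self.page_content = page_content
        self.id = id


class ToyVectorStore:
    def __init__(self) -> None:
        self.docs: dict = {}

    def upsert(self, items: Sequence[Document], /) -> List[str]:
        # IDs live on the documents themselves instead of being smuggled in **kwargs.
        ids = [item.id or str(uuid.uuid4()) for item in items]
        for id_, item in zip(ids, items):
            self.docs[id_] = item
        return ids

    def add_documents(self, documents: List[Document], **kwargs) -> List[str]:
        # The default implementation simply delegates to upsert().
        return self.upsert(documents)
```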
G Sreejith
3c752238c5
core[patch]: Fix typo in docstring (graphm -> graph) (#23910)
Changes have been made as per the request:
replaced `graphm` with `graph`.
2024-07-05 16:20:33 +00:00
Leonid Ganeline
12c92b6c19
core: docstrings outputs (#23889)
Added missed docstrings. Formatted docstrings to the consistent form.
2024-07-05 12:18:17 -04:00
Leonid Ganeline
1eca98ec56
core: docstrings prompts (#23890)
Added missed docstrings. Formatted docstrings to the consistent form.
2024-07-05 12:17:52 -04:00
Mohammad Mohtashim
2274d2b966
core[patch]: Accounting for Optional Input Variables in BasePromptTemplate (#22851)
**Description**: After reviewing the prompts API, it is clear that the
only way a user can explicitly mark an input variable as optional is
through the `MessagePlaceholder.optional` attribute. Otherwise, the user
must explicitly pass in the `input_variables` expected to be used in the
`BasePromptTemplate`, which will be validated upon execution. Therefore,
to semantically handle a `MessagePlaceholder` `variable_name` as
optional, we will treat the `variable_name` of `MessagePlaceholder` as a
`partial_variable` if it has been marked as optional. This approach
aligns with how the `variable_name` of `MessagePlaceholder` is already
handled
[here](https://github.com/keenborder786/langchain/blob/optional_input_variables/libs/core/langchain_core/prompts/chat.py#L991).
Additionally, an attribute `optional_variable` has been added to
`BasePromptTemplate`, and the `variable_name` of `MessagePlaceholder` is
also made part of `optional_variable` when marked as optional.

Moreover, the `get_input_schema` method has been updated for
`BasePromptTemplate` to differentiate between optional and non-optional
variables.

**Issue**: #22832, #21425

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-07-05 15:49:40 +00:00
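A small example of the optional-placeholder case this change accounts for (prompt content is made up):
```python
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant."),
        MessagesPlaceholder("history", optional=True),
        ("human", "{question}"),
    ]
)

# "history" is optional, so it does not need to be supplied when formatting.
print(prompt.invoke({"question": "What is LangChain?"}))
```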
Eugene Yurtsev
9ccc4b1616
core[patch]: Fix logic in BaseChatModel that processes the llm string that is used as a key for caching chat models responses (#23842)
This PR should fix the following issue:
https://github.com/langchain-ai/langchain/issues/23824
Introduced as part of this PR:
https://github.com/langchain-ai/langchain/pull/23416

I am unable to reproduce the issue locally though it's clear that we're
getting a `serialized` object which is not a dictionary somehow.

The test below passes for me prior to the PR as well

```python

def test_cache_with_sqllite() -> None:
    from langchain_community.cache import SQLiteCache

    from langchain_core.globals import set_llm_cache

    cache = SQLiteCache(database_path=".langchain.db")
    set_llm_cache(cache)
    chat_model = FakeListChatModel(responses=["hello", "goodbye"], cache=True)
    assert chat_model.invoke("How are you?").content == "hello"
    assert chat_model.invoke("How are you?").content == "hello"
```
2024-07-03 16:23:55 -04:00
Vadym Barda
9bb623381b
core[minor]: update conversion utils to handle RemoveMessage (#23840) 2024-07-03 16:13:31 -04:00
Eugene Yurtsev
4ab78572e7
core[patch]: Speed up unit tests for imports (#23837)
Speed up unit tests for imports
2024-07-03 15:55:15 -04:00
Théo Deschamps
39b19cf764
core[patch]: extract input variables for path and detail keys in order to format an ImagePromptTemplate (#22613)
- Description: Add support for `path` and `detail` keys in
`ImagePromptTemplate`. Previously, only variables associated with the
`url` key were considered. This PR allows for the inclusion of a local
image path and a detail parameter as input to the format method.
- Issues:
    - fixes #20820 
    - related to #22024 
- Dependencies: None
- Twitter handle: @DeschampsTho5

---------

Co-authored-by: tdeschamps <tdeschamps@kameleoon.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2024-07-03 18:58:42 +00:00
Leonid Ganeline
55f6f91f17
core[patch]: docstrings output_parsers (#23825)
Added missed docstrings. Formatted docstrings to the consistent form.
2024-07-03 14:27:40 -04:00
Bagatur
a0c2281540
infra: update mypy 1.10, ruff 0.5 (#23721)
```python
"""python scripts/update_mypy_ruff.py"""
import glob
import tomllib
from pathlib import Path

import toml
import subprocess
import re

ROOT_DIR = Path(__file__).parents[1]


def main():
    for path in glob.glob(str(ROOT_DIR / "libs/**/pyproject.toml"), recursive=True):
        print(path)
        with open(path, "rb") as f:
            pyproject = tomllib.load(f)
        try:
            pyproject["tool"]["poetry"]["group"]["typing"]["dependencies"]["mypy"] = (
                "^1.10"
            )
            pyproject["tool"]["poetry"]["group"]["lint"]["dependencies"]["ruff"] = (
                "^0.5"
            )
        except KeyError:
            continue
        with open(path, "w") as f:
            toml.dump(pyproject, f)
        cwd = "/".join(path.split("/")[:-1])
        completed = subprocess.run(
            "poetry lock --no-update; poetry install --with typing; poetry run mypy . --no-color",
            cwd=cwd,
            shell=True,
            capture_output=True,
            text=True,
        )
        logs = completed.stdout.split("\n")

        to_ignore = {}
        for l in logs:
            if re.match("^(.*)\:(\d+)\: error:.*\[(.*)\]", l):
                path, line_no, error_type = re.match(
                    "^(.*)\:(\d+)\: error:.*\[(.*)\]", l
                ).groups()
                if (path, line_no) in to_ignore:
                    to_ignore[(path, line_no)].append(error_type)
                else:
                    to_ignore[(path, line_no)] = [error_type]
        print(len(to_ignore))
        for (error_path, line_no), error_types in to_ignore.items():
            all_errors = ", ".join(error_types)
            full_path = f"{cwd}/{error_path}"
            try:
                with open(full_path, "r") as f:
                    file_lines = f.readlines()
            except FileNotFoundError:
                continue
            file_lines[int(line_no) - 1] = (
                file_lines[int(line_no) - 1][:-1] + f"  # type: ignore[{all_errors}]\n"
            )
            with open(full_path, "w") as f:
                f.write("".join(file_lines))

        subprocess.run(
            "poetry run ruff format .; poetry run ruff --select I --fix .",
            cwd=cwd,
            shell=True,
            capture_output=True,
            text=True,
        )


if __name__ == "__main__":
    main()

```
2024-07-03 10:33:27 -07:00
William FH
6cd56821dc
[Core] Unify function schema parsing (#23370)
Use pydantic to infer nested schemas and all that fun.
Include bagatur's convenient docstring parser
Include annotation support


Previously we didn't adequately support many typehints in the
bind_tools() method on raw functions (like optionals/unions, nested
types, etc.)
2024-07-03 09:55:38 -07:00
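A quick sketch of the kind of signature this should now handle in bind_tools() / convert_to_openai_function (the example function is mine):
```python
from typing import Annotated, List, Optional

from langchain_core.utils.function_calling import convert_to_openai_function


def search_library(
    query: Annotated[str, "free-text search query"],
    tags: Optional[List[str]] = None,
    limit: int = 10,
) -> str:
    """Search the library catalogue."""
    return query


print(convert_to_openai_function(search_library))
```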
Leonid Ganeline
716a316654
core: docstrings indexing (#23785)
Added missed docstrings. Formatted docstrings to the consistent form.
2024-07-03 11:27:34 -04:00
Leonid Ganeline
30fdc2dbe7
core: docstrings messages (#23788)
Added missed docstrings. Formatted docstrings to the consistent form.
2024-07-03 11:25:00 -04:00
Bagatur
7a3d8e5a99
core[patch]: Release 0.2.11 (#23780) 2024-07-02 17:35:57 -04:00
Bagatur
d677dadf5f
core[patch]: mark RemoveMessage beta (#23656) 2024-07-02 21:27:21 +00:00
SN
acc457f645
core[patch]: fix nested sections for mustache templating (#23747)
The prompt template variable detection only worked for singly-nested
sections because we just kept track of whether we were in a section and
then set that to false as soon as we encountered an end block. i.e. the
following:

```
{{#outerSection}}
    {{variableThatShouldntShowUp}}
    {{#nestedSection}}
        {{nestedVal}}
    {{/nestedSection}}
    {{anotherVariableThatShouldntShowUp}}
{{/outerSection}}
```

Would yield `['outerSection', 'anotherVariableThatShouldntShowUp']` as
input_variables (whereas it should just yield `['outerSection']`). This
fixes that by keeping track of the current depth and using a stack.
2024-07-02 10:20:45 -07:00
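A sketch of how the fix surfaces through the public API (template shortened from the example above):
```python
from langchain_core.prompts import ChatPromptTemplate

template = "{{#outerSection}}{{nestedVal}}{{/outerSection}}"
prompt = ChatPromptTemplate.from_template(template, template_format="mustache")

# With depth tracking, only the top-level section name is reported.
print(prompt.input_variables)  # ['outerSection']
```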
Eugene Yurtsev
ebcee4f610
core[patch]: Add versionadded to get_by_ids (#23728) 2024-07-01 15:16:00 -04:00
Eugene Yurtsev
e800f6bb57
core[minor]: Create BaseMedia object (#23639)
This PR implements a BaseContent object from which Document and Blob
objects will inherit proposed here:
https://github.com/langchain-ai/langchain/pull/23544

Alternative: Create a base object that only has an identifier and no
metadata.

For now, we decided against it, since that refactor can be done at a later
time. It also feels a bit odd since our IDs are optional at the moment.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-01 15:07:30 -04:00
Nuno Campos
b36e95caa9
core[patch]: use async messages where possible (#23718)
Fix #23716

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-07-01 18:33:05 +00:00
Spyros Avlonitis
8cfb2fa1b7
core[minor]: Add maxsize for InMemoryCache (#23405)
This PR introduces a maxsize parameter for the InMemoryCache class,
allowing users to specify the maximum number of items to store in the
cache. If the cache exceeds the specified maximum size, the oldest items
are removed. Additionally, comprehensive unit tests have been added to
ensure all functionalities are thoroughly tested. The tests are written
using pytest and cover both synchronous and asynchronous methods.

Twitter: @spyrosavl

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-07-01 14:21:21 -04:00
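A small sketch of the new parameter (eviction semantics as described above):
```python
from langchain_core.caches import InMemoryCache
from langchain_core.outputs import Generation

cache = InMemoryCache(maxsize=2)
cache.update("prompt-1", "llm-config", [Generation(text="a")])
cache.update("prompt-2", "llm-config", [Generation(text="b")])
cache.update("prompt-3", "llm-config", [Generation(text="c")])  # evicts the oldest entry

print(cache.lookup("prompt-1", "llm-config"))  # None
```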
Eugene Yurtsev
b5aef4cf97
core[patch]: Fix llm string representation for serializable models (#23416)
Fix LLM string representation for serializable objects.

Fix for issue: https://github.com/langchain-ai/langchain/issues/23257

The llm string of serializable chat models is the serialized
representation of the object. LangChain serialization dumps some basic
information about non-serializable objects, including their repr(), which
includes an object id.

This means that if a chat model has any non-serializable fields (e.g., a
cache), then any new instantiation of those fields will change the
llm representation of the chat model and cause cache misses.

I.e., re-instantiating a postgres cache would result in cache misses!
2024-07-01 14:06:33 -04:00
nobbbbby
3904f2cd40
core: fix NameError (#23658)
**Description:** In the chat_models module of the language model, the
import statement for BaseModel has been moved from the conditionally
imported section to the main import area, fixing a `NameError`.
**Issue:** fixes a `NameError`
2024-07-01 17:51:23 +00:00
Eugene Yurtsev
4f1821db3e
core[minor]: Add get_by_ids to vectorstore interface (#23594)
This PR adds a part of the indexing API proposed in this RFC
https://github.com/langchain-ai/langchain/pull/23544/files.

It allows rolling out `get_by_ids` which should be uncontroversial to
existing vectorstores without introducing new abstractions.

The semantics for this method depend on the ability of identifying
returned documents using the new optional ID field on documents:
https://github.com/langchain-ai/langchain/pull/23411

Alternatives are:

1. Relax the sequence requirement

```python
def get_by_ids(self, ids: Iterable[str], /) -> Iterable[Document]:
```

Rejected:
- implementations are more likely to start batching with bad defaults
- users would need to call list() or we'd need to introduce another
convenience method

2. Support more kwargs

```python

def get_by_ids(self, ids: Sequence[str], /, **kwargs) -> List[Document]:
...
```

Rejected: 
- No need for `batch` parameter since IDs is a sequence
- Output cannot be customized since `Document` is fixed. (e.g.,
parameters could be useful to grab extra metadata like the vector that
was indexed with the Document or to project a part of the document)
2024-07-01 13:04:33 -04:00
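A rough usage sketch of the adopted signature, assuming a store that implements the new method (the in-memory store and fake embeddings are for illustration only):
```python
from langchain_core.documents import Document
from langchain_core.embeddings import DeterministicFakeEmbedding
from langchain_core.vectorstores import InMemoryVectorStore

store = InMemoryVectorStore(embedding=DeterministicFakeEmbedding(size=8))
store.add_documents([Document(page_content="hello", id="doc-1")])

# get_by_ids takes a Sequence[str] and returns a List[Document].
print(store.get_by_ids(["doc-1"]))
```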
Vadym Barda
e8d77002ea
core: add RemoveMessage (#23636)
This change adds a new message type `RemoveMessage`. This will enable
`langgraph` users to manually modify graph state (or have the graph
nodes modify the state) to remove messages by `id`

Examples:

* allow users to delete messages from state by calling

```python
graph.update_state(config, values=[RemoveMessage(id=state.values[-1].id)])
```

* allow nodes to delete messages

```python
graph.add_node("delete_messages", lambda state: [RemoveMessage(id=state[-1].id)])
```
2024-06-28 14:40:02 -07:00
Jacob Lee
a032583b17
docs[patch]: Update diagrams (#23613) 2024-06-28 12:36:00 -07:00
Leonid Ganeline
75a44fe951
core: chat_* docstrings (#23412)
Added missed docstrings. Formatted docstrings to the consistent form.
2024-06-27 17:29:38 -04:00
Eugene Yurtsev
da7beb1c38
core[patch]: Add unit test when catching generator exit (#23402)
This PR adds a unit test for:
https://github.com/langchain-ai/langchain/pull/22662
And narrows the scope where the exception is caught.
2024-06-27 20:36:07 +00:00
Eugene Yurtsev
96b72edac8
core[minor]: Add optional ID field to Document schema (#23411)
This PR adds an optional ID field to the document schema.

# 1. Optional or Required

- An optional field will require additional checking for the type
in user code (annoying).
- However, vectorstores currently don't respect this field. So if we
make it required and start returning random UUIDs, that might be even
more confusing to users.


**Proposal**: Start with Optional and convert to Required (with default
set to uuid4()) in 1-2 major releases.


# 2. Override __str__ or generic solution in prompts

Overriding __str__ as a simple way to avoid changing user code that
relies on
default str(document) in prompts. 


I considered rolling out a more general solution in prompts
(https://github.com/langchain-ai/langchain/pull/8685),
but to do that we need to:

1. Make things serializable
2. The more general solution would likely need to be backwards
compatible as well
3. It's unclear that one wants to format a List[int] in the same way as
List[Document]. The former should be `,` separated (likely), the latter
   should be `---` separated (likely).


**Proposal**: Start with the __str__ override and focus on the vectorstore
APIs; we can generalize prompts later.
2024-06-27 12:15:58 -04:00
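The new field in use (a one-liner sketch):
```python
from langchain_core.documents import Document

doc = Document(page_content="hello world", id="doc-1")
print(doc.id)  # "doc-1" when provided, None otherwise
# __str__ is overridden so that adding an id does not change prompt rendering.
print(str(doc))
```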
Jacob Lee
60fc15a56b
docs[patch]: Update docs introduction and README (#23558)
CC @hwchase17 @baskaryan
2024-06-27 08:51:43 -07:00
Leonid Ganeline
2c9b84c3a8
core[patch]: docstrings agents (#23502)
Added missed docstrings. Formatted docstrings to the consistent form.
2024-06-26 17:50:48 -04:00
Leonid Ganeline
2a5d59b3d7
core[patch]: callbacks docstrings (#23375)
Added missed docstrings. Formatted docstrings to the consistent form.
2024-06-26 17:11:06 -04:00
Leonid Ganeline
1141b08eb8
core: docstrings example_selectors (#23542)
Added missed docstrings. Formatted docstrings to the consistent form.
2024-06-26 17:10:40 -04:00
Bagatur
32f8f39974
core[patch]: use args_schema doc for tool description (#23503) 2024-06-25 15:26:35 -07:00
ccurme
86ca44d451
core: release 0.2.10 (#23420) 2024-06-25 16:26:31 -04:00
Isaac Francisco
85f5d14cef
[docs]: split up tool docs (#22919) 2024-06-25 13:15:08 -07:00
William FH
8955bc1866
[Core] Logging: Suppress missing parent warning (#23363) 2024-06-25 14:57:23 -04:00
ccurme
730c551819
core[patch]: export tool output parsers from langchain_core.output_parsers (#23305)
These currently read off AIMessage.tool_calls, and only fall back to
OpenAI parsing if tool calls aren't populated.

Importing these from `openai_tools` (e.g., in our [tool calling
docs](https://python.langchain.com/v0.2/docs/how_to/tool_calling/#tool-calls))
can lead to confusion.

After landing, would need to release core and update docs.
2024-06-25 14:40:42 -04:00
Eugene Yurtsev
7e9e69c758
core[patch]: Add unit test for str and repr for Document (#23414) 2024-06-25 18:28:21 +00:00
Riccardo Schirone
4530d851e4
Merge pull request #22662
* core: runnables: special handling of GeneratorExit because it is not an error
2024-06-25 08:42:03 -04:00
William FH
efb4c12abe
[Core] Add support for inferring Annotated types (#23284)
in bind_tools() / convert_to_openai_function
2024-06-21 15:16:30 -07:00
Vadym Barda
9ac302cb97
core[minor]: update draw_mermaid node label processing (#23285)
This fixes a processing issue for nodes with numbers in their labels (e.g.
`"node_1"`, which would previously be relabeled as `"node__"` and is now
correctly processed as `"node_1"`).
2024-06-21 21:35:32 +00:00
Bagatur
f824f6d925
docs: fix merge message runs docstring (#23279) 2024-06-21 19:50:50 +00:00
Bagatur
9eda8f2fe8
docs: fix trim_messages code blocks (#23271) 2024-06-21 17:15:31 +00:00
Bagatur
4c97a9ee53
docs: fix message transformer docstrings (#23264) 2024-06-21 16:10:03 +00:00
Brace Sproul
abe7566d7d
core[minor]: BaseChatModel with_structured_output implementation (#22859) 2024-06-21 08:14:03 -07:00
mackong
360a70c8a8
core[patch]: fix no current event loop for sql history in async mode (#22933)
- **Description:** When using
RunnableWithMessageHistory/SQLChatMessageHistory in async mode, we'll
get the following error:
```
Error in RootListenersTracer.on_chain_end callback: RuntimeError("There is no current event loop in thread 'asyncio_3'.")
```
which is thrown by
ddfbca38df/libs/community/langchain_community/chat_message_histories/sql.py (L259),
and no message history will be added to the database.

In this patch, a new _aexit_history function, which will be called in
async mode, is added; it in turn calls aadd_messages.

In this patch, we use `afunc` attribute of a Runnable to check if the
end listener should be run in async mode or not.

  - **Issue:** #22021, #22022 
  - **Dependencies:** N/A
2024-06-21 10:39:47 -04:00
mackong
b108b4d010
core[patch]: set schema format for AsyncRootListenersTracer (#23214)
- **Description:** AsyncRootListenersTracer supports on_chat_model_start;
its schema_format should be "original+chat".
  - **Issue:** N/A
  - **Dependencies:**
2024-06-21 09:30:27 -04:00
Bagatur
976b456619
docs: BaseChatModel key methods table (#23238)
If we're moving toward documenting inherited params, I think these kinds
of tables become more important

![Screenshot 2024-06-20 at 3 59 12
PM](https://github.com/langchain-ai/langchain/assets/22008038/722266eb-2353-4e85-8fae-76b19bd333e0)
2024-06-20 21:00:22 -07:00
Bagatur
12e0c28a6e
docs: fix chat model methods table (#23233)
rst table not md
![Screenshot 2024-06-20 at 12 37 46
PM](https://github.com/langchain-ai/langchain/assets/22008038/7a03b869-c1f4-45d0-8d27-3e16f4c6eb19)
2024-06-20 19:51:10 +00:00
Eugene Yurtsev
7545b1d29b
core[patch]: Fix doc-strings for code blocks (#23232)
Code blocks need extra space around them to be rendered properly by
sphinx
2024-06-20 19:34:52 +00:00
Eugene Yurtsev
59d7adff8f
core[patch]: Add clarification about streaming to RunnableLambda (#23227)
Add streaming clarification to runnable lambda docstring.
2024-06-20 16:47:16 +00:00
ChrisDEV
cb6cf4b631
Fix return value type of dumpd (#20123)
The return type of `json.loads` is `Any`.

Since the return type of `dumpd` is based on `json.loads`, the
correction here follows naturally.

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-06-20 16:31:41 +00:00
Guangdong Liu
0bce28cd30
core(patch): Fix encoding problem of load_prompt method (#21559)
- description: Add encoding parameters.
- @baskaryan, @efriis, @eyurtsev, @hwchase17.


![54d25ac7b1d5c2e47741a56fe8ed8ba](https://github.com/langchain-ai/langchain/assets/48236177/ffea9596-2001-4e19-b245-f8a6e231b9f9)
2024-06-20 09:25:54 -07:00
Philippe PRADOS
8711c61298
core[minor]: Adds an in-memory implementation of RecordManager (#13200)
**Description:**
langchain offers three technologies to save data:
-
[vectorstore](https://python.langchain.com/docs/modules/data_connection/vectorstores/)
- [docstore](https://js.langchain.com/docs/api/schema/classes/Docstore)
- [record
manager](https://python.langchain.com/docs/modules/data_connection/indexing)

If you want to combine these technologies in a single persistence
strategy, you need a common implementation for each. `DocStore` proposes
`InMemoryDocstore`.

We propose the class `MemoryRecordManager` to complete the system.

This is the prelude to another pull request, which needs a consistent
combination of persistence components.

**Tag maintainer:**
@baskaryan

**Twitter handle:**
@pprados

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-06-20 12:19:10 -04:00
David DeCaprio
a4bcb45f65
core:Add optional max_messages to MessagePlaceholder (#16098)
- **Description:** Add optional max_messages to MessagePlaceholder
- **Issue:**
[16096](https://github.com/langchain-ai/langchain/issues/16096)
- **Dependencies:** None
- **Twitter handle:** @davedecaprio

Sometimes it's better to limit the history in the prompt itself rather
than the memory. This is needed if you want different prompts in the
chain to have different history lengths.

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-06-19 23:39:51 +00:00
Eugene Yurtsev
1fcf875fe3
core[patch]: Document agent schema (#23194)
* Document agent schema
* Refer folks to langgraph for more information on how to create agents.
2024-06-19 20:16:57 +00:00
Eugene Yurtsev
c2d43544cc
core[patch]: Document messages namespace (#23154)
- Moved doc-strings below attributes in TypedDicts; this seems to render
better on the API Reference pages.
* Provided more description and some simple code examples
2024-06-19 15:00:00 -04:00
Eugene Yurtsev
3c917204dc
core[patch]: Add doc-strings to outputs, fix @root_validator (#23190)
- Document outputs namespace
- Update a vanilla @root_validator that was missed
2024-06-19 14:59:06 -04:00
Sergey Kozlov
94452a94b1
core[patch]: add exceptions propagation test for astream_events v2 (#23159)
**Description:** `astream_events(version="v2")` didn't propagate
exceptions in `langchain-core<=0.2.6`, fixed in the #22916. This PR adds
a unit test to check that exceptions are propagated upwards.

Co-authored-by: Sergey Kozlov <sergey.kozlov@ludditelabs.io>
2024-06-19 13:00:25 -04:00
Bagatur
677408bfc9
core[patch]: fix chat history circular import (#23182) 2024-06-19 09:08:36 -07:00
Eugene Yurtsev
883e90d06e
core[patch]: Add an example to the Document schema doc-string (#23131)
Add an example to the document schema
2024-06-19 11:35:30 -04:00
ccurme
2b08e9e265
core[patch]: update test to catch circular imports (#23172)
This raises ImportError due to a circular import:
```python
from langchain_core import chat_history
```

This does not:
```python
from langchain_core import runnables
from langchain_core import chat_history
```

Here we update `test_imports` to run each import in a separate
subprocess. Open to other ways of doing this!
2024-06-19 15:24:38 +00:00
Eugene Yurtsev
ae4c0ed25a
core[patch]: Add documentation to load namespace (#23143)
Document some of the modules within the load namespace
2024-06-19 15:21:41 +00:00
Eugene Yurtsev
a34e650f8b
core[patch]: Add doc-string to document compressor (#23085) 2024-06-19 11:03:49 -04:00
Eugene Yurtsev
4fe8403bfb
core[patch]: Expand documentation in the indexing namespace (#23134) 2024-06-19 10:11:44 -04:00
Eugene Yurtsev
fe4f10047b
core[patch]: Document embeddings namespace (#23132)
Document embeddings namespace
2024-06-19 10:11:16 -04:00
Eugene Yurtsev
a3bae56a48
core[patch]: Update documentation in LLM namespace (#23138)
Update documentation in the llm namespace.
2024-06-19 10:10:50 -04:00
Bagatur
e8a8286012
core[patch]: runnablewithchathistory from core.runnables (#23136) 2024-06-19 00:15:18 +00:00
Vadym Barda
b483bf5095
core[minor]: handle boolean data in draw_mermaid (#23135)
This change should address graph rendering issues for edges with boolean
data

Example from langgraph:

```python
from typing import Annotated, TypedDict

from langchain_core.messages import AnyMessage
from langgraph.graph import END, START, StateGraph
from langgraph.graph.message import add_messages


class State(TypedDict):
    messages: Annotated[list[AnyMessage], add_messages]


def branch(state: State) -> bool:
    return 1 + 1 == 3


graph_builder = StateGraph(State)
graph_builder.add_node("foo", lambda state: {"messages": [("ai", "foo")]})
graph_builder.add_node("bar", lambda state: {"messages": [("ai", "bar")]})

graph_builder.add_conditional_edges(
    START,
    branch,
    path_map={True: "foo", False: "bar"},
    then=END,
)

app = graph_builder.compile()
print(app.get_graph().draw_mermaid())
```

Previous behavior:

```python
AttributeError: 'bool' object has no attribute 'split'
```

Current behavior:

```python
%%{init: {'flowchart': {'curve': 'linear'}}}%%
graph TD;
	__start__[__start__]:::startclass;
	__end__[__end__]:::endclass;
	foo([foo]):::otherclass;
	bar([bar]):::otherclass;
	__start__ -. ('a',) .-> foo;
	foo --> __end__;
	__start__ -. ('b',) .-> bar;
	bar --> __end__;
	classDef startclass fill:#ffdfba;
	classDef endclass fill:#baffc9;
	classDef otherclass fill:#fad7de;
```
2024-06-18 20:15:42 +00:00
Bagatur
093ae04d58
core[patch]: Pin pydantic in py3.12.4 (#23130) 2024-06-18 12:00:02 -07:00
Artem Mukhin
e271f75bee
docs: Fix URL formatting in deprecation warnings (#23075)
**Description**

Updated the URLs in deprecation warning messages. The URLs were
previously written as raw strings and are now formatted to be clickable
HTML links.

Example of a broken link in the current API Reference:
https://api.python.langchain.com/en/latest/chains/langchain.chains.openai_functions.extraction.create_extraction_chain_pydantic.html

<img width="942" alt="Screenshot 2024-06-18 at 13 21 07"
src="https://github.com/langchain-ai/langchain/assets/4854600/a1b1863c-cd03-4af2-a9bc-70375407fb00">
2024-06-18 14:49:58 -04:00
Eugene Yurtsev
aa6415aa7d
core[minor]: Support multiple keys in get_from_dict_or_env (#23086)
Support passing multiple keys to get_from_dict_or_env
2024-06-18 14:13:28 -04:00
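A sketch of the multi-key lookup (assuming the first key found wins; check the function for the exact precedence):
```python
from langchain_core.utils import get_from_dict_or_env

values = {"api_key": "secret-from-dict"}

# A list of keys can now be passed; the OPENAI_API_KEY environment variable
# is used as a fallback if none of the dict keys are present.
token = get_from_dict_or_env(values, ["openai_api_key", "api_key"], "OPENAI_API_KEY")
print(token)
```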
Bagatur
01783d67fc
core[patch]: Release 0.2.9 (#23091) 2024-06-18 17:15:04 +00:00
Eugene Yurtsev
5564d9e404
core[patch]: Document BaseStore (#23082)
Add doc-string to BaseStore
2024-06-18 11:47:47 -04:00
Takuya Igei
9f791b6ad5
core[patch],community[patch],langchain[patch]: tenacity dependency to version >=8.1.0,<8.4.0 (#22973)
Fix https://github.com/langchain-ai/langchain/issues/22972.

2024-06-18 10:34:28 -04:00
Nuno Campos
f01f12ce1e
Include "no escape" and "inverted section" mustache vars in Prompt.input_variables and Prompt.input_schema (#22981) 2024-06-17 19:24:13 -07:00
Bagatur
c2b2e3266c
core[minor]: message transformer utils (#22752) 2024-06-17 15:30:07 -07:00
Bagatur
8235bae48e
core[patch]: Release 0.2.8 (#23012) 2024-06-17 20:55:39 +00:00
Nuno Campos
bd4b68cd54
core: run_in_executor: Wrap StopIteration in RuntimeError (#22997)
- StopIteration can't be set on an asyncio.Future: it raises a TypeError
and leaves the Future pending forever, so we need to convert it to a
RuntimeError
2024-06-17 20:40:01 +00:00
Erick Friis
9ef15691d6
core: release 0.2.7 (#22917) 2024-06-14 20:03:58 +00:00
Nuno Campos
338180f383
core: in astream_events v2 always await task even if already finished (#22916)
- this ensures exceptions propagate to the caller
2024-06-14 19:54:20 +00:00
kiarina
8171efd07a
core[patch]: Fix FunctionCallbackHandler._on_tool_end (#22908)
If the global `debug` flag is enabled, the agent will get the following
error in `FunctionCallbackHandler._on_tool_end` at runtime.

```
Error in ConsoleCallbackHandler.on_tool_end callback: AttributeError("'list' object has no attribute 'strip'")
```

Calling str() before strip() avoids the error.
This error can be seen at
[debugging.ipynb](https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/debugging.ipynb).

- Issue: NA
- Dependencies: NA
- Twitter handle: https://x.com/kiarina37
2024-06-14 17:59:29 +00:00
Eugene Yurtsev
4a77a3ab19
core[patch]: fix validation of @deprecated decorator (#22513)
This PR moves the validation of the decorator to a better place to avoid
creating bugs while deprecating code.

Prevent issues like this from arising:
https://github.com/langchain-ai/langchain/issues/22510

we should replace with a linter at some point that just does static
analysis
2024-06-14 16:52:30 +00:00
Erick Friis
7234fd0f51
core: release 0.2.6 (#22868) 2024-06-13 22:22:34 +00:00
Jacob Lee
bcbb43480c
core[patch]: Treat type as a special field when merging lists (#22750)
Should we even log a warning? At least for Anthropic, it's expected to
get e.g. `text_block` followed by `text_delta`.

@ccurme @baskaryan @efriis
2024-06-13 15:08:24 -07:00
Nuno Campos
bae82e966a
core: In astream_events v2 propagate cancel/break to the inner astream call (#22865)
- previous behavior was for the inner astream to continue running with
no interruption
- also propagate break in core runnable methods
2024-06-13 15:02:48 -07:00
James Braza
45b394268c
core[patch]: allowing latest packaging versions (#22792)
Allowing version 24 of https://github.com/pypa/packaging

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-06-12 23:22:20 +00:00
Christophe Bornet
d04e899b56
ci: add testing with Python 3.12 (#22813)
We need to use a different version of numpy for py3.8 and py3.12 in
pyproject.
And so do projects that use that Python version range and import
langchain.

    - **Twitter handle:** _cbornet
2024-06-12 16:31:36 -04:00
Eugene Yurtsev
5dbbdcbf8e
core[patch]: Update remaining root_validators (#22829)
This PR updates the remaining root_validators in core to either be explicit pre-init or post-init validators.
2024-06-12 14:47:40 -04:00
Eugene Yurtsev
74e705250f
core[patch]: update some root_validators (#22787)
Update some of the @root_validators to be explicit pre=True or
pre=False, skip_on_failure=True for pydantic 2 compatibility.
2024-06-12 13:04:57 -04:00
Erick Friis
2aaf86ddae
core: fix mustache falsy cases (#22747) 2024-06-10 14:00:12 -07:00
Eugene Yurtsev
5a7eac191a
core[patch]: Add missing type annotations (#22756)
Add missing type annotations.

The missing type annotations will raise exceptions with pydantic 2.
2024-06-10 16:59:41 -04:00
Erick Friis
9e03864d64
core: add error message for non-structured llm to StructuredPrompt (#22684)
Previously this raised a bare `NotImplementedError` from
`BaseLanguageModel.with_structured_output`.
2024-06-07 19:42:09 +00:00
William FH
be79ce9336
[Core] Unified Enable/Disable Tracing (#22576) 2024-06-06 16:54:35 -07:00
Erick Friis
a24a9c6427
multiple: get rid of pyproject extras (#22581)
They cause `poetry lock` to take a ton of time, and `uv pip install` can
resolve the constraints from these toml files in trivial time
(addressing the problem in #19153).

This allows us to properly upgrade lockfile dependencies moving forward,
which revealed some issues that were either fixed or type-ignored (see
file comments)
2024-06-06 15:45:22 -07:00
Bagatur
4367e89c9a
core[patch]: Release 0.2.5 (#22642) 2024-06-06 15:44:26 -07:00
Eugene Yurtsev
28f744c1f5
core[patch]: Correctly order parent ids in astream events (from root to immediate parent), add defensive check for cycles (#22637)
This PR makes two changes:

1. Fixes the order of parent IDs to be from root to immediate parent
2. Adds a simple defensive check for cycles
2024-06-06 20:37:52 +00:00
Eugene Yurtsev
035a9c9609
core[minor]: Add parent_ids to astream_events API (#22563)
Include a list of parent ids for each event in astream events.
2024-06-06 16:14:28 -04:00
Nicolas Nkiere
51005e2776
core[minor]: Add an async root listener and with_alisteners method (#22151)
- [x] **Adding AsyncRootListener**: "langchain_core: Adding
AsyncRootListener"

- **Description:** Adding an AsyncBaseTracer, AsyncRootListener and
`with_alistener` function. This is to enable binding async root listener
to runnables. This currently only supported for sync listeners.
- **Issue:** None
- **Dependencies:** None

- [x] **Add tests and docs**: Added units tests and example snippet code
within the function description of `with_alistener`


- [x] **Lint and test**: Run make format_diff, make lint_diff and make
test
2024-06-06 16:03:44 -04:00
Christophe Bornet
12ddb4fc6f
core[patch]: Use explicit classes for InMemoryByteStore and InMemoryStore (#22608)
The current implementation doesn't work well with type checking.
Instead replace with class definition that correctly works with type
checking.
2024-06-06 07:34:43 -07:00
andyjessen
cfed68e06f
docs: Fix description (#22611)
This commit fixes the description of the hair_color field.
2024-06-06 07:25:27 -07:00
andyjessen
8b40428f58
docs: Fix typo (#22603)
This commit changes minor typo in the field description.
2024-06-06 07:38:36 -04:00
Christophe Bornet
c34ad8c163
core[patch]: Improve VectorStore API doc (#22547) 2024-06-05 10:23:44 -04:00
Christophe Bornet
8ba868d3b0
core[patch]: Add similarity_score_threshold to VectorStore search types (#22477) 2024-06-04 13:43:55 -07:00
Eugene Yurtsev
9120cf5df2
core[patch]: Deduplicate of callback handlers in merge_configs (#22478)
This PR adds deduplication of callback handlers in merge_configs.

Fix for this issue:
https://github.com/langchain-ai/langchain/issues/22227

The issue appears when the code is:

1) running python >=3.11
2) invokes a runnable from within a runnable
3) binds the callbacks to the child runnable from the parent runnable
using with_config

In this case, the same callbacks end up appearing twice: (1) the first
time from with_config, (2) the second time with langchain automatically
propagating them on behalf of the user.


Prior to this PR this will emit duplicate events:

```python
@tool
async def get_items(question: str, callbacks: Callbacks):  # <--- Accept callbacks
    """Ask question"""
    template = ChatPromptTemplate.from_messages(
        [
            (
                "human",
                "'{question}"
            )
        ]
    )
    chain = template | chat_model.with_config(
        {
            "callbacks": callbacks,  # <-- Propagate callbacks
        }
    )
    return await chain.ainvoke({"question": question})
```

Prior to this PR this would work correctly (no duplicate events):

```python
@tool
async def get_items(question: str, callbacks: Callbacks):  # <--- Accept callbacks
    """Ask question"""
    template = ChatPromptTemplate.from_messages(
        [
            (
                "human",
                "'{question}"
            )
        ]
    )
    chain = template | chat_model
    return await chain.ainvoke({"question": question}, {"callbacks": callbacks})
```

This will also work (as long as the user is using python >= 3.11) -- as
langchain will automatically propagate callbacks

```python
@tool
async def get_items(question: str,):  
    """Ask question"""
    template = ChatPromptTemplate.from_messages(
        [
            (
                "human",
                "'{question}"
            )
        ]
    )
    chain = template | chat_model
    return await chain.ainvoke({"question": question})
```
2024-06-04 16:19:00 -04:00
Bagatur
161b02a8be
core[patch]: Release 0.2.4 (#22489) 2024-06-04 11:14:54 -07:00
ccurme
6db25b4e31
core[patch]: bump langsmith (#22476)
Noticing errors logged in some situations when tracing with Langsmith:
```python
from langchain_core.pydantic_v1 import BaseModel
from langchain_anthropic import ChatAnthropic


class AnswerWithJustification(BaseModel):
    """An answer to the user question along with justification for the answer."""
    answer: str
    justification: str


llm = ChatAnthropic(model="claude-3-haiku-20240307")
structured_llm = llm.with_structured_output(AnswerWithJustification)

list(structured_llm.stream("What weighs more a pound of bricks or a pound of feathers"))
```
```
Error in LangChainTracer.on_chain_end callback: AttributeError("'NoneType' object has no attribute 'append'")
[AnswerWithJustification(answer='A pound of bricks and a pound of feathers weigh the same amount.', justification='This is because a pound is a unit of mass, not volume. By definition, a pound of any material, whether bricks or feathers, will weigh the same - one pound. The physical size or volume of the materials does not matter when measuring by mass. So a pound of bricks and a pound of feathers both weigh exactly one pound.')]
```
2024-06-04 10:05:53 -07:00
Nuno Campos
58b118544e
Use immutable sequence type for batch/batch_as_completed types (#22433)
2024-06-04 08:04:09 -07:00
Christophe Bornet
23bba18f92
core[patch]: Fix VectorStore's as_retriever mutating tags param (#22470)
The current VectorStore `as_retriever` implementation mutates the `tags`
param when it's passed in kwargs.
This fix ensures that a copy is done.
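A hedged sketch of the copy-before-extend pattern (illustrative helper, not the library method):

```python
def as_retriever_sketch(vectorstore, **kwargs):
    # Copy the caller's list so the original `tags` kwarg is left untouched,
    # then append the store-specific tag (here just the class name).
    tags = list(kwargs.pop("tags", None) or [])
    tags.append(type(vectorstore).__name__)
    return {"tags": tags, **kwargs}
```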
2024-06-04 09:50:36 -04:00
Guangdong Liu
bc7e32f315
core(patch):fix partial_variables not working with SystemMessagePromptTemplate (#20711)
- **Issue:**  close #17560
- @baskaryan, @eyurtsev
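A hedged usage sketch of the behavior this fixes, assuming `from_template` accepts `partial_variables` and that `langchain-core` is installed:

```python
from langchain_core.prompts import SystemMessagePromptTemplate

# The partial variable should now be honored when formatting the system message.
prompt = SystemMessagePromptTemplate.from_template(
    "You are {role}. Today is {date}.",
    partial_variables={"date": "2024-06-03"},
)
print(prompt.format(role="a helpful assistant"))
```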
2024-06-03 16:22:42 -07:00
Jacob Lee
c01467b1f4
core[patch]: RFC: Allow concatenation of messages with multi part content (#22002)
Anthropic's streaming treats tool calls as different content parts
(streamed back with a different index) from normal content in the
`content`.

This means that we need to update our chunk-merging logic to handle
chunks with multi-part content. The alternative is coerceing Anthropic's
responses into a string, but we generally like to preserve model
provider responses faithfully when we can. This will also likely be
useful for multimodal outputs in the future.

This current PR does unfortunately make `index` a magic field within
content parts, but Anthropic and OpenAI both use it at the moment to
determine order anyway. To avoid cases where we have content arrays with
holes and to simplify the logic, I've also restricted merging to chunks
in order.

TODO: tests

CC @baskaryan @ccurme @efriis
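A hedged illustration of the multi-part merge this enables (assumes `langchain-core` is installed; the exact merged representation may differ):

```python
from langchain_core.messages import AIMessageChunk

# Chunks whose content is a list of parts are matched up by their "index" field.
left = AIMessageChunk(content=[{"type": "text", "text": "Hello", "index": 0}])
right = AIMessageChunk(content=[{"text": " world", "index": 0}])

merged = left + right
print(merged.content)  # expected: a single part whose text is "Hello world"
```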
2024-06-03 09:46:40 -07:00
Nuno Campos
ceb73ad06f
core: In BaseRetriever make get_relevant_docs delegate to invoke (#22434)
- This fixes all the tracing issues with people still using
get_relevant_docs, and a change we need for 0.3 anyway

2024-06-03 07:34:53 -07:00
Nuno Campos
ed8e9c437a
core: In RunnableSequence pass kwargs to the first step (#22393)
- This is a pattern that shows up occasionally in langgraph questions,
people chain a graph to something else after, and want to pass the graph
some kwargs (eg. stream_mode)
2024-06-03 14:18:10 +00:00
Bagatur
2b9f1469d8
core[patch]: Release 0.2.3 (#22329) 2024-05-30 11:35:09 -07:00
Harrison Chase
ee32369265
core[patch]: fix runnable history and add docs (#22283) 2024-05-30 11:26:41 -07:00
William FH
dcec133b85
[Core] Update Tracing Interops (#22318)
LangSmith and LangChain context var handling evolved in parallel since
originally we didn't expect people to want to interweave the decorator
and langchain code.

Once we get a new langsmith release, this PR will let you seamlessly
hand off between @traceable context and runnable config context so you
can arbitrarily nest code.

It's expected that this fails right now until we get another release of
the SDK
2024-05-30 10:34:49 -07:00
ccurme
e71b0b5827
core[patch]: Release 0.2.2 (#22289) 2024-05-29 19:51:37 +00:00
Bagatur
d61bdeba25
core[patch]: allow access RunnableWithFallbacks.runnable attrs (#22139)
RFC, candidate fix for #13095 #22134
2024-05-28 13:18:09 -07:00
hmasdev
bbd7015b5d
core[patch]: Add TypeError handler into get_graph of Runnable (#19856)
# Description

## Problem

`Runnable.get_graph` fails when `InputType` or `OutputType` property
raises `TypeError`.

-
003c98e5b4/libs/core/langchain_core/runnables/base.py (L250-L274)
-
003c98e5b4/libs/core/langchain_core/runnables/base.py (L394-L396)

This problem prevents getting a graph of `Runnable` objects whose
`InputType` or `OutputType` property raises `TypeError` but whose
`invoke` works well, such as `langchain.output_parsers.RegexParser`,
which I have already pointed out in #19792 that a `TypeError` would
occur.

## Solution

- Add a `try`/`except` block to handle `TypeError` in the code that gets
`input_node` and `output_node`.
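A minimal sketch of the guard (illustrative helper, not the actual `get_graph` code):

```python
def _safe_type(runnable, attr: str):
    # InputType/OutputType properties can raise TypeError for some runnables;
    # swallow that so graph construction can continue without a node schema.
    try:
        return getattr(runnable, attr)
    except TypeError:
        return None
```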

# Issue
- #19801 

# Twitter Handle
- [hmdev3](https://twitter.com/hmdev3)

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-27 21:34:34 +00:00
Eugene Yurtsev
2d693c484e
docs: fix some spelling mistakes caught by newest version of code spell (#22090)
Going to merge this even though it doesn't pass all tests, and open a
separate PR for the remaining spelling mistakes.
2024-05-23 16:59:11 -04:00
ccurme
cd07521170
core: bump to 0.2.1rc (#22080) 2024-05-23 18:36:50 +00:00
ccurme
fbfed65fb1
core, partners: add token usage attribute to AIMessage (#21944)
```python
class UsageMetadata(TypedDict):
    """Usage metadata for a message, such as token counts.

    Attributes:
        input_tokens: (int) count of input (or prompt) tokens
        output_tokens: (int) count of output (or completion) tokens
        total_tokens: (int) total token count
    """

    input_tokens: int
    output_tokens: int
    total_tokens: int
```
```python
class AIMessage(BaseMessage):
    ...
    usage_metadata: Optional[UsageMetadata] = None
    """If provided, token usage information associated with the message."""
    ...
```
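A hedged usage sketch of the new field (constructing the message directly; real values come from the model provider):

```python
from langchain_core.messages import AIMessage

msg = AIMessage(
    content="Hello!",
    usage_metadata={"input_tokens": 3, "output_tokens": 2, "total_tokens": 5},
)
print(msg.usage_metadata["total_tokens"])  # 5
```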
2024-05-23 14:21:58 -04:00
Bagatur
50186da0a1
infra: rm unused # noqa violations (#22049)
Updating #21137
2024-05-22 15:21:08 -07:00
Bagatur
3b0437c05b
core[patch]: Release 0.2.1 (#22003) 2024-05-22 00:05:04 +00:00
Eugene Yurtsev
ded53297e0
core[patch]: Add unit test for RunnableGenerator for eventstream v2 (#21990)
No unit tests with runnable generator
2024-05-21 14:29:15 -04:00
Nuno Campos
fb6108c8f5
core[patch]: In astream_events(version=v2) tap output of root run (#21977)
- if tap_output_iter/aiter is called multiple times for the same run, issue
events only once
- if a chat model run is tapped, don't issue duplicate on_llm_new_token
events
- if the first chunk arrives after the run has ended, do not emit it as a
stream event

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-05-21 14:03:57 -04:00
ccurme
a983465694
docs: set default anthropic model (#21988)
`ChatAnthropic()` raises ValidationError.
2024-05-21 11:01:18 -07:00
Michael Reed
7a5e1bcf99
core[patch]: Fix NPE in function_calling._get_python_function_required_args (#21863)
Example error message:
line 206, in _get_python_function_required_args
    if is_function_type and required[0] == "self":
                            ~~~~~~~~^^^
IndexError: list index out of range
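A minimal sketch of the empty-list guard (illustrative, not the library code):

```python
def required_args_sketch(required: list, is_function_type: bool) -> list:
    # Check the list is non-empty before indexing, avoiding the IndexError above.
    if is_function_type and required and required[0] == "self":
        return required[1:]
    return required
```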


---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-20 22:06:27 +00:00
Coozywana
b6c8b6f944
Fix base.py typo (#21862)
ChatOpenaAI --> ChatOpenAI

2024-05-18 13:05:02 +00:00
Bagatur
8b3c5f93f5
docs: lcel how to and cheatsheet (#21851) 2024-05-17 19:04:45 -07:00
Nuno Campos
b1e7b40b6a
core: Tap output of sync iterators for astream_events (#21842)
2024-05-17 16:57:41 -07:00
Eugene Yurtsev
67b6f6c82a
core[patch]: Check if event loop is closed in memory stream (#21841)
Check if the event loop is closed in the memory stream.

Using try/except here to avoid a race condition, but this may incur a
small overhead in versions prior to 3.11
2024-05-17 21:53:59 +00:00
Erick Friis
23310626b3
core: release 0.2.0 (#21829) 2024-05-17 13:04:39 -07:00
ccurme
181dfef118
core, standard tests, partner packages: add test for model params (#21677)
1. Adds `.get_ls_params` to BaseChatModel which returns
```python
class LangSmithParams(TypedDict, total=False):
    ls_provider: str
    ls_model_name: str
    ls_model_type: Literal["chat"]
    ls_temperature: Optional[float]
    ls_max_tokens: Optional[int]
    ls_stop: Optional[List[str]]
```
by default it will only return
```python
{ls_model_type="chat", ls_stop=stop}
```

2. Add these params to inheritable metadata in
`CallbackManager.configure`

3. Implement `.get_ls_params` and populate all params for Anthropic +
all subclasses of BaseChatOpenAI

Sample trace:
https://smith.langchain.com/public/d2962673-4c83-47c7-b51e-61d07aaffb1b/r

**OpenAI**:
<img width="984" alt="Screenshot 2024-05-17 at 10 03 35 AM"
src="https://github.com/langchain-ai/langchain/assets/26529506/2ef41f74-a9df-4e0e-905d-da74fa82a910">

**Anthropic**:
<img width="978" alt="Screenshot 2024-05-17 at 10 06 07 AM"
src="https://github.com/langchain-ai/langchain/assets/26529506/39701c9f-7da5-4f1a-ab14-84e9169d63e7">

**Mistral** (and all others for which params are not yet populated):
<img width="977" alt="Screenshot 2024-05-17 at 10 08 43 AM"
src="https://github.com/langchain-ai/langchain/assets/26529506/37d7d894-fec2-4300-986f-49a5f0191b03">
2024-05-17 13:51:26 -04:00
Eugene Yurtsev
6ed0aa3239
core[major]: only use function description (#21622)
Do not prefix function signature

---

* Reason for this is that information is already present with tool
calling models.
* This will save on tokens for those models, and makes it more obvious
what the description is!
* The @tool can get more parameters to allow a user to re-introduce the
signature if we want
2024-05-16 11:17:53 -04:00
William FH
8498b41cda
Finish agent migration doc (#21731) 2024-05-16 14:43:19 +00:00
William FH
ca768c8353
[Core] Check is async callable (#21714)
To permit proper coercion of objects like the following:


```python
class MyAsyncCallable:
    async def __call__(self, foo):
        return await ...

class MyAsyncGenerator:
    async def __call__(self, foo):
        await ...
        yield 
```
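One way such objects could be detected, as a hedged sketch (not necessarily the exact check used in core):

```python
import asyncio
import inspect


def is_async_callable(obj) -> bool:
    # Treat an object as async-callable if it, or its __call__, is a coroutine
    # function or an async generator function.
    call = getattr(obj, "__call__", None)
    return (
        asyncio.iscoroutinefunction(obj)
        or asyncio.iscoroutinefunction(call)
        or inspect.isasyncgenfunction(call)
    )
```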
2024-05-15 10:49:49 -07:00
Eugene Yurtsev
5c2cfabec6
core[minor]: Add v2 implementation of astream events (#21638)
This PR introduces a v2 implementation of astream events that removes
intermediate abstractions and fixes some issues with v1 implementation.

The v2 implementation significantly reduces relevant code that's
associated with the astream events implementation together with
overhead.

After this PR, the astream events implementation:

- Uses an async callback handler
- No longer relies on BaseTracer
- No longer relies on json patch

As a result of this re-write, a number of issues were discovered with
the existing implementation.

## Changes in V2 vs. V1

### on_chat_model_end `output`

The outputs associated with `on_chat_model_end` changed depending on
whether it was within a chain or not.

As a root level runnable the output was: 

```python
"data": {"output": AIMessageChunk(content="hello world!", id='some id')}
```

As part of a chain the output was:

```
            "data": {
                "output": {
                    "generations": [
                        [
                            {
                                "generation_info": None,
                                "message": AIMessageChunk(
                                    content="hello world!", id=AnyStr()
                                ),
                                "text": "hello world!",
                                "type": "ChatGenerationChunk",
                            }
                        ]
                    ],
                    "llm_output": None,
                }
            },
```

After this PR, we will always use the simpler representation:

```python
"data": {"output": AIMessageChunk(content="hello world!", id='some id')}
```

**NOTE** Non chat models (i.e., regular LLMs) are still associated with
the more verbose format.

### Remove some `_stream` events

`on_retriever_stream` and `on_tool_stream` events were removed -- these
were not real events, but created as an artifact of implementing on top
of astream_log.

The same information is already available in the `x_on_end` events.

### Propagating Names

Names of runnables have been updated to be more consistent

```python
  model = GenericFakeChatModel(messages=infinite_cycle).configurable_fields(
        messages=ConfigurableField(
            id="messages",
            name="Messages",
            description="Messages return by the LLM",
        )
    )
```

Before:
```python
"name": "RunnableConfigurableFields",
```

After:
```python
"name": "GenericFakeChatModel",
```

### on_retriever_end

on_retriever_end will always return `output` which is a list of
documents (rather than a dict containing a key called "documents")

### Retry events

Removed the `on_retry` callback handler. It was incorrectly showing that
the failed function being retried has invoked `on_chain_end`


https://github.com/langchain-ai/langchain/pull/21638/files#diff-e512e3f84daf23029ebcceb11460f1c82056314653673e450a5831147d8cb84dL1394
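A hedged usage sketch of consuming the v2 event stream (assuming `langchain-core` is installed):

```python
import asyncio

from langchain_core.runnables import RunnableLambda


async def main() -> None:
    chain = RunnableLambda(lambda x: x + 1)
    async for event in chain.astream_events(1, version="v2"):
        print(event["event"], event.get("name"))


asyncio.run(main())
```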
2024-05-15 11:48:47 -04:00
Bagatur
241a6e43a5
docs: update structured how to (#21679) 2024-05-14 22:19:51 +00:00
Eugene Yurtsev
e69a9bedf8
core[patch]: Update mypy config (#21684)
Update mypy config to ignore checking deps from numpy and pytest (which are optional in langsmith sdk)
2024-05-14 17:29:07 -04:00
Eugene Yurtsev
5c64c004cc
core[patch]: Add unit tests with some streaming scenarios (#21668)
Add unit tests that show differences between sync / async versions when
streaming.

The inner on_chain_chunk event is missing if mixing sync and async
functionality. Likely due to missing tap_output_iter implementation on
the sync variant of `_transform_stream_with_config`
2024-05-14 15:30:57 +00:00
Eugene Yurtsev
2ac4d2960c
core[patch]: Add unit test to catch ordering (#21669)
Add unit test to catch ordering issues
2024-05-14 15:25:33 +00:00
Zhao Blake
972d2071c6
core[patch]: Fix typo in VectorStoreExampleSelector doc-string (#21574) 2024-05-14 13:31:37 +00:00
Erick Friis
c77d2f2b06
multiple: core 0.2 nonbreaking dep, check_diff community->langchain dep (#21646)
0.2 is not a breaking release for core (but it is for langchain and
community)

To keep the core+langchain+community packages in sync at 0.2, we will
relax deps throughout the ecosystem to tolerate `langchain-core` 0.2
2024-05-13 19:50:36 -07:00
adreo00
40aff1eacc
core[major]: AsyncCallbackManagerForToolRun no longer casts return object to string (#20374)
- **Description:** Stops `AsyncCallbackManagerForToolRun` from
converting the output to str
- **Issue:** #20372
- **Dependencies:** None
2024-05-13 15:09:12 -04:00
Guangdong Liu
a156aace2b
core[patch]:Fix Incorrect listeners parameters for Runnable.with_listeners() and .map() (#20661)
- **Issue:** fix #20509
-  @baskaryan, @eyurtsev


![image](https://github.com/langchain-ai/langchain/assets/48236177/f799a976-b983-4d8b-b373-64392e1fd6c6)
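A hedged usage sketch of attaching listeners (assuming listener callables that take the run object):

```python
from langchain_core.runnables import RunnableLambda

chain = RunnableLambda(lambda x: x * 2).with_listeners(
    on_start=lambda run: print("start", run.id),
    on_end=lambda run: print("end", run.id),
)
print(chain.invoke(3))  # 6
```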
2024-05-13 11:16:17 -04:00
Nuno Campos
ad0f3c14c2
core: allow mermaid node labels to have any characters (#21385)
- it's only node ids that are limited

2024-05-07 12:16:49 -07:00
Erick Friis
0fb93cd740
core: release 0.1.52 (#21350) 2024-05-06 22:20:35 +00:00
Nuno Campos
6f17158606
fix: core: Include in json output also fields set outside the constructor (#21342) 2024-05-06 14:37:36 -07:00
Erick Friis
811e9cee8b
core: release 0.1.51 (#21328) 2024-05-06 10:40:19 -07:00
Leonid Ganeline
3ef8b24277
core[patch]: utils.guard_import fix (#21133)
Issues (nit): 
1. `utils.guard_import` prints the wrong error message when there is an
import error. It prints the whole `module_name` but should print only the
first part as the pip package name, e.g. for `langchain_core.utils` it
prints `langchain_core.utils` instead of `langchain-core`. Underscores
should also be replaced with '-' in the pip package name.
2. It does not handle the `ModuleNotFoundError` raised by
`guard_import("wrong_module")`.

Fixed both issues and added unit tests. Controversial: `ModuleNotFoundError`
is re-raised as `ImportError`, since in either case the proposed action is
the same: install the missing package.
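A minimal sketch of deriving the pip package name from a dotted module path (illustrative helper, not the library code):

```python
def pip_name_sketch(module_name: str) -> str:
    # "langchain_core.utils" -> "langchain-core"
    return module_name.split(".")[0].replace("_", "-")


print(pip_name_sketch("langchain_core.utils"))  # langchain-core
```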
2024-05-03 17:21:36 -04:00
Nuno Campos
6e1e0c7d5c
fix: core: draw_mermaid() would create subgroup for edges with same src and tgt (#21275)
2024-05-03 13:51:08 -07:00
ccurme
6da3d92b42
(all): update removal in deprecation warnings from 0.2 to 0.3 (#21265)
We are pushing out the removal of these to 0.3.

`find . -type f -name "*.py" -exec sed -i ''
's/removal="0\.2/removal="0.3/g' {} +`
2024-05-03 14:29:36 -04:00
Erick Friis
c1eb95b967
core: release 0.1.50 (#21230) 2024-05-02 22:44:18 +00:00
Nuno Campos
47ce8d5a57
core: tracer: remove numeric execution order (#21220)
- This hasn't been used in a long time and requires some additional
bookkeeping I'm going to streamline in the next PR.
2024-05-02 15:38:55 -07:00
Bagatur
d297d90ad9
core[patch]: Release 0.1.49 (#21211) 2024-05-02 17:06:27 +00:00
Nuno Campos
663747b730
core[patch]: Fixes for convert_messages (#21207)
- support two-tuples of any sequence type (e.g. json.loads never produces
tuples)
- support a type alias for the role key
- if an id is passed in dict form, use it
- if tool_calls are passed in dict form, use them

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
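A hedged sketch of the inputs this change accepts (assuming `langchain-core` is installed):

```python
from langchain_core.messages import convert_to_messages

msgs = convert_to_messages(
    [
        ["human", "hello"],  # a two-item list works, not just a tuple
        {"role": "ai", "content": "hi", "id": "msg-1"},  # id carried through
    ]
)
print([type(m).__name__ for m in msgs])  # expected: ['HumanMessage', 'AIMessage']
```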
2024-05-02 16:55:42 +00:00
William FH
ab55f6996d
[Core] Tracing: update parent run_tree's child_runs (#21049) 2024-05-01 06:33:08 -07:00
Erick Friis
2407c353be
core: release 0.1.48 (#21113) 2024-04-30 19:52:36 +00:00
Eugene Yurtsev
3c064a757f
core[minor],langchain[patch],community[patch]: Move storage interfaces to core (#20750)
* Move storage interface to core
* Move in memory and file system implementation to core
2024-04-30 13:14:26 -04:00
Erick Friis
748f2ba9ea
core: release 0.1.47 (#21094) 2024-04-30 09:22:05 -07:00
William FH
5c63ac3dd7
[Patch] Dedent docstring (#20959)
Technically a slight prompt breaking change, but I think positive EV in
that it saves tokens and results in more sane / in-distribution prompts
2024-04-30 07:40:57 -07:00
Christophe Bornet
5c77f45b06
community[minor]: Add async methods to CassandraCache and CassandraSemanticCache (#20654) 2024-04-30 10:27:44 -04:00
William FH
db14d4326d
[Core] Feat Pretty Print Tool calls (#20997)
Right now, `tool_calls` are not included in the `pretty_print()` output.
Would be nice to show!


![image](https://github.com/langchain-ai/langchain/assets/13333726/6a0ffca3-d02f-4e18-bc76-513eeca2e964)
2024-04-30 07:14:43 -07:00
Leonid Ganeline
1a2ff56cd8
core[patch[: docstring update (#21036)
Added missed docstrings. Updated docstrings to consistent format.
2024-04-29 15:35:34 -04:00
hmn falahi
4822beb298
Ignore self/cls from required args of class functions in convert_to_openai_tool (#20691)
Removed redundant self/cls from required args of class functions in
_get_python_function_required_args:

```python
class MemberTool:
    def search_member(
            self,
            keyword: str,
            *args,
            **kwargs,
    ):
        """Search on members with any keyword like first_name, last_name, email

        Args:
            keyword: Any keyword of member
        """

        headers = dict(authorization=kwargs['token'])
        members = []
        try:
            members = request_(
                method='SEARCH',
                url=f'{service_url}/apiv1/members',
                headers=headers,
                json=dict(query=keyword),
            )

        except Exception as e:
            logger.info(e.__doc__)

        return members

convert_to_openai_tool(MemberTool.search_member)
```
expected result:
```
{'type': 'function', 'function': {'name': 'search_member', 'description': 'Search on members with any keyword like first_name, last_name, username, email', 'parameters': {'type': 'object', 'properties': {'keyword': {'type': 'string', 'description': 'Any keyword of member'}}, 'required': ['keyword']}}}
```

#20685

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-29 11:46:26 -04:00
William FH
9fa9f05e5d
Catch System Error in ast parse (#20961)
I can't seem to reproduce, but I got this:

```
SystemError: AST constructor recursion depth mismatch (before=102, after=37)
```

And the operation isn't critical for the actual forward pass, so it seems
preferable to expand our caught exceptions.
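A minimal sketch of broadening the caught exceptions (illustrative, not the actual call site):

```python
import ast


def try_parse(source: str):
    # A rare SystemError from the AST constructor shouldn't break this
    # non-critical path, so it's caught alongside SyntaxError.
    try:
        return ast.parse(source)
    except (SyntaxError, SystemError):
        return None
```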
2024-04-26 19:31:55 -07:00
YH
2aca7fcdcf
core[patch]: Enhance link extraction with query parameters (#20259)
**Description**: This update enhances the `extract_sub_links` function
within the `langchain_core/utils/html.py` module to include query
parameters in the extracted URLs.

**Issue**: N/A

**Dependencies**: No additional dependencies required for this change.

**Twitter handle**: N/A

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
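A hedged usage sketch (assuming `langchain-core` is installed):

```python
from langchain_core.utils.html import extract_sub_links

html = '<a href="/docs/page?ref=nav">Docs</a>'
# Query parameters such as "?ref=nav" should now be kept in the extracted URL.
print(extract_sub_links(html, "https://example.com/"))
```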
2024-04-27 02:22:36 +00:00
Leonid Kuligin
893a924b90
core[minor], community[patch], langchain[patch]: move BaseChatLoader to core (#19607)
Thank you for contributing to LangChain!

- [ ] **PR title**: "core: move BaseChatLoader and BaseToolkit from
community"


- [ ] **PR message**: move BaseChatLoader and BaseToolkit

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-26 21:45:51 +00:00
Erick Friis
d4befd0cfb
core: fix batch ordering test (#20952) 2024-04-26 21:17:26 +00:00
ccurme
7d8d0229fa
remove placeholder error message (#20340) 2024-04-26 13:48:48 +00:00
William FH
4c437ebb9c
Use lstv2 (#20747) 2024-04-25 16:51:42 -07:00
Anish Chakraborty
898362de81
core[patch]: improve comma separated list output parser to handle non-space separated list (#20434)
- **Description:** Changes
`langchain_core.output_parsers.CommaSeparatedListOutputParser` to handle
`,` as a delimiter alongside the previous implementation which used `, `
as delimiter.
- **Issue:** Started noticing that some results returned by LLMs were
not getting parsed correctly when the output contained `,` instead of `,
`.
  - **Dependencies:** No
  - **Twitter handle:** not active on twitter.
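A hedged usage sketch of the relaxed delimiter handling (assuming `langchain-core` is installed):

```python
from langchain_core.output_parsers import CommaSeparatedListOutputParser

parser = CommaSeparatedListOutputParser()
# Both ", " and "," should now be handled.
print(parser.parse("foo, bar,baz"))  # expected: ['foo', 'bar', 'baz']
```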


2024-04-25 20:10:56 +00:00
ccurme
b8db73233c
core, community: deprecate tool.__call__ (#20900)
Does not update docs.
2024-04-25 14:50:39 -04:00
Bagatur
5b83130855
core[minor], langchain[patch], community[patch]: mv StructuredQuery (#20849)
mv StructuredQuery to core
2024-04-25 09:40:26 -07:00
Bagatur
ffad3985a1
core[patch]: Release 0.1.46 (#20891) 2024-04-25 15:40:17 +00:00
William FH
a936f696a6
[Core] Feat: update config CVar in tool.invoke (#20808) 2024-04-24 17:17:21 -07:00
Harrison Chase
43c041cda5
support messages in messages out (#20862) 2024-04-24 14:58:58 -07:00
Erick Friis
8c95ac3145
docs, multiple: de-beta with_structured_output (#20850) 2024-04-24 19:34:57 +00:00
Nuno Campos
477eb1745c
Better support for subgraphs in graph viz (#20840) 2024-04-24 12:32:52 -07:00
Eugene Yurtsev
d8aa72f51d
core[minor],langchain[patch]: Move base indexing interface and logic to core (#20667)
This PR moves the interface and the logic to core.

The following changes to namespaces:


`indexes` -> `indexing`
`indexes._api` -> `indexing.api`


Testing code is intentionally duplicated for now since it's testing
different
implementations of the record manager (in-memory vs. SQL).

Common logic will need to be pulled out into the test client.


A follow up PR will move the SQL based implementation outside of
LangChain.
2024-04-24 13:18:42 -04:00
Eugene Yurtsev
30e48c9878
core[patch],community[patch]: Move file chat history back to community (#20834)
Marking as patch since we haven't had releases in between. This just reverting part of a PR from yesterday.
2024-04-24 12:47:25 -04:00
Erick Friis
30c7951505
core: use qualname in beta message (#20361) 2024-04-23 11:20:13 -07:00
Eugene Yurtsev
ad6b5f84e5
community[patch],core[minor]: Move in memory cache implementation to core (#20753)
This PR moves the InMemoryCache implementation from community to core.
2024-04-23 11:10:11 -04:00
Eugene Yurtsev
a2cc9b55ba
core[patch]: Remove autoupgrade to addable dict in Runnable/RunnableLambda/RunnablePassthrough transform (#20677)
Causes an issue for this code

```python
from langchain.chat_models.openai import ChatOpenAI
from langchain.output_parsers.openai_tools import JsonOutputToolsParser
from langchain.schema import SystemMessage

prompt = SystemMessage(content="You are a nice assistant.") + "{question}"

llm = ChatOpenAI(
    model_kwargs={
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "web_search",
                    "description": "Searches the web for the answer to the question.",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "query": {
                                "type": "string",
                                "description": "The question to search for.",
                            },
                        },
                    },
                },
            }
        ],
    },
    streaming=True,
)

parser = JsonOutputToolsParser(first_tool_only=True)

llm_chain = prompt | llm | parser | (lambda x: x)


for chunk in llm_chain.stream({"question": "tell me more about turtles"}):
    print(chunk)

# message = llm_chain.invoke({"question": "tell me more about turtles"})

# print(message)
```

Instead, by definition, we'll assume that RunnableLambdas consume the
entire stream, and that if the stream isn't addable, the last message of
the stream is the one in the usable format.

---

If users want to use addable dicts, they can wrap the dict in an
AddableDict class.

---

Likely, need to follow up with the same change for other places in the
code that do the upgrade
2024-04-23 10:35:06 -04:00
Eugene Yurtsev
645b1e142e
core[minor],langchain[patch],community[patch]: Move InMemory and File implementations of Chat History to core (#20752)
This PR moves the implementations for chat history to core. So it's
easier to determine which dependencies need to be broken / add
deprecation warnings
2024-04-23 10:22:11 -04:00
ccurme
7a922f3e48
core, openai: support custom token encoders (#20762) 2024-04-23 13:57:05 +00:00
Eugene Yurtsev
38adbfdf34
community[patch],core[minor]: Move BaseToolKit to core.tools (#20669) 2024-04-22 14:04:30 -04:00
ccurme
c010ec8b71
patch: deprecate (a)get_relevant_documents (#20477)
- `.get_relevant_documents(query)` -> `.invoke(query)`
- `.get_relevant_documents(query=query)` -> `.invoke(query)`
- `.get_relevant_documents(query, callbacks=callbacks)` ->
`.invoke(query, config={"callbacks": callbacks})`
- `.get_relevant_documents(query, **kwargs)` -> `.invoke(query,
**kwargs)`

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-04-22 11:14:53 -04:00
Erick Friis
940242c1ec
core: release 0.1.45 (#20664) 2024-04-19 09:55:02 -07:00
Nuno Campos
48307e46a3
core[patch]: Fix runnable map ser/de (#20631) 2024-04-18 18:52:33 -07:00
Erick Friis
3425988de7
core: deprecation default to qualname (#20578) 2024-04-18 15:35:17 -07:00
Erick Friis
66fb0b1f35
core: fix fireworks mapping (#20613) 2024-04-18 18:08:40 +00:00
aditya thomas
cea379e7c7
community, core[callbacks]: move FileCallbackHandler from community to core (#20495)
**Description:** Move `FileCallbackHandler` from community to core
**Issue:** #20493 
**Dependencies:** None

(imo) `FileCallbackHandler` is a built-in LangChain callback handler
like `StdOutCallbackHandler` and should properly be in in core.
2024-04-17 22:29:30 -04:00
Erick Friis
e395115807
docs: aws docs updates (#20571) 2024-04-17 23:32:00 +00:00
Nuno Campos
719da8746e
core: fix attributeerror in runnablelambda.deps (#20569)
- Would happen when the user's code tries to access an attribute that
doesn't exist; we prefer to let this crash in the user's code rather than
here.
- Also catch more cases where a runnable is invoked/streamed inside a
lambda; before, we weren't seeing these as deps.
2024-04-17 15:38:39 -07:00
Bagatur
7917e2c418
core[patch]: Release 0.1.44 (#20564) 2024-04-17 18:34:44 +00:00
Erick Friis
e7fe5f7d3f
anthropic[patch]: serialization in partner package (#18828) 2024-04-16 16:05:58 -07:00
Erick Friis
6adca37eb7
core: default chat/llm _identifying_params to lc_attributes (#20232) 2024-04-16 14:55:47 -07:00
Nuno Campos
806a54908c
Runnable graph viz improvements (#20529)
- Add conditional: bool property to json representation of the graphs
- Add option to generate mermaid graph stripped of styles (useful as a
text representation of graph)
2024-04-16 20:17:47 +00:00
Nuno Campos
f3aa26d6bf
Fix getattr in runnable binding for cases where config is passed in as arg too (#20528)
2024-04-16 13:10:29 -07:00
Leonid Ganeline
45d045b2c5
core[minor], langchain[patch]: tools dependencies refactoring (#18759)
The `langchain.tools`
[namespace](https://api.python.langchain.com/en/latest/langchain_api_reference.html#module-langchain.tools)
can be completely eliminated by moving one class and 3 functions into
`core`. It makes sense since the class and functions are very core.
2024-04-16 14:15:09 -04:00
Ravindu Somawansa
5acc7ba622
community[minor]: Add glue catalog loader (#20220)
Add Glue Catalog loader
2024-04-16 11:39:23 -04:00
Dawson Bauer
aab075345e
core[patch]: Fix imports defined in messages sub-package (#20500)
2024-04-16 14:19:51 +00:00
Erick Friis
90184255f8
core: release 0.1.43 (#20489) 2024-04-15 22:48:34 +00:00
Erick Friis
7997f3b7f8
core: forward config params to default (#20402)
nuno's fault not mine

---------

Co-authored-by: Nuno Campos <nuno@boringbits.io>
Co-authored-by: Nuno Campos <nuno@langchain.dev>
2024-04-15 15:42:39 -07:00
Nuno Campos
97b2191e99
core: Add concept of conditional edge to graph rendering (#20480)
- implement for mermaid, graphviz and ascii
- this is to be used in langgraph
2024-04-15 13:49:06 -07:00
Ángel Igareta
60c7a17781
Remove logic to exclude intermediate nodes from rendering time (#20459)
Description: For simplicity, migrate the logic of excluding intermediate
nodes in the .get_graph() of langgraph package
(https://github.com/langchain-ai/langgraph/pull/310) at graph creation
time instead of graph rendering time.

Note: #20381 needs to be approved first

---------

Co-authored-by: Angel Igareta <angel.igareta@klarna.com>
Co-authored-by: Nuno Campos <nuno@langchain.dev>
Co-authored-by: Nuno Campos <nuno@boringbits.io>
2024-04-15 16:40:51 +00:00
Ángel Igareta
d55a365c6c
Fix CDN URL in mermaid graph renderer (#20381)
Description of features on mermaid graph renderer:
- Fixing CDN to use official Mermaid JS CDN:
https://www.jsdelivr.com/package/npm/mermaid?tab=files
- Add device_scale_factor to allow increasing quality of resulting PNG.
2024-04-15 08:01:35 -07:00
Alexander Smirnov
ece008f117
docs: Refine RunnablePassthrough docstring (#19812)
Description: This update refines the documentation for
`RunnablePassthrough` by removing an unnecessary import and correcting a
minor syntactical error in the example provided. This change enhances
the clarity and correctness of the documentation, ensuring that users
have a more accurate guide to follow.

Issue: N/A

Dependencies: None

This PR focuses solely on documentation improvements, specifically
targeting the `RunnablePassthrough` class within the `langchain_core`
module. By clarifying the example provided in the docstring, users are
offered a more straightforward and error-free guide to utilizing the
`RunnablePassthrough` class effectively.

As this is a documentation update, it does not include changes that
require new integrations, tests, or modifications to dependencies. It
adheres to the guidelines of minimal package interference and backward
compatibility, ensuring that the overall integrity and functionality of
the LangChain package remain unaffected.

Thank you for considering this documentation refinement for inclusion in
the LangChain project.
2024-04-13 16:23:32 -07:00
ccurme
38faa74c23
community[patch]: update use of deprecated llm methods (#20393)
.predict and .predict_messages for BaseLanguageModel and BaseChatModel
2024-04-12 17:28:23 -04:00
Bagatur
f1248f8d9a
core[patch]: configurable init params (#20070)
Proposed fix for #20061. need to test

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-04-12 21:18:43 +00:00
Erick Friis
29282371db
core: bind_tools interface on basechatmodel (#20360) 2024-04-12 01:32:19 +00:00
Eugene Yurtsev
2900720cd3
core[patch]: Update documentation for base retriever (#20345)
Updating in code documentation for base retriever to direct folks toward
the .invoke and .ainvoke methods + explain how to implement
2024-04-11 16:20:14 -04:00
Eugene Yurtsev
653489a1a9
docs: Update documentation for custom LLMs (#19972)
Update documentation for customizing LLMs
2024-04-11 12:21:27 -04:00
Bagatur
e72330aacc
core[patch]: Release 0.1.42 (#20332) 2024-04-11 09:10:27 -07:00
ccurme
795c728f71
mistral[patch]: add IDs to tool calls (#20299)
Mistral gives us one ID per response, no individual IDs for tool calls.

```python
from langchain.agents import AgentExecutor, create_tool_calling_agent, tool
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_mistralai import ChatMistralAI


prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant"),
        ("human", "{input}"),
        MessagesPlaceholder("agent_scratchpad"),
    ]
)
model = ChatMistralAI(model="mistral-large-latest", temperature=0)

@tool
def magic_function(input: int) -> int:
    """Applies a magic function to an input."""
    return input + 2

tools = [magic_function]

agent = create_tool_calling_agent(model, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

agent_executor.invoke({"input": "what is the value of magic_function(3)?"})
```

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-04-11 11:09:30 -04:00
Eugene Yurtsev
f02f708f52
core[patch]: For now remove user warning (#20321)
Remove warning since it creates a lot of noise.
2024-04-11 10:33:01 -04:00
Bagatur
cb25fa0d55
core[patch]: fix ChatGeneration.text with content blocks (#20294) 2024-04-10 15:54:06 -07:00
Bagatur
03b247cca1
core[patch]: include tool_calls in ai msg chunk serialization (#20291) 2024-04-10 22:27:40 +00:00
Nuno Campos
15271ac832
core: mustache prompt templates (#19980)
Co-authored-by: Erick Friis <erick@langchain.dev>
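A hedged usage sketch of the mustache format (assuming `template_format="mustache"` is accepted here and `langchain-core` is installed):

```python
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages(
    [("human", "Tell me a joke about {{topic}}")],
    template_format="mustache",
)
print(prompt.invoke({"topic": "parrots"}).to_messages())
```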
2024-04-10 11:25:32 -07:00
ccurme
21c1ce0bc1
update agents to use tool call messages (#20074)
```python
from langchain.agents import AgentExecutor, create_tool_calling_agent, tool
from langchain_anthropic import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant"),
        MessagesPlaceholder("chat_history", optional=True),
        ("human", "{input}"),
        MessagesPlaceholder("agent_scratchpad"),
    ]
)
model = ChatAnthropic(model="claude-3-opus-20240229")

@tool
def magic_function(input: int) -> int:
    """Applies a magic function to an input."""
    return input + 2

tools = [magic_function]

agent = create_tool_calling_agent(model, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

agent_executor.invoke({"input": "what is the value of magic_function(3)?"})
```
```
> Entering new AgentExecutor chain...

Invoking: `magic_function` with `{'input': 3}`
responded: [{'text': '<thinking>\nThe user has asked for the value of magic_function applied to the input 3. Looking at the available tools, magic_function is the relevant one to use here, as it takes an integer input and returns an integer output.\n\nThe magic_function has one required parameter:\n- input (integer)\n\nThe user has directly provided the value 3 for the input parameter. Since the required parameter is present, we can proceed with calling the function.\n</thinking>', 'type': 'text'}, {'id': 'toolu_01HsTheJPA5mcipuFDBbJ1CW', 'input': {'input': 3}, 'name': 'magic_function', 'type': 'tool_use'}]

5
Therefore, the value of magic_function(3) is 5.

> Finished chain.
{'input': 'what is the value of magic_function(3)?',
 'output': 'Therefore, the value of magic_function(3) is 5.'}
```

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-10 11:54:51 -04:00
Erick Friis
9eb6f538f0
infra, multiple: rc release versions (#20252) 2024-04-09 17:54:58 -07:00
Bagatur
a43b9e4f33
core[patch]: Pre-release 0.1.42-rc.1 (#20248) 2024-04-09 19:10:38 -05:00
Bagatur
9514bc4d67
core[minor], ...: add tool calls message (#18947)
core[minor], langchain[patch], openai[minor], anthropic[minor], fireworks[minor], groq[minor], mistralai[minor]

```python
class ToolCall(TypedDict):
    name: str
    args: Dict[str, Any]
    id: Optional[str]

class InvalidToolCall(TypedDict):
    name: Optional[str]
    args: Optional[str]
    id: Optional[str]
    error: Optional[str]

class ToolCallChunk(TypedDict):
    name: Optional[str]
    args: Optional[str]
    id: Optional[str]
    index: Optional[int]


class AIMessage(BaseMessage):
    ...
    tool_calls: List[ToolCall] = []
    invalid_tool_calls: List[InvalidToolCall] = []
    ...


class AIMessageChunk(AIMessage, BaseMessageChunk):
    ...
    tool_call_chunks: Optional[List[ToolCallChunk]] = None
    ...
```
Important considerations:
- Parsing logic occurs within different providers;
- ~Changing output type is a breaking change for anyone doing explicit
type checking;~
- ~Langsmith rendering will need to be updated:
https://github.com/langchain-ai/langchainplus/pull/3561~
- ~Langserve will need to be updated~
- Adding chunks:
- ~AIMessage + ToolCallsMessage = ToolCallsMessage if either has
non-null .tool_calls.~
- Tool call chunks are appended, merging when having equal values of
`index`.
  - additional_kwargs accumulate the normal way.
- During streaming:
- ~Messages can change types (e.g., from AIMessageChunk to
AIToolCallsMessageChunk)~
- Output parsers parse additional_kwargs (during .invoke they read off
tool calls).

Packages outside of `partners/`:
- https://github.com/langchain-ai/langchain-cohere/pull/7
- https://github.com/langchain-ai/langchain-google/pull/123/files

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-04-09 18:41:42 -05:00
Bagatur
a07238d14e
core[patch]: Release 0.1.41 (#20233) 2024-04-09 21:11:37 +00:00
Christophe Bornet
f43b48aebc
core[minor]: Implement aformat_messages for _StringImageMessagePromptTemplate (#20036) 2024-04-09 15:59:39 -04:00
Christophe Bornet
19001e6cb9
core[minor]: Implement aformat for FewShotPromptWithTemplates (#20039) 2024-04-09 15:58:41 -04:00
William FH
039b7a472d
[core] fix: manually specifying run_id for chat models.invoke() and .ainvoke() (#20082) 2024-04-06 16:57:32 -07:00
Guangdong Liu
5a76087965
langchain-core[minor]: Allow passing local cache to language models (#19331)
After this PR it is possible to pass a cache instance directly to a
language model. This is useful for letting different language models use
different caches if needed.

- **Issue:** close #19276
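
A minimal usage sketch (ChatOpenAI is an illustrative assumption; any chat
model integration exposes the same `cache` field):

```python
# `cache` now accepts a BaseCache instance, not just a bool, so each model
# can keep its own cache rather than sharing the global one.
from langchain_core.caches import InMemoryCache
from langchain_openai import ChatOpenAI  # assumed installed

cache_a = InMemoryCache()
cache_b = InMemoryCache()

model_a = ChatOpenAI(cache=cache_a)
model_b = ChatOpenAI(cache=cache_b)

model_a.invoke("hello")  # first call hits the API and populates cache_a
model_a.invoke("hello")  # identical prompt is served from cache_a
```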

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-04-05 11:19:54 -04:00
Eugene Yurtsev
e4fc0e7502
core[patch]: Document BaseCache abstraction in code (#20046)
Document the base cache abstraction in the cache module.
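
A minimal sketch of the documented interface (the `DictCache` class is
hypothetical; the method signatures follow `langchain_core.caches.BaseCache`):

```python
# A cache maps a (prompt, llm_string) pair to previously generated results.
from typing import Any, Optional

from langchain_core.caches import RETURN_VAL_TYPE, BaseCache


class DictCache(BaseCache):
    """In-process cache backed by a plain dict (illustrative only)."""

    def __init__(self) -> None:
        self._store: dict[tuple[str, str], RETURN_VAL_TYPE] = {}

    def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]:
        return self._store.get((prompt, llm_string))

    def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
        self._store[(prompt, llm_string)] = return_val

    def clear(self, **kwargs: Any) -> None:
        self._store.clear()
```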
2024-04-05 10:56:57 -04:00
Christophe Bornet
4d8a6a27a3
core[minor]: Implement aformat_prompt and ainvoke in BasePromptTemplate (#20035) 2024-04-05 10:36:43 -04:00
Christophe Bornet
7e5c1905b1
core[minor]: Add async aformat_document method (#20037) 2024-04-05 10:29:53 -04:00
Christophe Bornet
927793d088
Merge pull request #20038
* Implement aformat_messages for ChatMessagePromptTemplate
2024-04-05 10:25:27 -04:00
Bagatur
209de0a561
anthropic[minor]: tool use (#20016) 2024-04-04 13:22:48 -07:00
Jan Nissen
31e3ecc728
core[minor]: support pydantic V2 for JSONOutputParser, allow for other sources of JSON schemas (#19716)
This PR adds support for using Pydantic v2 objects to generate the schema for
the JSONOutputParser (#19441). It also adds a `json_schema` parameter so that
users can validate against any JSON schema, not just one derived from a
Pydantic model.
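
A minimal sketch of the Pydantic v2 path (the `Joke` model is illustrative;
the `json_schema` parameter is not shown):

```python
# A Pydantic v2 model supplies the target schema for the parser.
from langchain_core.output_parsers import JsonOutputParser
from pydantic import BaseModel, Field  # Pydantic v2


class Joke(BaseModel):
    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline")


parser = JsonOutputParser(pydantic_object=Joke)
print(parser.get_format_instructions())  # embeds the JSON schema derived from Joke
print(parser.parse('{"setup": "Why?", "punchline": "Because."}'))
```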
2024-04-04 10:57:47 -04:00
Christophe Bornet
f97de4e275
core[minor]: Add aformat to FewShotPromptTemplate (#19652) 2024-04-04 10:24:55 -04:00
Utkarsha Gupte
b27f81c51c
core[patch]: mypy ignore fixes #17048 (#19931)
core/langchain_core/_api [patch]: mypy ignore fixes #17048
Related to #17048

Applied mypy fixes to the two files below:
- libs/core/langchain_core/_api/deprecation.py
- libs/core/langchain_core/_api/beta_decorator.py

Summary of fixes:

**Issue 1**
`class _deprecated_property(type(obj)):  # type: ignore`
error: Unsupported dynamic base class "type" [misc]

Fix:
1. Added an `__init__` method to `_deprecated_property` to initialize the
`fget`, `fset`, `fdel`, and `__doc__` attributes.
2. The `__get__`, `__set__`, and `__delete__` methods now use the
`self.fget`, `self.fset`, and `self.fdel` attributes to call the original
methods after emitting the warning.
3. The `finalize` function now creates an instance of `_deprecated_property`
with the `fget`, `fset`, `fdel`, and doc attributes from the original `obj`
property.

**Issue 2**
`def finalize(wrapper: Callable[..., Any], new_doc: str) -> T:  # type: ignore`
error: All conditional function variants must have identical signatures

Fix: Ensured that both definitions of the `finalize` function have the same
signature.

Twitter Handle -
https://x.com/gupteutkarsha?s=11&t=uwHe4C3PPpGRvoO5Qpm1aA
2024-04-04 10:22:38 -04:00
Erick Friis
83f62fdacf
core: fix try_load_from_hub for older langchain versions load_chain (#19964) 2024-04-03 17:00:25 +00:00
Eugene Yurtsev
d293431e10
core[minor]: Add aload to document loader (#19936)
Add aload to document loader
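
A minimal usage sketch (`TextLoader` from `langchain-community` and the file
path are illustrative assumptions):

```python
# aload() is the async counterpart of load() on document loaders.
import asyncio

from langchain_community.document_loaders import TextLoader  # assumed installed


async def main() -> None:
    loader = TextLoader("notes.txt")  # illustrative path
    docs = await loader.aload()  # same Documents that loader.load() would return
    print(len(docs))


asyncio.run(main())
```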
2024-04-03 10:46:47 -04:00
Ángel Igareta
31a641a155
core: fix return of draw_mermaid_png and change to not save image by default (#19950)
- **Description:** Improvement for #19599: fixes the missing return value of
graph.draw_mermaid_png and makes saving the rendered image optional (see the
sketch below)
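
A minimal sketch of the changed behaviour (the chain is illustrative; the
default rendering path uses the Mermaid.INK API, so it needs network access):

```python
# draw_mermaid_png now returns the PNG bytes, and writing a file is opt-in.
from langchain_core.runnables import RunnableLambda

chain = RunnableLambda(lambda x: x + 1) | RunnableLambda(lambda x: x * 2)
graph = chain.get_graph()

png_bytes = graph.draw_mermaid_png()  # bytes are returned instead of dropped
graph.draw_mermaid_png(output_file_path="chain.png")  # saving to disk is optional
```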

Co-authored-by: Angel Igareta <angel.igareta@klarna.com>
2024-04-03 06:20:35 -07:00
Bagatur
4328c54aab
core[patch]: Release 0.1.39 (#19940) 2024-04-03 00:25:56 +00:00
Nuno Campos
f4568fe0c6
core: BaseChatModel modify chat message before passing to run_manager (#19939)
2024-04-02 16:40:27 -07:00
Erick Friis
f0d5b59962
core[patch]: remove requests (#19891)
Removes required usage of `requests` from `langchain-core`, all of which
has been deprecated.

- removes Tracer V1 implementations
- removes old `try_load_from_hub` github-based hub implementations

Removal is done in a way that imports still succeed, while usage fails with a
`RuntimeError`.
2024-04-02 20:28:10 +00:00
Bagatur
3218463f6a
core[patch]: Release 0.1.38 (#19895) 2024-04-01 22:47:46 -07:00
Mohammad Mohtashim
9ae2df36fc
Core[major]: Base Tracer to propagate raw output from tool for on_tool_end (#18932)
This PR completes work for PR #18798 to expose raw tool output in
on_tool_end.

Affected APIs:
* astream_log
* astream_events
* callbacks sent to langsmith via langsmith-sdk
* Any other code that relies on BaseTracer!
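
A minimal sketch of what a custom handler sees after this change (the handler
class is hypothetical):

```python
# on_tool_end now receives the tool's raw output object rather than a
# pre-stringified copy.
from typing import Any

from langchain_core.callbacks import BaseCallbackHandler


class RawToolOutputLogger(BaseCallbackHandler):
    def on_tool_end(self, output: Any, **kwargs: Any) -> None:
        # `output` may be a dict, a list of Documents, etc. -- not only a str.
        print(f"tool returned {type(output).__name__}: {output!r}")
```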

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-02 01:24:46 +00:00
Nuno Campos
2ae6dcdf01
core: Assign missing message ids in BaseChatModel (#19863)
- This ensures ids are stable across streamed chunks
- Multiple messages in a batch call get separate ids
- Also fixes ids being dropped when combining message chunks (see the sketch below)
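
A minimal sketch of the chunk-combination fix (the id value is illustrative):

```python
# The id carried by the first chunk survives when chunks are combined, so the
# fully assembled message keeps a stable id across the stream.
from langchain_core.messages import AIMessageChunk

first = AIMessageChunk(content="Hel", id="run-123")  # illustrative id
rest = AIMessageChunk(content="lo")

merged = first + rest
print(merged.content)  # "Hello"
print(merged.id)       # "run-123" -- no longer dropped when combining chunks
```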

2024-04-02 01:18:36 +00:00
Mayank Solanki
d5c412b0a9
core: Add docs for RunnableConfigurableFields (#19849)
- [x] **docs**: core: Add docs for `RunnableConfigurableFields`

- **Description:** Added in-code docs for `RunnableConfigurableFields` with an
example (see the sketch below)
    - **Issue:** #18803 
    - **Dependencies:** NA
    - **Twitter handle:** NA
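
A minimal usage sketch (ChatOpenAI and the `llm_temperature` field id are
illustrative assumptions):

```python
# configurable_fields wraps the model in a RunnableConfigurableFields, so the
# exposed attribute can be overridden per call via the config.
from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI  # assumed installed

model = ChatOpenAI(temperature=0).configurable_fields(
    temperature=ConfigurableField(
        id="llm_temperature",
        name="LLM Temperature",
        description="Sampling temperature of the underlying model",
    )
)

model.invoke("pick a random number")  # temperature=0
model.with_config(configurable={"llm_temperature": 0.9}).invoke("pick a random number")
```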

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-04-01 20:40:10 +00:00
Ángel Igareta
c2ccf22dfd
core: generate mermaid syntax and render visual graph (#19599)
- **Description:** Add functionality to generate Mermaid syntax and
render flowcharts from graph data. This includes support for custom node
colors and edge curve styles, as well as the ability to export the
generated graphs to PNG images using either the Mermaid.INK API or
Pyppeteer for local rendering.
- **Dependencies:** Optional dependency on `pyppeteer` if rendering is done
locally with Pyppeteer and JavaScript (a sketch follows).
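
A minimal sketch of the Mermaid-syntax side (the chain is illustrative;
rendering to PNG uses either the Mermaid.INK API or a local Pyppeteer browser):

```python
# draw_mermaid() emits Mermaid flowchart syntax as a string.
from langchain_core.runnables import RunnableLambda

chain = RunnableLambda(lambda x: x + 1, name="add_one") | RunnableLambda(
    lambda x: x * 2, name="double"
)
print(chain.get_graph().draw_mermaid())  # paste into mermaid.live to view
```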

---------

Co-authored-by: Angel Igareta <angel.igareta@klarna.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-01 08:14:46 -07:00
Bagatur
08c10bd66a
core[patch]: Release 0.1.37 (#19831) 2024-03-31 14:50:39 -07:00
Harrison Chase
56525f2ac1
don't mutate metadata/tags (#19742) 2024-03-29 17:55:27 -07:00