Commit Graph

3751 Commits

Author SHA1 Message Date
Nicolas
ad04585e30
community[minor]: Firecrawl.dev integration (#20364)
Added the [FireCrawl](https://firecrawl.dev) document loader. Firecrawl
crawls and converts any website into LLM-ready data: it crawls all
accessible subpages and gives you clean markdown for each.

    - **Description:** Adds FireCrawl data loader
    - **Dependencies:** firecrawl-py
    - **Twitter handle:** @mendableai 

ccing contributors: (@ericciarla @nickscamara)
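
A minimal usage sketch of the new loader (the API key below is a placeholder, and `mode="crawl"` is assumed to select the crawling behavior described above):

```python
from langchain_community.document_loaders import FireCrawlLoader

# Hedged sketch: crawl a site and get each accessible subpage back as a
# markdown Document. The api_key value below is a placeholder.
loader = FireCrawlLoader(url="https://firecrawl.dev", api_key="fc-...", mode="crawl")
docs = loader.load()
print(len(docs), docs[0].page_content[:200])
```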

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-12 19:13:48 +00:00
Tomaz Bratanic
a1b105ac00
experimental[patch]: Skip pydantic validation for llm graph transformer and fix JSON response where possible (#19915)
LLMs might sometimes return an invalid response for the LLM graph
transformer. Instead of failing on pydantic validation, we skip it and
manually check and, where possible, fix errors so that more information
gets extracted.
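
A minimal sketch of the pattern described above, using hypothetical names rather than the transformer's actual code: validate when possible, otherwise fall back to a manual check that salvages what it can.

```python
from typing import Optional
from pydantic import BaseModel, ValidationError


class Node(BaseModel):
    """Stand-in for a graph node schema (illustrative only)."""
    id: str
    type: str


def parse_node(raw: dict) -> Optional[dict]:
    try:
        Node(**raw)  # strict pydantic validation
        return raw
    except ValidationError:
        # Manual fix-up: coerce fields to strings where possible instead of
        # dropping the node, so more information gets extracted.
        if raw.get("id") is not None:
            return {"id": str(raw["id"]), "type": str(raw.get("type") or "Node")}
        return None  # nothing recoverable


print(parse_node({"id": 123, "type": None}))  # -> {'id': '123', 'type': 'Node'}
```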
2024-04-12 11:29:25 -07:00
P. Taylor Goetz
9317df7f16
community[patch]: Add "model" attribute to the payload sent to Ollama in ChatOllama (#20354)
Example Ollama API calls:

Request without "model":
```
curl --location 'http://localhost:11434/api/chat' \
--header 'Content-Type: application/json' \
--data '{
  "messages": [
    {
      "role": "user",
      "content": "What is the capitol of PA?"
    }
  ],
  "stream": false
}'
```
Response:
```
{"error":"model is required"}
```

Request with "model":
```
curl --location 'http://localhost:11434/api/chat' \
--header 'Content-Type: application/json' \
--data '{
  "model": "openchat",
  "messages": [
    {
      "role": "user",
      "content": "What is the capitol of PA?"
    }
  ],
  "stream": false
}'
```

Response:
```
{
  "eval_duration" : 733248000,
  "created_at" : "2024-04-11T23:04:08.735766843Z",
  "model" : "openchat",
  "message" : {
    "content" : " The capital city of Pennsylvania is Harrisburg.",
    "role" : "assistant"
  },
  "total_duration" : 3138731168,
  "prompt_eval_count" : 25,
  "load_duration" : 466562959,
  "done" : true,
  "prompt_eval_duration" : 1938495000,
  "eval_count" : 10
}
```
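
For reference, a minimal `ChatOllama` sketch exercising the same endpoint; it assumes a local Ollama server with the `openchat` model pulled. With this patch, the `model` value below is included in the `/api/chat` payload.

```python
from langchain_community.chat_models import ChatOllama

# Assumes an Ollama server at the default base_url with "openchat" available.
chat = ChatOllama(model="openchat", base_url="http://localhost:11434")
print(chat.invoke("What is the capital of PA?").content)
```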
2024-04-12 13:32:53 -04:00
Alex Sherstinsky
fad0962643
community: for Predibase -- enable both Predibase-hosted and HuggingFace-hosted fine-tuned adapter repositories (#20370) 2024-04-12 08:32:00 -07:00
Eugene Yurtsev
6470b30173
langchain[patch]: Add deprecation warning to extraction chains (#20224)
Add deprecation warnings to extraction chains
2024-04-12 10:24:32 -04:00
Eugene Yurtsev
b65a1d4cfd
langchain[patch]: Add another unit test for indexing code (#20387)
Add another unit test for indexing
2024-04-12 10:19:18 -04:00
Erick Friis
29282371db
core: bind_tools interface on basechatmodel (#20360) 2024-04-12 01:32:19 +00:00
Erick Friis
e6806a08d4
multiple: standard chat model tests (#20359) 2024-04-11 18:23:13 -07:00
Isak Nyberg
bac9fb9a7c
community: add gpt-4 pricing in callback (#20292)
Added the pricing for `gpt-4-turbo` and `gpt-4-turbo-2024-04-09` in the
callback method.
related to issue #17173 

https://openai.com/pricing#language-models
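
A minimal usage sketch, assuming the callback is used with a `gpt-4-turbo` chat model; with the new pricing entries the handler can report an estimated cost alongside the token counts.

```python
from langchain_community.callbacks import get_openai_callback
from langchain_openai import ChatOpenAI

# Requires OPENAI_API_KEY in the environment; model name per the pricing added above.
llm = ChatOpenAI(model="gpt-4-turbo")
with get_openai_callback() as cb:
    llm.invoke("Say hello in one word.")
    print(cb.total_tokens, cb.total_cost)
```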
2024-04-11 18:02:39 -04:00
Leonid Ganeline
7cf2d2759d
community[patch]: docstrings update (#20301)
Added missing docstrings. Formatted docstrings to a consistent form.
2024-04-11 16:23:27 -04:00
Eugene Yurtsev
2900720cd3
core[patch]: Update documentation for base retriever (#20345)
Updating in-code documentation for the base retriever to direct folks toward
the .invoke and .ainvoke methods and to explain how to implement a custom retriever
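
A minimal sketch of a custom retriever along the lines the updated documentation describes: implement `_get_relevant_documents` and call the retriever through `.invoke` / `.ainvoke`. The class and its naive matching are illustrative only.

```python
from typing import List

from langchain_core.callbacks import CallbackManagerForRetrieverRun
from langchain_core.documents import Document
from langchain_core.retrievers import BaseRetriever


class ToyRetriever(BaseRetriever):
    """Illustrative retriever that does naive substring matching."""

    documents: List[Document]
    k: int = 2

    def _get_relevant_documents(
        self, query: str, *, run_manager: CallbackManagerForRetrieverRun
    ) -> List[Document]:
        hits = [d for d in self.documents if query.lower() in d.page_content.lower()]
        return hits[: self.k]


docs = [Document(page_content="Harrisburg is the capital of Pennsylvania.")]
print(ToyRetriever(documents=docs).invoke("capital"))
```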
2024-04-11 16:20:14 -04:00
Erick Friis
ec0273fc92
chroma: release 0.1.0 (#20355) 2024-04-11 12:39:52 -07:00
Erick Friis
da707d0755
chroma: remove relevance score int test (#20346)
deprecating feature in #20302
2024-04-11 11:29:33 -07:00
Bagatur
6608089030
langchain[patch]: Release 0.1.16 (#20335) 2024-04-11 09:28:37 -07:00
Eugene Yurtsev
653489a1a9
docs: Update documentation for custom LLMs (#19972)
Update documentation for customizing LLMs
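
A minimal sketch in the spirit of the updated docs: subclass `LLM` and implement `_call` and `_llm_type`. The `EchoLLM` class below is made up for illustration.

```python
from typing import Any, List, Optional

from langchain_core.callbacks import CallbackManagerForLLMRun
from langchain_core.language_models.llms import LLM


class EchoLLM(LLM):
    """Illustrative custom LLM that just echoes a prefix of the prompt."""

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        return prompt[:20]

    @property
    def _llm_type(self) -> str:
        return "echo-llm"


print(EchoLLM().invoke("Tell me a joke about retrievers"))
```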
2024-04-11 12:21:27 -04:00
Bagatur
799714c629
release anthropic, fireworks, openai, groq, mistral (#20333) 2024-04-11 09:19:52 -07:00
Bagatur
e72330aacc
core[patch]: Release 0.1.42 (#20332) 2024-04-11 09:10:27 -07:00
ccurme
795c728f71
mistral[patch]: add IDs to tool calls (#20299)
Mistral gives us one ID per response, no individual IDs for tool calls.

```python
from langchain.agents import AgentExecutor, create_tool_calling_agent, tool
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_mistralai import ChatMistralAI


prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant"),
        ("human", "{input}"),
        MessagesPlaceholder("agent_scratchpad"),
    ]
)
model = ChatMistralAI(model="mistral-large-latest", temperature=0)

@tool
def magic_function(input: int) -> int:
    """Applies a magic function to an input."""
    return input + 2

tools = [magic_function]

agent = create_tool_calling_agent(model, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

agent_executor.invoke({"input": "what is the value of magic_function(3)?"})
```

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-04-11 11:09:30 -04:00
Eugene Yurtsev
22fd844e8a
community[patch]: Add deprecation warnings to postgres implementation (#20222)
Add deprecation warnings to the postgres implementations that are now in langchain-postgres.
2024-04-11 10:33:22 -04:00
Eugene Yurtsev
f02f708f52
core[patch]: For now remove user warning (#20321)
Remove warning since it creates a lot of noise.
2024-04-11 10:33:01 -04:00
Bagatur
c706689413
openai[patch]: use tool_calls in request (#20272) 2024-04-11 03:55:52 -07:00
Bagatur
e936fba428
langchain[patch]: agents check prompt partial vars (#20303) 2024-04-11 03:55:09 -07:00
Bagatur
cb25fa0d55
core[patch]: fix ChatGeneration.text with content blocks (#20294) 2024-04-10 15:54:06 -07:00
Bagatur
03b247cca1
core[patch]: include tool_calls in ai msg chunk serialization (#20291) 2024-04-10 22:27:40 +00:00
Erick Friis
0fa551c278
chroma: bump rc, keep optional (#20298) 2024-04-10 14:22:56 -07:00
Erick Friis
16f8fff14f
chroma: add required fastapi dep to restrict to <1 (#20297) 2024-04-10 14:16:13 -07:00
Erick Friis
991fd82532
chroma: add optional fastapi dep to restrict to <1 (#20295) 2024-04-10 12:49:44 -07:00
killind-dev
f8a54d1d73
chroma: Add chroma partner package (#19292)
**Description:** Adds chroma to the partners package. Tests & code
mirror those in the community package.
**Dependencies:** None
**Twitter handle:** @akiradev0x

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-04-10 19:33:45 +00:00
Yuki Watanabe
eef19954f3
core[patch]: fix duplicated kwargs in _load_sql_database_chain (#19908)
`kwargs` is specified twice in [this
line](3218463f6a/libs/langchain/langchain/chains/loading.py (L386)),
causing a runtime error when passing any keyword arguments.
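
An illustrative sketch (hypothetical function names, not the actual loading code) of the failure mode: supplying a keyword explicitly and again through `**kwargs` raises a `TypeError` at call time.

```python
def build_chain(llm=None, **kwargs):
    return {"llm": llm, **kwargs}


kwargs = {"llm": "my-llm", "verbose": True}
try:
    build_chain(llm=kwargs.get("llm"), **kwargs)  # "llm" is passed twice
except TypeError as err:
    print(err)  # got multiple values for keyword argument 'llm'
```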
2024-04-10 12:20:28 -07:00
Nuno Campos
15271ac832
core: mustache prompt templates (#19980)
Co-authored-by: Erick Friis <erick@langchain.dev>
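
A minimal sketch of the new format, assuming it is selected with `template_format="mustache"`; variables then use `{{double_braces}}` instead of the default f-string `{braces}`.

```python
from langchain_core.prompts import PromptTemplate

# Assumes "mustache" is now an accepted template_format alongside "f-string" and "jinja2".
prompt = PromptTemplate.from_template(
    "Tell me a {{adjective}} joke about {{topic}}.", template_format="mustache"
)
print(prompt.format(adjective="short", topic="prompt templates"))
```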
2024-04-10 11:25:32 -07:00
Leonid Ganeline
4cb5f4c353
community[patch]: import flattening fix (#20110)
This PR should make it easier for linters to do type checking and for IDEs to jump to definition of code.

See #20050 as a template for this PR.
- As a byproduct: added 3 missing `test_imports`.
- Added the missing `SolarChat` to `__init__.py` and to the `test_imports`
unit test.
- Added `# type: ignore` to fix linting. It is not clear why linting
errors appear after the above changes.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-04-10 13:01:19 -04:00
Yuki Oshima
12190ad728
openai[patch]: Fix langchain-openai unknown parameter error with gpt-4-turbo (#20271)
**Description:** 

I fixed a langchain-openai unknown parameter error with gpt-4-turbo.

It seems that the behavior of the Chat Completions API implicitly
changed with the latest gpt-4-turbo model, differing from previous
models: it now appears to reject parameters that are not listed in the
[API
Reference](https://platform.openai.com/docs/api-reference/chat/create),
so I found and fixed the resulting errors.

**Issue:** https://github.com/langchain-ai/langchain/issues/20264

**Dependencies:** none

**Twitter handle:** https://twitter.com/oshima_123
2024-04-10 09:51:38 -07:00
ccurme
21c1ce0bc1
update agents to use tool call messages (#20074)
```python
from langchain.agents import AgentExecutor, create_tool_calling_agent, tool
from langchain_anthropic import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant"),
        MessagesPlaceholder("chat_history", optional=True),
        ("human", "{input}"),
        MessagesPlaceholder("agent_scratchpad"),
    ]
)
model = ChatAnthropic(model="claude-3-opus-20240229")

@tool
def magic_function(input: int) -> int:
    """Applies a magic function to an input."""
    return input + 2

tools = [magic_function]

agent = create_tool_calling_agent(model, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

agent_executor.invoke({"input": "what is the value of magic_function(3)?"})
```
```
> Entering new AgentExecutor chain...

Invoking: `magic_function` with `{'input': 3}`
responded: [{'text': '<thinking>\nThe user has asked for the value of magic_function applied to the input 3. Looking at the available tools, magic_function is the relevant one to use here, as it takes an integer input and returns an integer output.\n\nThe magic_function has one required parameter:\n- input (integer)\n\nThe user has directly provided the value 3 for the input parameter. Since the required parameter is present, we can proceed with calling the function.\n</thinking>', 'type': 'text'}, {'id': 'toolu_01HsTheJPA5mcipuFDBbJ1CW', 'input': {'input': 3}, 'name': 'magic_function', 'type': 'tool_use'}]

5
Therefore, the value of magic_function(3) is 5.

> Finished chain.
{'input': 'what is the value of magic_function(3)?',
 'output': 'Therefore, the value of magic_function(3) is 5.'}
```

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-10 11:54:51 -04:00
Erick Friis
9eb6f538f0
infra, multiple: rc release versions (#20252) 2024-04-09 17:54:58 -07:00
Bagatur
0d0458d1a7
mistralai[patch]: Pre-release 0.1.2-rc.1 (#20251) 2024-04-10 00:25:38 +00:00
Bagatur
e4046939d0
anthropic[patch]: Pre-release 0.1.8-rc.1 (#20250) 2024-04-10 00:23:10 +00:00
Bagatur
a8eb0f5b1b
openai[patch]: pre-release 0.1.3-rc.1 (#20249) 2024-04-10 00:22:08 +00:00
Bagatur
a43b9e4f33
core[patch]: Pre-release 0.1.42-rc.1 (#20248) 2024-04-09 19:10:38 -05:00
Bagatur
9514bc4d67
core[minor], ...: add tool calls message (#18947)
core[minor], langchain[patch], openai[minor], anthropic[minor], fireworks[minor], groq[minor], mistralai[minor]

```python
class ToolCall(TypedDict):
    name: str
    args: Dict[str, Any]
    id: Optional[str]

class InvalidToolCall(TypedDict):
    name: Optional[str]
    args: Optional[str]
    id: Optional[str]
    error: Optional[str]

class ToolCallChunk(TypedDict):
    name: Optional[str]
    args: Optional[str]
    id: Optional[str]
    index: Optional[int]


class AIMessage(BaseMessage):
    ...
    tool_calls: List[ToolCall] = []
    invalid_tool_calls: List[InvalidToolCall] = []
    ...


class AIMessageChunk(AIMessage, BaseMessageChunk):
    ...
    tool_call_chunks: Optional[List[ToolCallChunk]] = None
    ...
```
Important considerations:
- Parsing logic occurs within different providers;
- ~Changing output type is a breaking change for anyone doing explicit
type checking;~
- ~Langsmith rendering will need to be updated:
https://github.com/langchain-ai/langchainplus/pull/3561~
- ~Langserve will need to be updated~
- Adding chunks:
- ~AIMessage + ToolCallsMessage = ToolCallsMessage if either has
non-null .tool_calls.~
- Tool call chunks are appended, merging when they have equal values of
`index` (see the sketch below).
  - additional_kwargs accumulate the normal way.
- During streaming:
- ~Messages can change types (e.g., from AIMessageChunk to
AIToolCallsMessageChunk)~
- Output parsers parse additional_kwargs (during .invoke they read off
tool calls).

Packages outside of `partners/`:
- https://github.com/langchain-ai/langchain-cohere/pull/7
- https://github.com/langchain-ai/langchain-google/pull/123/files
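
A minimal sketch of the chunk-merging behavior noted above, assuming `ToolCallChunk` is exported from `langchain_core.messages`: two streamed chunks whose tool call pieces share the same `index` have their partial `args` strings concatenated when the chunks are added.

```python
from langchain_core.messages import AIMessageChunk, ToolCallChunk

# Two partial chunks for the same tool call (same index=0).
first = AIMessageChunk(
    content="",
    tool_call_chunks=[ToolCallChunk(name="magic_function", args='{"input"', id="call_1", index=0)],
)
second = AIMessageChunk(
    content="",
    tool_call_chunks=[ToolCallChunk(name=None, args=": 3}", id=None, index=0)],
)

merged = first + second
# Expected (per the description above): a single chunk with args='{"input": 3}'.
print(merged.tool_call_chunks)
```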

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-04-09 18:41:42 -05:00
Erick Friis
00552918ac
groq: xfail tool_choice tests (#20247) 2024-04-09 23:29:59 +00:00
Bagatur
2d83505be9
experimental[patch]: Release 0.0.57 (#20243) 2024-04-09 17:08:01 -05:00
Bagatur
f06cb59ab9
groq[patch]: Release 0.1.1 (#20242) 2024-04-09 21:59:58 +00:00
Bagatur
0b2f0307d7
openai[patch]: Release 0.1.2 (#20241) 2024-04-09 21:55:19 +00:00
Bagatur
4b84c9b28c
anthropic[patch]: Release 0.1.7 (#20240) 2024-04-09 21:53:16 +00:00
Bagatur
74d04a4e80
mistralai[patch]: Release 0.1.1 (#20239) 2024-04-09 21:53:01 +00:00
Bagatur
e5913c8758
langchain[patch]: Release 0.1.15 (#20237) 2024-04-09 21:50:32 +00:00
Bagatur
e39fdfddf1
community[patch]: Release 0.0.32 (#20236) 2024-04-09 21:37:10 +00:00
Bagatur
a07238d14e
core[patch]: Release 0.1.41 (#20233) 2024-04-09 21:11:37 +00:00
Chip Davis
806d4ae48f
community[patch]: fixed multithreading returning List[List[Documents]] instead of List[Documents] (#20230)
Description: When multithreading is set to True in the DirectoryLoader,
there was a bug that caused the return type to be a doubly nested list.
This meant other places upstream could no longer use the from_documents
method, since the result was a `List[List[Document]]` instead of a
`List[Document]`. The change is simply to loop through `future.result()`
and yield every item (see the sketch below).
Issue: #20093
Dependencies: N/A
Twitter handle: N/A
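
An illustrative sketch (hypothetical helper names, not the loader's actual code) of the fix described above: each future returns a list of documents, so yielding the individual items rather than the list keeps the overall result flat.

```python
from concurrent.futures import ThreadPoolExecutor


def lazy_load_concurrently(paths, load_one, max_workers=4):
    """Yield documents from each path's loader result, flattening as we go."""
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        futures = [executor.submit(load_one, path) for path in paths]
        for future in futures:
            for doc in future.result():  # yield items, not the whole list
                yield doc


# Toy usage: load_one returns a list per path; the generator stays flat.
docs = list(lazy_load_concurrently(["a.txt", "b.txt"], lambda p: [f"doc from {p}"]))
print(docs)  # ['doc from a.txt', 'doc from b.txt']
```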
2024-04-09 17:06:37 -04:00
Eugene Yurtsev
fe35e13083
langchain[patch]: Update unit test (#20228)
This unit test likely fails validation by the openai client.

The newer openai library does more validation, so the existing test
fails since http_client needs to be an httpx instance.
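
A minimal sketch of the constraint being exercised, assuming `ChatOpenAI` from `langchain_openai`: `http_client` has to be a real `httpx` client instance rather than a mock (the api_key below is a placeholder).

```python
import httpx
from langchain_openai import ChatOpenAI

# Newer openai clients validate http_client, so pass an actual httpx.Client.
llm = ChatOpenAI(api_key="test-key", http_client=httpx.Client())
```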
2024-04-09 16:44:23 -04:00