Update docs to match latest langchain-ai21 release.
---------
Co-authored-by: Erick Friis <erick@langchain.dev>
In collaboration with @rlouf I built an
[outlines](https://dottxt-ai.github.io/outlines/latest/) integration for
LangChain!
I think this is really useful for doing any type of structured output
locally.
[Dottxt](https://dottxt.co) has put a lot of work into optimizing this
process at a lower level
([outlines-core](https://pypi.org/project/outlines-core/0.1.14/), written
in Rust), so I think this is a better alternative to all current
approaches in LangChain for doing structured output.
It also implements the `.with_structured_output` method, so it should be
a drop-in replacement for a lot of applications.
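For example, a minimal sketch of that drop-in flow (the model name and backend choice are illustrative, and I'm assuming the class is exported from `langchain_community.chat_models`):

```python
from pydantic import BaseModel

from langchain_community.chat_models import ChatOutlines


class Person(BaseModel):
    name: str
    age: int


# Model and backend are illustrative; any Outlines-supported backend works.
chat = ChatOutlines(model="microsoft/Phi-3-mini-4k-instruct", backend="transformers")
structured = chat.with_structured_output(Person)
structured.invoke("Describe a fictional person.")
```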
The integration includes:
- **Outlines LLM class**
- **ChatOutlines class**
- **Tutorial Cookbooks**
- **Documentation Page**
- **Validation and error messages**
- **Exposes Outlines Structured output features**
- **Support for multiple backends**
- **Integration and Unit Tests**
Dependencies: `outlines`, plus additional packages depending on the backend used.
I am not sure whether the unit tests comply with all requirements; if not, I
suggest simply removing them, since I don't see a useful way to do them
differently.
### Quick overview:
Chat Models:
<img width="698" alt="image"
src="https://github.com/user-attachments/assets/05a499b9-858c-4397-a9ff-165c2b3e7acc">
Structured Output:
<img width="955" alt="image"
src="https://github.com/user-attachments/assets/b9fcac11-d3e5-4698-b1ae-8c4cb3d54c45">
---------
Co-authored-by: Vadym Barda <vadym@langchain.dev>
**Description:**
This PR modifies the documentation on configuring vLLM with a LoRA
adapter. The updates aim to provide clear instructions for users on how
to set up the LoRA adapter when using vLLM.
- before
```python
VLLM(..., enable_lora=True)
```
- after
```python
VLLM(
    ...,
    vllm_kwargs={
        "enable_lora": True,
    },
)
```
This change clarifies that users should use `vllm_kwargs` to enable
the LoRA adapter.
Co-authored-by: Um Changyong <changyong.um@sfa.co.kr>
**Description:**
I added support for `lora_request` in the community package, but I forgot to
add the corresponding content to the VLLM docs page, so this PR does that now. #27731
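As a quick sketch of what the updated page covers (the model name and adapter path are placeholders):

```python
from langchain_community.llms import VLLM
from vllm.lora.request import LoRARequest

# Model and adapter path are placeholders; substitute your own.
llm = VLLM(
    model="meta-llama/Llama-2-7b-hf",
    vllm_kwargs={"enable_lora": True},
)

# Pass the LoRA adapter at call time via lora_request (added in #27731).
adapter = LoRARequest("lora_adapter", 1, "/path/to/lora_adapter")
print(llm.invoke("Hello!", lora_request=adapter))
```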
---------
Co-authored-by: Um Changyong <changyong.um@sfa.co.kr>
This PR updates the integration with OCI data science model deployment
service.
- Updated the LLM to support streaming and async calls.
- Added a chat model (see the usage sketch after this list).
- Updated tests and docs.
- Updated `libs/community/scripts/check_pydantic.sh` since the use of
`@pre_init` is removed from existing integration.
- Updated `libs/community/extended_testing_deps.txt` as this integration
requires `langchain_openai`.
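A rough usage sketch of the new chat model (the endpoint URL is a placeholder, and the class name should be checked against the updated docs):

```python
from langchain_community.chat_models import ChatOCIModelDeployment

# Placeholder endpoint; use your model deployment's predict URL.
chat = ChatOCIModelDeployment(
    endpoint="https://modeldeployment.<region>.oci.customer-oci.com/<md_ocid>/predict",
)

# Streaming was added in this update; sync and async invoke also work.
for chunk in chat.stream("Tell me a joke."):
    print(chunk.content, end="", flush=True)
```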
---------
Co-authored-by: MING KANG <ming.kang@oracle.com>
Co-authored-by: Dmitrii Cherkasov <dmitrii.cherkasov@oracle.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
**Description:** [IPEX-LLM](https://github.com/intel-analytics/ipex-llm)
is a PyTorch library for running LLMs on Intel CPUs and GPUs (e.g., a local
PC with an iGPU, or discrete GPUs such as Arc, Flex and Max) with very low
latency. This PR adds Intel GPU support to the `ipex-llm` LLM integration;
a rough usage sketch follows the file list below.
**Dependencies:** `ipex-llm`
**Contribution maintainer**: @ivy-lv11 @Oscilloscope98
**tests and docs**:
- Add: langchain/docs/docs/integrations/llms/ipex_llm_gpu.ipynb
- Update: langchain/docs/docs/integrations/llms/ipex_llm_gpu.ipynb
- Update: langchain/libs/community/tests/llms/test_ipex_llm.py
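A rough sketch of the usage pattern, assuming an Intel GPU environment set up per the new notebook (the model ID is illustrative):

```python
from langchain_community.llms import IpexLLM

# Illustrative model; device setup for Intel GPU follows the new notebook.
llm = IpexLLM.from_model_id(
    model_id="lmsys/vicuna-7b-v1.5",
    model_kwargs={"temperature": 0, "max_length": 64, "trust_remote_code": True},
)
print(llm.invoke("What is IPEX-LLM?"))
```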
---------
Co-authored-by: ivy-lv11 <zhicunlv@gmail.com>
I have validated that the LangChain interface to TEI/TGI works as expected
with TEI and TGI running on Intel Gaudi2. Adding some references to notebooks
to help users find the relevant info.
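For reference, the pattern validated looks roughly like this (the localhost URLs are placeholders for the TGI and TEI servers running on Gaudi2):

```python
from langchain_huggingface import HuggingFaceEndpoint
from langchain_huggingface.embeddings import HuggingFaceEndpointEmbeddings

# Placeholder URLs; point these at your TGI and TEI servers on Gaudi2.
llm = HuggingFaceEndpoint(endpoint_url="http://localhost:8080")
embeddings = HuggingFaceEndpointEmbeddings(model="http://localhost:8081")

print(llm.invoke("What is Intel Gaudi2?"))
print(len(embeddings.embed_query("What is Intel Gaudi2?")))
```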
---------
Co-authored-by: Rita Brugarolas <rbrugaro@idc708053.jf.intel.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
The new `langchain-ollama` package seems pretty well implemented, but I
noticed the docs were still outdated, so I decided to fix them up a bit.
- Llama 3.1 was released on the 23rd of July:
https://ai.meta.com/blog/meta-llama-3-1/
- Ollama has supported tool calling since the 25th of July:
https://ollama.com/blog/tool-support
- The LangChain Ollama partner package was released on the 1st of August:
https://pypi.org/project/langchain-ollama/
**Problem**: Docs reference `langchain-community` instead of `langchain-ollama`.
**Solution**: Update the docs at
https://python.langchain.com/v0.2/docs/integrations/chat/ollama/
**Problem**: OllamaFunctions is deprecated, as noted on
[Integrations](https://python.langchain.com/v0.2/docs/integrations/chat/ollama_functions/):
This was an experimental wrapper that attempts to bolt-on tool calling
support to models that do not natively support it. The [primary Ollama
integration](https://python.langchain.com/v0.2/docs/integrations/chat/ollama/) now
supports tool calling, and should be used instead.
**Solution**: Delete the old notebook from the repo and update the existing
one with `@tool` decorator + Pydantic examples.
**Problem**: Llama 3.1 has been released, but the llama3-groq-tool-call
fine-tune is still what's referenced in the notebooks.
**Solution**: Update docs + notebooks to Llama 3.1 (which has improved
tool calling support).
**Problem**: Install instructions are incomplete; there is no information
on downloading a model and/or running the Ollama server.
**Solution**: Add simple instructions to start the Ollama service and
pull a model for tool calling (see the sketch below).
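A minimal sketch of the tool-calling flow the updated notebook demonstrates (assumes `ollama serve` is running and `ollama pull llama3.1` has been done):

```python
from langchain_core.tools import tool
from langchain_ollama import ChatOllama


@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b


# Bind the tool and let llama3.1 decide when to call it.
llm = ChatOllama(model="llama3.1").bind_tools([multiply])
print(llm.invoke("What is 6 times 7?").tool_calls)
```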
---------
Co-authored-by: Chester Curme <chester.curme@gmail.com>
Check whether the API key is already set in the environment before prompting for it.
Update:
```python
import getpass
import os
os.environ["DATABRICKS_HOST"] = "https://your-workspace.cloud.databricks.com"
os.environ["DATABRICKS_TOKEN"] = getpass.getpass("Enter your Databricks access token: ")
```
To:
```python
import getpass
import os
os.environ["DATABRICKS_HOST"] = "https://your-workspace.cloud.databricks.com"
if "DATABRICKS_TOKEN" not in os.environ:
os.environ["DATABRICKS_TOKEN"] = getpass.getpass(
"Enter your Databricks access token: "
)
```
grit migration:
```
engine marzano(0.1)
language python
`os.environ[$Q] = getpass.getpass("$X")` as $CHECK where {
    $CHECK <: ! within if_statement(),
    $CHECK => `if $Q not in os.environ:\n $CHECK`
}
```
- **Description:** Runhouse recently migrated from Read the Docs to a
self-hosted solution. This PR updates a broken link from the old docs to
www.run.house/docs. Also changed "The Runhouse" to "Runhouse" (it's
cleaner).
- **Issue:** None
- **Dependencies:** None
- [x] **PR title**: "community: add Yi LLM", "docs: add Yi documentation"
- [x] **PR message**:
  - **Description:** This PR adds support for the Yi model to LangChain
(usage sketch below).
  - **Dependencies:** `langchain_core`, `requests`, `contextlib`, `typing`,
`logging`, `json`, `langchain_community`
  - **Twitter handle:** 01.AI
- [x] **Add tests and docs**: I've added the corresponding documentation
to the relevant paths
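A rough usage sketch (the env var name and model identifier follow this integration's docs and are otherwise assumptions):

```python
import os

from langchain_community.llms import YiLLM

# Assumes an API key from the Yi platform.
os.environ["YI_API_KEY"] = "your-api-key"

llm = YiLLM(model="yi-large")  # model name is illustrative
print(llm.invoke("Summarize the Yi model family in one sentence."))
```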
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
- [x] **PR title**: Update IBM docs with information on passing a client
into the WatsonxLLM and WatsonxEmbeddings objects.
- [x] **PR message**:
  - **Description:** Update IBM docs with information on how to pass an
existing API client into the WatsonxLLM and WatsonxEmbeddings objects
(rough sketch below).
- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
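A rough sketch of the client-passing pattern the docs now describe (the URL, API key, project ID, and model ID are all placeholders):

```python
from ibm_watsonx_ai import APIClient, Credentials
from langchain_ibm import WatsonxLLM

# Placeholder credentials; substitute your own watsonx.ai values.
api_client = APIClient(
    credentials=Credentials(
        url="https://us-south.ml.cloud.ibm.com",
        api_key="your-api-key",
    ),
    project_id="your-project-id",
)

# Reuse the same client for the LLM (and analogously for WatsonxEmbeddings).
llm = WatsonxLLM(model_id="ibm/granite-13b-instruct-v2", watsonx_client=api_client)
```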
Description: Added LangChain v0.2 support to the PipelineAI integration:
removed deprecated classes, replaced LLMChain with the Runnable interface,
and added StrOutputParser, which parses an LLMResult into the most likely
string. A rough sketch of the updated pattern is below.
Issue: None
Dependencies: None.
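The updated composition looks roughly like this (the pipeline key is a placeholder for your deployed PipelineAI pipeline):

```python
from langchain_community.llms import PipelineAI
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate

# Placeholder pipeline key; substitute your deployed pipeline.
llm = PipelineAI(pipeline_key="public/gpt-j:base")
prompt = PromptTemplate.from_template(
    "What is a good name for a company that makes {product}?"
)

# LLMChain replaced by the Runnable (LCEL) composition below.
chain = prompt | llm | StrOutputParser()
print(chain.invoke({"product": "colorful socks"}))
```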
---------
Co-authored-by: Chester Curme <chester.curme@gmail.com>
- Update Meta Llama 3 cookbook link
- Add prereq section and information on `messages_modifier` to LangGraph
migration guide
- Update `PydanticToolsParser` explanation and entrypoint in tool
calling guide
- Add more obvious warning to `OllamaFunctions`
- Fix Wikidata tool install flow
- Update Bedrock LLM initialization
@baskaryan can you add a bit of information on how to authenticate with
the `ChatBedrock` and `BedrockLLM` models? I wasn't able to figure it
out :(