community[minor]: add BigDL-LLM integrations (#17953)
Author: Shengsheng Huang (commit 5c9ae435f6)
- **Description**:
[`bigdl-llm`](https://github.com/intel-analytics/BigDL) is a library for
running LLMs on Intel XPU (from laptop to GPU to cloud) with very low
latency, using INT4/FP4/INT8/FP8 quantization for any PyTorch model. This PR
adds a `bigdl-llm` integration to LangChain; a usage sketch follows the
example list below.
- **Issue**: N/A
- **Dependencies**: `bigdl-llm` library
- **Contribution maintainer**: @shane-huang
 
Examples added:
- docs/docs/integrations/llms/bigdl.ipynb
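
A minimal sketch of how the new wrapper would be used from LangChain, assuming the integration follows the pattern in the example notebook above and in similar community LLM wrappers. The import path `langchain_community.llms.bigdl`, the `BigdlLLM.from_model_id` constructor, and the model id `lmsys/vicuna-7b-v1.5` are illustrative assumptions; the notebook is the authoritative reference.

```python
# Sketch only: class name, import path, and from_model_id signature are
# assumptions modeled on the bigdl.ipynb example notebook.
from langchain_core.prompts import PromptTemplate
from langchain_community.llms.bigdl import BigdlLLM

# Load a Hugging Face causal LM and let bigdl-llm convert it to a low-bit
# (e.g. INT4) format for low-latency inference on Intel hardware.
llm = BigdlLLM.from_model_id(
    model_id="lmsys/vicuna-7b-v1.5",  # hypothetical example model
    model_kwargs={"temperature": 0, "max_length": 64, "trust_remote_code": True},
)

# Once constructed, it behaves like any other LangChain LLM.
prompt = PromptTemplate.from_template("USER: {question}\nASSISTANT:")
chain = prompt | llm
print(chain.invoke({"question": "What is the capital of France?"}))
```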

LangChain Documentation

For more information on contributing to our documentation, see the Documentation Contributing Guide.