- **Description**: [`bigdl-llm`](https://github.com/intel-analytics/BigDL) is a library for running LLMs on Intel XPUs (from laptop to GPU to cloud) with very low latency, using INT4/FP4/INT8/FP8 quantization for any PyTorch model. This PR adds a `bigdl-llm` integration to LangChain; a hedged usage sketch follows the list below.
- **Issue**: NA
- **Dependencies**: `bigdl-llm` library
- **Contribution maintainer**: @shane-huang

Examples added:
- docs/docs/integrations/llms/bigdl.ipynb
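To give a quick sense of the shape of the integration, here is a minimal sketch. The module path `langchain_community.llms.bigdl_llm`, the `BigdlLLM.from_model_id` constructor, and the model id used below are assumptions based on how similar local-LLM wrappers are exposed; the notebook added in this PR (docs/docs/integrations/llms/bigdl.ipynb) is the authoritative example.

```python
# Minimal sketch (assumed API): load a Hugging Face model via bigdl-llm,
# converting it to a low-bit format so it runs with low latency on Intel hardware.
from langchain_community.llms.bigdl_llm import BigdlLLM  # assumed import path

llm = BigdlLLM.from_model_id(               # assumed constructor name
    model_id="lmsys/vicuna-7b-v1.5",        # hypothetical causal-LM model id
    model_kwargs={"temperature": 0, "max_length": 64, "trust_remote_code": True},
)

# The wrapper behaves like any other LangChain LLM once constructed.
print(llm.invoke("What is the capital of France?"))
```

If the integration follows the pattern of other local pipeline wrappers (e.g. `HuggingFacePipeline`), the `from_model_id` call would load and quantize the model in one step rather than requiring a separately prepared checkpoint, which keeps the notebook example self-contained.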
## LangChain Documentation

For more information on contributing to our documentation, see the Documentation Contributing Guide.