Mirror of https://github.com/hwchase17/langchain.git (synced 2025-05-29 19:18:53 +00:00)
- **Description**: [`bigdl-llm`](https://github.com/intel-analytics/BigDL) is a library for running LLMs on Intel XPU (from laptop to GPU to cloud) with very low latency, using INT4/FP4/INT8/FP8 quantization for any PyTorch model. This PR adds the `bigdl-llm` integration to LangChain.
- **Issue**: NA
- **Dependencies**: `bigdl-llm` library
- **Contribution maintainer**: @shane-huang

Examples added:
- docs/docs/integrations/llms/bigdl.ipynb
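As a rough illustration of how such an integration is typically used, here is a minimal sketch. It assumes the integration exposes a `BigdlLLM` class with a `from_model_id` constructor (following the pattern of other LangChain self-hosted LLM wrappers and the notebook example above); the import path, model ID, and keyword arguments are assumptions, not confirmed API.

```python
# Hedged sketch: the class name, import path, and kwargs below are
# assumptions modeled on the notebook added in this PR
# (docs/docs/integrations/llms/bigdl.ipynb); the real API may differ.
try:
    # Requires `pip install bigdl-llm` plus the LangChain integration.
    from langchain.llms import BigdlLLM  # assumed import path
    HAVE_BIGDL = True
except ImportError:
    HAVE_BIGDL = False


def build_llm(model_id: str = "lmsys/vicuna-7b-v1.5"):
    """Load a model with bigdl-llm low-bit optimizations, if available.

    Returns None when the library is not installed, so the sketch can
    degrade gracefully on machines without an Intel XPU setup.
    """
    if not HAVE_BIGDL:
        return None
    # from_model_id mirrors the HuggingFacePipeline-style constructor;
    # model_kwargs values here are illustrative only.
    return BigdlLLM.from_model_id(
        model_id=model_id,
        model_kwargs={"temperature": 0, "max_length": 64},
    )


llm = build_llm()
print("bigdl-llm available:", HAVE_BIGDL)
```

If the library is present, the returned object plugs into LangChain chains like any other `LLM`, e.g. `llm.invoke("What is AI?")`.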
LangChain Documentation
For more information on contributing to our documentation, see the Documentation Contributing Guide.