langchain/docs/docs
Cheng, Penghui cc407e8a1b
community[minor]: weight only quantization with intel-extension-for-transformers. (#14504)
Support weight-only quantization with intel-extension-for-transformers.
[Intel® Extension for
Transformers](https://github.com/intel/intel-extension-for-transformers)
is an innovative toolkit for accelerating Transformer-based models on Intel
platforms, and is particularly effective on [4th Gen Intel® Xeon® Scalable
processors](https://www.intel.com/content/www/us/en/products/docs/processors/xeon-accelerated/4th-gen-xeon-scalable-processors.html)
(codenamed Sapphire Rapids). The toolkit provides the following key
features:

* Seamless user experience of model compression on Transformer-based
models by extending [Hugging Face
transformers](https://github.com/huggingface/transformers) APIs and
leveraging [Intel® Neural
Compressor](https://github.com/intel/neural-compressor).
* Advanced software optimizations and a unique compression-aware runtime.
* Optimized Transformer-based model packages.
* [NeuralChat](https://github.com/intel/intel-extension-for-transformers/blob/main/intel_extension_for_transformers/neural_chat),
a customizable chatbot framework for creating your own chatbot within
minutes by leveraging a rich set of plugins and SOTA optimizations.
* [Inference](https://github.com/intel/intel-extension-for-transformers/blob/main/intel_extension_for_transformers/llm/runtime/graph)
of Large Language Models (LLMs) in pure C/C++ with weight-only
quantization kernels.
 
This PR integrates the weight-only quantization feature of
intel-extension-for-transformers into LangChain.

* Unit test: lib/langchain/tests/integration_tests/llm/test_weight_only_quantization.py
* Notebook: docs/docs/integrations/llms/weight_only_quantization.ipynb
* Provider documentation: docs/docs/integrations/providers/weight_only_quantization.mdx
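As a rough illustration of how the integration is used, here is a minimal
sketch based on the notebook above. The class and parameter names
(`WeightOnlyQuantPipeline`, `WeightOnlyQuantConfig`, `weight_dtype`) and the
example model ID are assumptions drawn from this integration and may differ
between versions:

```python
# Minimal usage sketch (assumed API; see the notebook above for the canonical example).
from intel_extension_for_transformers.transformers import WeightOnlyQuantConfig
from langchain_community.llms.weight_only_quantization import WeightOnlyQuantPipeline

# Quantize the model weights to 4-bit NormalFloat ("nf4") while loading.
conf = WeightOnlyQuantConfig(weight_dtype="nf4")

llm = WeightOnlyQuantPipeline.from_model_id(
    model_id="google/flan-t5-large",      # any Hugging Face seq2seq or causal LM (assumed example)
    task="text2text-generation",
    quantization_config=conf,
    pipeline_kwargs={"max_new_tokens": 64},
)

print(llm.invoke("What is weight-only quantization?"))
```

The quantized pipeline behaves like any other LangChain LLM, so it can be
composed with prompt templates and chains in the usual way.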

---------

Signed-off-by: Cheng, Penghui <penghui.cheng@intel.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-03 16:21:34 +00:00
| Name | Last commit message | Date |
| --- | --- | --- |
| _templates | | |
| additional_resources | docs[patch]: Fix or remove broken mdx links (#19777) | 2024-03-29 15:25:08 -07:00 |
| changelog | | |
| contributing | docs: contribute / integrations code examples update (#19319) | 2024-03-20 09:27:53 -04:00 |
| expression_language | docs[minor]: Add chat model tabs to docs pages (#19589) | 2024-03-29 14:23:55 -07:00 |
| get_started | cohere, docs: update imports and installs to langchain_cohere (#19918) | 2024-04-02 09:47:58 -07:00 |
| guides | community[minor]: add Layerup Security integration (#19787) | 2024-04-01 23:49:00 +00:00 |
| integrations | community[minor]: weight only quantization with intel-extension-for-transformers. (#14504) | 2024-04-03 16:21:34 +00:00 |
| langsmith | | |
| modules | docs: mention caveats with CacheBackedEmbeddings.embed_query (#19926) | 2024-04-02 19:19:29 +00:00 |
| use_cases | docs[minor]: Add chat model tabs to docs pages (#19589) | 2024-03-29 14:23:55 -07:00 |
| .gitignore | | |
| packages.mdx | docs: release date fix (#19585) | 2024-03-26 14:51:09 -07:00 |
| people.mdx | | |
| security.md | | |