FasterWhisperParser fails on a machine without an NVIDIA GPU: "Requested float16 compute type, but the target device or backend do not support efficient float16 computation." The problem arises because WhisperModel is instantiated with compute_type="float16", which works only on NVIDIA GPUs. According to the [CTranslate2 docs](https://opennmt.net/CTranslate2/quantization.html#bit-floating-points-float16), float16 is supported only on NVIDIA GPUs, so removing the compute_type parameter fixes the problem on CPUs. Per the [CTranslate2 docs](https://opennmt.net/CTranslate2/quantization.html#quantize-on-model-loading), setting compute_type to "default" (the behavior when the parameter is omitted) keeps the model's original compute type or performs an implicit conversion for the target device (GPU or CPU). I therefore suggest removing compute_type="float16". @hulitaitai, you are the original author of FasterWhisperParser - is there a reason for setting the parameter to float16? Thanks for reviewing the PR! Co-authored-by: qonnop <qonnop@users.noreply.github.com>
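A minimal sketch of the idea behind the fix (the helper name and device check below are my own illustration, not code from the PR): rather than hard-coding "float16", the compute type could be selected per device, falling back to CTranslate2's "default", which the docs say adapts to the target device.

```python
def pick_compute_type(device: str) -> str:
    """Return a CTranslate2 compute_type usable on the given device.

    float16 is efficient only on NVIDIA GPUs; "default" keeps the model's
    original compute type or converts implicitly for the device.
    """
    return "float16" if device == "cuda" else "default"


# The parser could then build the model along these lines
# (WhisperModel is from the faster-whisper package):
#   model = WhisperModel(model_size, device=device,
#                        compute_type=pick_compute_type(device))
```

Simply dropping the parameter, as the PR proposes, is equivalent to always passing "default" and is the smaller change.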
🦜️🧑‍🤝‍🧑 LangChain Community
Quick Install
pip install langchain-community
What is it?
LangChain Community contains third-party integrations that implement the base interfaces defined in LangChain Core, making them ready-to-use in any LangChain application.
For full documentation see the API reference.
📕 Releases & Versioning
langchain-community is currently on version 0.0.x
All changes will be accompanied by a patch version increase.
💁 Contributing
As an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.
For detailed information on how to contribute, see the Contributing Guide.