The async embed function does not properly handle HTTP errors. For instance, with large batches Mistral AI returns a 400 error with the message `Too many inputs in request, split into more batches.`, which leads to a `KeyError` in `response.json()["data"]` (l. 288).

This PR fixes the issue by:
- calling `response.raise_for_status()` before returning
- adding a retry, similar to what is done in the synchronous counterpart `embed_documents`

I also added an integration test, but I'm happy to move it to the unit tests if that's more appropriate.
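For reference, here is a minimal sketch of the pattern the fix follows. It is not the actual `MistralAIEmbeddings` code: it assumes an `httpx.AsyncClient` and a tenacity-based retry on timeouts (my reading of how the synchronous path behaves); the endpoint path, retry parameters, and helper name are illustrative.

```python
# Illustrative sketch only: the real MistralAIEmbeddings internals may differ.
# It shows the two changes described above: raise_for_status() before parsing,
# and a retry wrapper mirroring the synchronous path.
from typing import List

import httpx
from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_exponential


@retry(
    retry=retry_if_exception_type(httpx.TimeoutException),  # retry transient timeouts only
    wait=wait_exponential(multiplier=1, max=10),
    stop=stop_after_attempt(5),
)
async def _aembed_batch(
    client: httpx.AsyncClient, batch: List[str]
) -> List[List[float]]:
    """POST one batch of texts and return their embedding vectors."""
    response = await client.post(
        "/v1/embeddings",  # hypothetical path, assuming the client's base_url points at the Mistral API
        json={"model": "mistral-embed", "input": batch},
    )
    # Surface 4xx/5xx responses (e.g. the 400 "Too many inputs in request")
    # instead of failing later with a KeyError on the missing "data" field.
    response.raise_for_status()
    return [item["embedding"] for item in response.json()["data"]]
```

Note that the retry is limited to transient failures: a 400 such as "Too many inputs in request" would not succeed on retry, so it should surface immediately via `raise_for_status()` rather than being retried.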