Update ascend.py (#30060)

Add `batch_size` to fix OOM when embedding a large number of texts.
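
For illustration, a minimal usage sketch of the new parameter (this assumes the class's existing `model_path` field; the path and input texts are placeholders, not part of this change):

```python
from langchain_community.embeddings import AscendEmbeddings

# Placeholder model path: point this at a locally available
# Ascend-compatible embedding model.
embedder = AscendEmbeddings(
    model_path="/path/to/embedding/model",
    batch_size=32,  # new: texts are encoded 32 at a time instead of all at once
)

# Previously a single large call could OOM on the device; with this change
# the texts are encoded in chunks of `batch_size` and the results concatenated.
vectors = embedder.embed_documents(["some document text"] * 10_000)
```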

Author: cold-eye
Committed: 2025-03-01 11:10:41 -08:00 (via GitHub)
Commit: 7c175e3fda
Parent: 3b066dc005

```diff
@@ -30,6 +30,7 @@ class AscendEmbeddings(Embeddings, BaseModel):
     document_instruction: str = ""
     use_fp16: bool = True
     pooling_method: Optional[str] = "cls"
+    batch_size: int = 32

     model: Any
     tokenizer: Any
@@ -119,7 +120,18 @@ class AscendEmbeddings(Embeddings, BaseModel):
         )

     def embed_documents(self, texts: List[str]) -> List[List[float]]:
-        return self.encode([self.document_instruction + text for text in texts])
+        try:
+            import numpy as np
+        except ImportError as e:
+            raise ImportError(
+                "Unable to import numpy, please install with `pip install -U numpy`."
+            ) from e
+        embedding_list = []
+        for i in range(0, len(texts), self.batch_size):
+            texts_ = texts[i : i + self.batch_size]
+            emb = self.encode([self.document_instruction + text for text in texts_])
+            embedding_list.append(emb)
+        return np.concatenate(embedding_list)

     def embed_query(self, text: str) -> List[float]:
         return self.encode([self.query_instruction + text])[0]
```