langchain/libs/community/langchain_community/embeddings/laser.py
Eugene Yurtsev bf5193bb99
community[patch]: Upgrade pydantic extra (#25185)
Upgrade to using a literal for specifying the extra which is the
recommended approach in pydantic 2.

This works correctly also in pydantic v1.

```python
from pydantic.v1 import BaseModel

class Foo(BaseModel, extra="forbid"):
    x: int

Foo(x=5, y=1)
```

And the equivalent class-based `Config` spelling:


```python
from pydantic.v1 import BaseModel

class Foo(BaseModel):
    x: int

    class Config:
      extra = "forbid"

Foo(x=5, y=1)
```
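
As a quick check that both spellings behave identically, here is a sketch assuming pydantic v2 is installed (`LiteralStyle` and `ConfigStyle` are illustrative names, not part of this PR):

```python
from pydantic import BaseModel, ValidationError

# Literal keyword form (the style this PR migrates to).
class LiteralStyle(BaseModel, extra="forbid"):
    x: int

# Legacy class-based Config form (still accepted, deprecated in v2).
class ConfigStyle(BaseModel):
    x: int

    class Config:
        extra = "forbid"

for model in (LiteralStyle, ConfigStyle):
    try:
        model(x=5, y=1)
    except ValidationError:
        print(f"{model.__name__}: extra field 'y' rejected")
```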


## Enum -> literal using a grit pattern:

```
engine marzano(0.1)
language python
or {
    `extra=Extra.allow` => `extra="allow"`,
    `extra=Extra.forbid` => `extra="forbid"`,
    `extra=Extra.ignore` => `extra="ignore"`
}
```
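
The same rewrite can be approximated outside grit with a plain textual substitution. A rough, syntax-unaware sketch (`rewrite_extra` is a hypothetical helper, not part of this PR):

```python
# Textual equivalent of the grit rewrite above: map each
# `extra=Extra.<mode>` usage to its string-literal form.
REWRITES = {
    'extra=Extra.allow': 'extra="allow"',
    'extra=Extra.forbid': 'extra="forbid"',
    'extra=Extra.ignore': 'extra="ignore"',
}

def rewrite_extra(source: str) -> str:
    for old, new in REWRITES.items():
        source = source.replace(old, new)
    return source

print(rewrite_extra('class Foo(BaseModel, extra=Extra.forbid):'))
# class Foo(BaseModel, extra="forbid"):
```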

Sorted the attributes in `Config` and removed the doc-string, in case we
need to go back and forth between pydantic v1 and v2 during the 0.3
release. (This will reduce merge conflicts.)


## Sort attributes in Config:

```
engine marzano(0.1)
language python


function sort($values) js {
    return $values.text.split(',').sort().join("\n");
}


class_definition($name, $body) as $C where {
    $name <: `Config`,
    $body <: block($statements),
    $values = [],
    $statements <: some bubble($values) assignment() as $A where {
        $values += $A
    },
    $body => sort($values),
}

```
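
The effect of this pattern is an alphabetical sort of the simple assignments inside each `Config` class. A minimal textual sketch of the same idea (`sort_config_assignments` is a hypothetical helper, not part of this PR):

```python
def sort_config_assignments(lines: list[str]) -> list[str]:
    """Sort simple `name = value` lines alphabetically, mirroring the
    grit `sort($values)` function above (textual, not syntax-aware)."""
    return sorted(lines)

body = ['extra = "forbid"', 'arbitrary_types_allowed = True']
print(sort_config_assignments(body))
# ['arbitrary_types_allowed = True', 'extra = "forbid"']
```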
2024-08-08 17:20:39 +00:00


```python
from typing import Any, Dict, List, Optional

import numpy as np
from langchain_core.embeddings import Embeddings
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.utils import pre_init

LASER_MULTILINGUAL_MODEL: str = "laser2"


class LaserEmbeddings(BaseModel, Embeddings):
    """LASER Language-Agnostic SEntence Representations.

    LASER is a Python library developed by the Meta AI Research team
    and used for creating multilingual sentence embeddings for over 147 languages
    as of 2/25/2024

    See more documentation at:
    * https://github.com/facebookresearch/LASER/
    * https://github.com/facebookresearch/LASER/tree/main/laser_encoders
    * https://arxiv.org/abs/2205.12654

    To use this class, you must install the `laser_encoders` Python package.

    `pip install laser_encoders`

    Example:
        from laser_encoders import LaserEncoderPipeline
        encoder = LaserEncoderPipeline(lang="eng_Latn")
        embeddings = encoder.encode_sentences(["Hello", "World"])
    """

    lang: Optional[str]
    """The language or language code you'd like to use
    If empty, this implementation will default
    to using a multilingual earlier LASER encoder model (called laser2)
    Find the list of supported languages at
    https://github.com/facebookresearch/flores/blob/main/flores200/README.md#languages-in-flores-200
    """

    _encoder_pipeline: Any  # : :meta private:

    class Config:
        extra = "forbid"

    @pre_init
    def validate_environment(cls, values: Dict) -> Dict:
        """Validate that laser_encoders has been installed."""
        try:
            from laser_encoders import LaserEncoderPipeline

            lang = values.get("lang")
            if lang:
                encoder_pipeline = LaserEncoderPipeline(lang=lang)
            else:
                encoder_pipeline = LaserEncoderPipeline(laser=LASER_MULTILINGUAL_MODEL)
            values["_encoder_pipeline"] = encoder_pipeline
        except ImportError as e:
            raise ImportError(
                "Could not import 'laser_encoders' Python package. "
                "Please install it with `pip install laser_encoders`."
            ) from e
        return values

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        """Generate embeddings for documents using LASER.

        Args:
            texts: The list of texts to embed.

        Returns:
            List of embeddings, one for each text.
        """
        embeddings: np.ndarray
        embeddings = self._encoder_pipeline.encode_sentences(texts)
        return embeddings.tolist()

    def embed_query(self, text: str) -> List[float]:
        """Generate single query text embeddings using LASER.

        Args:
            text: The text to embed.

        Returns:
            Embeddings for the text.
        """
        query_embeddings: np.ndarray
        query_embeddings = self._encoder_pipeline.encode_sentences([text])
        return query_embeddings.tolist()[0]
```