Commit Graph

6 Commits

Author SHA1 Message Date
Eugene Yurtsev
844955d6e1 community[patch]: assign missed default (#26326)
Assigns missing defaults in various classes. Most clients were already being
assigned during the `model_validator(mode="before")` step, so this
change should amount to a no-op in those cases.

---

This PR was autogenerated using gritql

```shell

grit apply 'class_definition(name=$C, $body, superclasses=$S) where {    
    $C <: ! "Config", // Does not work in this scope, but works after class_definition
    $body <: block($statements),
    $statements <: some bubble assignment(left=$x, right=$y, type=$t) as $A where {
        or {
            $y <: `Field($z)`,
            $x <: "model_config"
        }
    },
    // And has either Any or Optional fields without a default
    $statements <: some bubble assignment(left=$x, right=$y, type=$t) as $A where {
        $t <: or {
            r"Optional.*",
            r"Any",
            r"Union[None, .*]",
            r"Union[.*, None, .*]",
            r"Union[.*, None]",
        },
        $y <: ., // Match empty node        
        $t => `$t = None`,
    },    
}
' --language python .

```
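
For illustration, a minimal sketch of the kind of change this pattern produces; the class and field names below are hypothetical, not taken from the actual diff:

```python
from typing import Any, Optional

from pydantic import BaseModel, ConfigDict


class FooClient(BaseModel):
    """Hypothetical wrapper class used only to illustrate the rewrite."""

    model_config = ConfigDict(extra="forbid")

    # before: `client: Any` (no default)    ->  after: `client: Any = None`
    client: Any = None
    # before: `api_key: Optional[str]`      ->  after: `api_key: Optional[str] = None`
    api_key: Optional[str] = None
```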
2024-09-11 11:13:11 -04:00
Harrison Chase
8516a03a02 langchain-community[major]: Upgrade community to pydantic 2 (#26011)
This PR upgrades langchain-community to pydantic 2.


* Most of this PR was auto-generated using code mods with gritql
(https://github.com/eyurtsev/migrate-pydantic/tree/main)
* Subsequently, some code was fixed manually to accommodate
differences between pydantic 1 and 2 (a sketch of a typical change
follows below).
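
As a rough illustration of what the code mods do, here is a hypothetical `ChatFoo` model (not an actual community class) before and after the migration:

```python
# pydantic v1 style (before the code mods), shown as a comment:
#
#     from pydantic import BaseModel, root_validator
#
#     class ChatFoo(BaseModel):
#         temperature: float
#
#         class Config:
#             extra = "forbid"
#
#         @root_validator(pre=True)
#         def set_defaults(cls, values):
#             values.setdefault("temperature", 0.7)
#             return values
#
# pydantic v2 style (after):
from pydantic import BaseModel, ConfigDict, model_validator


class ChatFoo(BaseModel):
    """Hypothetical model, not an actual langchain-community class."""

    model_config = ConfigDict(extra="forbid")

    temperature: float

    @model_validator(mode="before")
    @classmethod
    def set_defaults(cls, values: dict) -> dict:
        values.setdefault("temperature", 0.7)
        return values


ChatFoo()  # temperature defaults to 0.7 via the validator
```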

Breaking Changes:

- Use `TEXTEMBED_API_KEY` and `TEXTEMBED_API_URL` as the environment
variables for the text embed integrations:
cbea780492

Other changes:

- Added pydantic_settings as a required dependency for community. This
may be removed if we have enough time to convert the dependency into an
optional one.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-09-05 14:07:10 -04:00
Eugene Yurtsev
bf5193bb99 community[patch]: Upgrade pydantic extra (#25185)
Upgrade to using a literal for specifying `extra`, which is the
recommended approach in pydantic 2.

This also works correctly in pydantic v1:

```python
from pydantic.v1 import BaseModel

class Foo(BaseModel, extra="forbid"):
    x: int

Foo(x=5, y=1)  # raises a validation error: extra field "y" is not permitted
```

And the equivalent form with a nested `Config` class:


```python
from pydantic.v1 import BaseModel

class Foo(BaseModel):
    x: int

    class Config:
      extra = "forbid"

Foo(x=5, y=1)  # raises a validation error: extra field "y" is not permitted
```


## Enum -> literal using grit pattern:

```
engine marzano(0.1)
language python
or {
    `extra=Extra.allow` => `extra="allow"`,
    `extra=Extra.forbid` => `extra="forbid"`,
    `extra=Extra.ignore` => `extra="ignore"`
}
```
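
To illustrate the effect of the pattern on a hypothetical model (the commented-out lines show the pre-rewrite form):

```python
from pydantic.v1 import BaseModel

# before the rewrite:
#
#     from pydantic.v1 import Extra
#
#     class Foo(BaseModel, extra=Extra.allow):
#         x: int
#
# after the rewrite:
class Foo(BaseModel, extra="allow"):
    x: int


Foo(x=5, y=1)  # the extra field "y" is kept because extra is "allow"
```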

Re-sorted the attributes in `Config` and removed doc-strings in case we
need to go back and forth between pydantic v1 and v2 during the 0.3
release. (This will reduce merge conflicts.)


## Sort attributes in Config:

```
engine marzano(0.1)
language python


function sort($values) js {
    return $values.text.split(',').sort().join("\n");
}


class_definition($name, $body) as $C where {
    $name <: `Config`,
    $body <: block($statements),
    $values = [],
    $statements <: some bubble($values) assignment() as $A where {
        $values += $A
    },
    $body => sort($values),
}

```
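
A sketch of what the sort does to a typical (hypothetical) `Config` block:

```python
# before:
#
#     class Config:
#         extra = "forbid"
#         arbitrary_types_allowed = True
#
# after (assignments sorted alphabetically):
class Config:
    arbitrary_types_allowed = True
    extra = "forbid"
```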
2024-08-08 17:20:39 +00:00
Leonid Ganeline
dc7c06bc07 community[minor]: import fix (#20995)
Issue: When a third-party package is not installed and we need the user
to `pip install <package>`, an `ImportError` should be raised. But
sometimes a `ValueError` or `ModuleNotFoundError` is raised instead,
which is inconsistent.
Change: replaced the `ValueError` or `ModuleNotFoundError` with
`ImportError` wherever we raise an error with the `pip install <package>`
message.
Note: Ideally, we would replace every `try: import ... except ...: raise ...`
block with a helper function like `import_aim`, or just use the existing
[langchain_core.utils.utils.guard_import](https://api.python.langchain.com/en/latest/utils/langchain_core.utils.utils.guard_import.html#langchain_core.utils.utils.guard_import).
But that would be a much bigger refactoring. @baskaryan Please advise on
this.
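
For reference, a minimal sketch of the convention this change standardizes on (`some_optional_package` is a placeholder, not a real dependency):

```python
from langchain_core.utils.utils import guard_import


def _import_some_package():
    """Hand-rolled version of the pattern the PR standardizes on."""
    try:
        import some_optional_package  # placeholder for an optional dependency
    except ImportError as e:
        raise ImportError(
            "Could not import the some_optional_package python package. "
            "Please install it with `pip install some-optional-package`."
        ) from e
    return some_optional_package


# Equivalent behavior using the existing helper linked above: guard_import
# raises an ImportError with a `pip install` hint if the module is missing.
aim = guard_import("aim")
```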
2024-04-29 10:32:50 -04:00
ccurme
481d3855dc patch: remove usage of llm, chat model __call__ (#20788)
- `llm(prompt)` -> `llm.invoke(prompt)`
- `llm(prompt=prompt)` -> `llm.invoke(prompt)` (same with `messages=`)
- `llm(prompt, callbacks=callbacks)` -> `llm.invoke(prompt,
config={"callbacks": callbacks})`
- `llm(prompt, **kwargs)` -> `llm.invoke(prompt, **kwargs)`
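
A small runnable sketch of the old -> new call style, using the fake LLM from langchain_community purely for illustration:

```python
from langchain_community.llms.fake import FakeListLLM
from langchain_core.callbacks import StdOutCallbackHandler

llm = FakeListLLM(responses=["Paris"])
callbacks = [StdOutCallbackHandler()]

# previously (deprecated __call__):
#     llm("What is the capital of France?", callbacks=callbacks)
answer = llm.invoke(
    "What is the capital of France?",
    config={"callbacks": callbacks},
)
print(answer)  # -> "Paris"
```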
2024-04-24 19:39:23 -04:00
Cheng, Penghui
cc407e8a1b community[minor]: weight only quantization with intel-extension-for-transformers. (#14504)
Support weight-only quantization with intel-extension-for-transformers.
[Intel® Extension for
Transformers](https://github.com/intel/intel-extension-for-transformers)
is a toolkit to accelerate Transformer-based models on Intel platforms,
particularly effective on 4th Gen Intel Xeon Scalable processors
([Sapphire
Rapids](https://www.intel.com/content/www/us/en/products/docs/processors/xeon-accelerated/4th-gen-xeon-scalable-processors.html)).
The toolkit provides the following key features:

* Seamless user experience of model compression on Transformer-based
models by extending [Hugging Face
transformers](https://github.com/huggingface/transformers) APIs and
leveraging [Intel® Neural
Compressor](https://github.com/intel/neural-compressor).
* Advanced software optimizations and a unique compression-aware runtime.
* Optimized Transformer-based model packages.
* [NeuralChat](https://github.com/intel/intel-extension-for-transformers/blob/main/intel_extension_for_transformers/neural_chat),
a customizable chatbot framework to create your own chatbot within
minutes by leveraging a rich set of plugins and SOTA optimizations.
* [Inference](https://github.com/intel/intel-extension-for-transformers/blob/main/intel_extension_for_transformers/llm/runtime/graph)
of Large Language Models (LLMs) in pure C/C++ with weight-only
quantization kernels.

This PR integrates the weight-only quantization feature with
intel-extension-for-transformers.

The unit test is in
lib/langchain/tests/integration_tests/llm/test_weight_only_quantization.py.
The notebook is in
docs/docs/integrations/llms/weight_only_quantization.ipynb.
The documentation page is in
docs/docs/integrations/providers/weight_only_quantization.mdx.
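
For orientation, a hedged usage sketch of the new integration. The class name matches the notebook path above, but the constructor parameters shown here are assumptions rather than confirmed API; check the linked notebook for the actual usage:

```python
# Assumes intel-extension-for-transformers and transformers are installed.
from langchain_community.llms.weight_only_quantization import WeightOnlyQuantPipeline

# Load a model with weight-only quantization applied at load time. The
# `from_model_id` parameters below mirror the HuggingFacePipeline-style
# constructor and are illustrative, not a confirmed signature.
llm = WeightOnlyQuantPipeline.from_model_id(
    model_id="google/flan-t5-large",
    task="text2text-generation",
    load_in_4bit=True,
    pipeline_kwargs={"max_new_tokens": 32},
)

print(llm.invoke("What is weight-only quantization?"))
```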

---------

Signed-off-by: Cheng, Penghui <penghui.cheng@intel.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-03 16:21:34 +00:00