langchain/libs/experimental/tests/unit_tests/test_ollama_functions.py
Joel Akeret acfce30017
Adding compatibility for OllamaFunctions with ImagePromptTemplate (#24499)
- [ ] **PR title**: "experimental: Adding compatibility for
OllamaFunctions with ImagePromptTemplate"

- [ ] **PR message**: 
- **Description:** Removes the outdated
`_convert_messages_to_ollama_messages` method override in the
`OllamaFunctions` class to ensure that ollama multimodal models can be
invoked with an image.
    - **Issue:** #24174

---------

Co-authored-by: Joel Akeret <joel.akeret@ti&m.com>
Co-authored-by: Isaac Francisco <78627776+isahers1@users.noreply.github.com>
Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
2024-07-24 14:57:05 -07:00
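The test below exercises an image prompt whose `{image_url}` placeholder is expected to receive a base64-encoded payload (the template itself supplies the `data:image/jpeg;base64,` prefix). As a minimal, hedged sketch of how such a payload could be produced with the standard library (the helper name `to_data_url` is illustrative, not part of the test):

```python
import base64


def to_data_url(image_bytes: bytes, mime: str = "image/jpeg") -> str:
    # Encode raw image bytes as a base64 data URL. Note: the test's
    # "data:image/jpeg;base64,{image_url}" template already supplies
    # the prefix, so there only the encoded payload would be passed
    # in; this illustrative helper returns the complete URL instead.
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime};base64,{encoded}"


print(to_data_url(b"\xff\xd8\xff"))  # JPEG magic bytes as a tiny stand-in
```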


import json
from typing import Any
from unittest.mock import patch

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel

from langchain_experimental.llms.ollama_functions import OllamaFunctions


class Schema(BaseModel):
    pass


@patch.object(OllamaFunctions, "_create_stream")
def test_convert_image_prompt(
    _create_stream_mock: Any,
) -> None:
    # Mock the streamed Ollama reply with a tool-call payload that
    # structured output resolves against the registered Schema.
    response = {"message": {"content": '{"tool": "Schema", "tool_input": {}}'}}
    _create_stream_mock.return_value = [json.dumps(response)]

    # A human message carrying an image template; invoking it verifies
    # that image prompts are converted without raising.
    prompt = ChatPromptTemplate.from_messages(
        [("human", [{"image_url": "data:image/jpeg;base64,{image_url}"}])]
    )
    lmm = prompt | OllamaFunctions().with_structured_output(schema=Schema)

    schema_instance = lmm.invoke(dict(image_url=""))

    assert schema_instance is not None
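The mocked stream above yields a JSON string whose `message.content` holds a tool-call payload of the form `{"tool": <name>, "tool_input": <args>}`. A rough stdlib-only sketch of that parsing step, with the function name `parse_tool_call` and the returned dict shape assumed for illustration (the real resolution happens inside `with_structured_output`):

```python
import json
from typing import Any, Dict


def parse_tool_call(raw: str) -> Dict[str, Any]:
    # Illustrative only: unwrap the streamed JSON envelope, then parse
    # the inner tool-call payload that structured output matches
    # against the registered schema by name.
    message = json.loads(raw)["message"]
    call = json.loads(message["content"])
    return {"name": call["tool"], "arguments": call["tool_input"]}


raw = json.dumps({"message": {"content": '{"tool": "Schema", "tool_input": {}}'}})
print(parse_tool_call(raw))  # {'name': 'Schema', 'arguments': {}}
```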