Wfh/google docs update (#14676)

- Add gemini references
- Fix the notebook (ultra isn't generally available; also gemini will
randomly filter out responses, so added a fallback)

---------

Co-authored-by: Leonid Kuligin <lkuligin@yandex.ru>
William FH 2023-12-13 13:26:53 -08:00 committed by GitHub
parent 73382a579f
commit 6c031e0ebf
2 changed files with 783 additions and 34 deletions



@@ -2,42 +2,57 @@
All functionality related to [Google Cloud Platform](https://cloud.google.com/) and other `Google` products.
## LLMs
### Vertex AI
Access `PaLM` LLMs like `text-bison` and `code-bison` via `Google Vertex AI`.
We need to install the `google-cloud-aiplatform` Python package.
```bash
pip install google-cloud-aiplatform
```
See a [usage example](/docs/integrations/llms/google_vertex_ai_palm).
```python
from langchain.llms import VertexAI
```
### Model Garden
Access PaLM and hundreds of OSS models via `Vertex AI Model Garden`.
We need to install the `google-cloud-aiplatform` Python package.
```bash
pip install google-cloud-aiplatform
```
See a [usage example](/docs/integrations/llms/google_vertex_ai_palm#vertex-model-garden).
```python
from langchain.llms import VertexAIModelGarden
```
## Chat models
### ChatGoogleGenerativeAI
Access `Gemini` models such as `gemini-pro` and `gemini-pro-vision` through the `ChatGoogleGenerativeAI` class.
```bash
pip install -U langchain-google-genai
```
Configure your API key.
```bash
export GOOGLE_API_KEY=your-api-key
```
```python
from langchain_google_genai import ChatGoogleGenerativeAI
llm = ChatGoogleGenerativeAI(model="gemini-pro")
llm.invoke("Sing a ballad of LangChain.")
```
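As the commit message notes, Gemini can filter out responses (e.g. for safety reasons), so a call may raise or come back empty. A minimal, library-agnostic fallback wrapper can guard against that; the `invoke_with_fallback` helper below is an illustrative sketch, not part of `langchain-google-genai`:

```python
def invoke_with_fallback(invoke, prompt, fallback="(response was filtered)"):
    """Call invoke(prompt); if the call raises or returns an empty
    result (e.g. because the response was filtered), return a
    fallback string instead."""
    try:
        result = invoke(prompt)
    except Exception:
        return fallback
    return result if result else fallback

# With the chat model above, usage would look like:
#   invoke_with_fallback(lambda p: llm.invoke(p).content,
#                        "Sing a ballad of LangChain.")
```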
The Gemini vision model supports image inputs when they are provided in a single chat message. Example:
```python
from langchain_core.messages import HumanMessage
from langchain_google_genai import ChatGoogleGenerativeAI
llm = ChatGoogleGenerativeAI(model="gemini-pro-vision")
# example
message = HumanMessage(
content=[
{
"type": "text",
"text": "What's in this image?",
}, # You can optionally provide text parts
{"type": "image_url", "image_url": "https://picsum.photos/seed/picsum/200/300"},
]
)
llm.invoke([message])
```
The value of `image_url` can be any of the following:
- A public image URL
- A gcs file (e.g., "gcs://path/to/file.png")
- A local file path
- A base64 encoded image (e.g., data:image/png;base64,abcd124)
- A PIL image
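For the base64 option, the data URL can be built with the standard library. A small sketch, where the `to_data_url` helper and the default MIME type are illustrative rather than part of the API:

```python
import base64

def to_data_url(raw: bytes, mime: str = "image/png") -> str:
    """Encode raw image bytes as a data URL usable as an image_url value."""
    return f"data:{mime};base64," + base64.b64encode(raw).decode("ascii")

# For an image file on disk:
#   with open("photo.png", "rb") as f:
#       url = to_data_url(f.read())
```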
### Vertex AI
Access PaLM chat models like `chat-bison` and `codechat-bison` via Google Cloud.
@@ -72,6 +87,41 @@ See a [usage example](/docs/integrations/document_loaders/google_bigquery).
from langchain.document_loaders import BigQueryLoader
```
## LLMs
### Vertex AI
Access `Gemini` and `PaLM` LLMs (like `text-bison` and `code-bison`) via `Google Vertex AI`.
We need to install the `google-cloud-aiplatform` Python package.
```bash
pip install google-cloud-aiplatform
```
See a [usage example](/docs/integrations/llms/google_vertex_ai_palm).
```python
from langchain.llms import VertexAI
```
### Model Garden
Access PaLM and hundreds of OSS models via `Vertex AI Model Garden`.
We need to install the `google-cloud-aiplatform` Python package.
```bash
pip install google-cloud-aiplatform
```
See a [usage example](/docs/integrations/llms/google_vertex_ai_palm#vertex-model-garden).
```python
from langchain.llms import VertexAIModelGarden
```
### Google Cloud Storage
>[Google Cloud Storage](https://en.wikipedia.org/wiki/Google_Cloud_Storage) is a managed service for storing unstructured data.