ChatGoogleGenerativeAI

Google AI offers a number of different chat models, including the powerful Gemini series. For information on the latest models, their features, context windows, and more, head to the Google AI docs.

This will help you get started with ChatGoogleGenerativeAI chat models. For detailed documentation of all ChatGoogleGenerativeAI features and configurations, head to the API reference.

Overview

Integration details

| Class | Package | Local | Serializable | PY support |
| :--- | :--- | :---: | :---: | :---: |
| ChatGoogleGenerativeAI | @langchain/google-genai | ❌ | ✅ | ✅ |

Model features

See the links in the table headers below for guides on how to use specific features.

| Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Token usage | Logprobs |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ |

Setup

You can access Google's gemini and gemini-vision models, as well as other generative models in LangChain, through the ChatGoogleGenerativeAI class in the @langchain/google-genai integration package.

tip

You can also access Google's gemini family of models via the LangChain VertexAI and VertexAI-web integrations.

Click here to read the docs.

Credentials

Get an API key here: https://ai.google.dev/tutorials/setup

Then set the GOOGLE_API_KEY environment variable:

export GOOGLE_API_KEY="your-api-key"

If you want automated tracing of your model calls, you can also set your LangSmith API key by uncommenting the lines below:

# export LANGCHAIN_TRACING_V2="true"
# export LANGCHAIN_API_KEY="your-api-key"

Installation

The LangChain ChatGoogleGenerativeAI integration lives in the @langchain/google-genai package:

yarn add @langchain/google-genai
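
If you use npm or pnpm instead of yarn, the equivalent commands are:

npm install @langchain/google-genai

pnpm add @langchain/google-genai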

Instantiation

Now we can instantiate our model object and generate chat completions:

import { ChatGoogleGenerativeAI } from "@langchain/google-genai";

const llm = new ChatGoogleGenerativeAI({
  model: "gemini-1.5-pro",
  temperature: 0,
  maxRetries: 2,
  // other params...
});

Invocation

const aiMsg = await llm.invoke([
  [
    "system",
    "You are a helpful assistant that translates English to French. Translate the user sentence.",
  ],
  ["human", "I love programming."],
]);
aiMsg;
AIMessage {
  "content": "J'adore programmer. \n",
  "additional_kwargs": {
    "finishReason": "STOP",
    "index": 0,
    "safetyRatings": [
      {
        "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
        "probability": "NEGLIGIBLE"
      },
      {
        "category": "HARM_CATEGORY_HATE_SPEECH",
        "probability": "NEGLIGIBLE"
      },
      {
        "category": "HARM_CATEGORY_HARASSMENT",
        "probability": "NEGLIGIBLE"
      },
      {
        "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
        "probability": "NEGLIGIBLE"
      }
    ]
  },
  "response_metadata": {
    "finishReason": "STOP",
    "index": 0,
    "safetyRatings": [
      {
        "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
        "probability": "NEGLIGIBLE"
      },
      {
        "category": "HARM_CATEGORY_HATE_SPEECH",
        "probability": "NEGLIGIBLE"
      },
      {
        "category": "HARM_CATEGORY_HARASSMENT",
        "probability": "NEGLIGIBLE"
      },
      {
        "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
        "probability": "NEGLIGIBLE"
      }
    ]
  },
  "tool_calls": [],
  "invalid_tool_calls": [],
  "usage_metadata": {
    "input_tokens": 21,
    "output_tokens": 5,
    "total_tokens": 26
  }
}
console.log(aiMsg.content);
J'adore programmer.
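
Because the model supports token-level streaming (see the features table above), you can also consume the response incrementally with .stream() instead of .invoke(). A minimal sketch using the llm instance from above; the prompt is illustrative:

const stream = await llm.stream("Write a haiku about autumn.");
for await (const chunk of stream) {
  // Each chunk is an AIMessageChunk containing a slice of the reply.
  console.log(chunk.content);
}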

Chaining

We can chain our model with a prompt template like so:

import { ChatPromptTemplate } from "@langchain/core/prompts";

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a helpful assistant that translates {input_language} to {output_language}.",
  ],
  ["human", "{input}"],
]);

const chain = prompt.pipe(llm);
await chain.invoke({
  input_language: "English",
  output_language: "German",
  input: "I love programming.",
});
AIMessage {
  "content": "Ich liebe das Programmieren. \n",
  "additional_kwargs": {
    "finishReason": "STOP",
    "index": 0,
    "safetyRatings": [
      {
        "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
        "probability": "NEGLIGIBLE"
      },
      {
        "category": "HARM_CATEGORY_HATE_SPEECH",
        "probability": "NEGLIGIBLE"
      },
      {
        "category": "HARM_CATEGORY_HARASSMENT",
        "probability": "NEGLIGIBLE"
      },
      {
        "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
        "probability": "NEGLIGIBLE"
      }
    ]
  },
  "response_metadata": {
    "finishReason": "STOP",
    "index": 0,
    "safetyRatings": [
      {
        "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
        "probability": "NEGLIGIBLE"
      },
      {
        "category": "HARM_CATEGORY_HATE_SPEECH",
        "probability": "NEGLIGIBLE"
      },
      {
        "category": "HARM_CATEGORY_HARASSMENT",
        "probability": "NEGLIGIBLE"
      },
      {
        "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
        "probability": "NEGLIGIBLE"
      }
    ]
  },
  "tool_calls": [],
  "invalid_tool_calls": [],
  "usage_metadata": {
    "input_tokens": 16,
    "output_tokens": 7,
    "total_tokens": 23
  }
}
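
If you want the chain to return a plain string rather than an AIMessage, you can also pipe the model into an output parser. A minimal sketch using StringOutputParser from @langchain/core:

import { StringOutputParser } from "@langchain/core/output_parsers";

const stringChain = prompt.pipe(llm).pipe(new StringOutputParser());

// The result is now the reply text as a plain string.
await stringChain.invoke({
  input_language: "English",
  output_language: "German",
  input: "I love programming.",
});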

Safety Settings

Gemini models have default safety settings that can be overridden. If you are receiving lots of "Safety Warnings" from your models, you can try tweaking the safetySettings attribute of the model. For example, to change the blocking threshold for a given harm category, import the enums from the @google/generative-ai package, then construct your LLM as follows:

import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
import { HarmBlockThreshold, HarmCategory } from "@google/generative-ai";

const llmWithSafetySettings = new ChatGoogleGenerativeAI({
  model: "gemini-1.5-pro",
  temperature: 0,
  safetySettings: [
    {
      category: HarmCategory.HARM_CATEGORY_HARASSMENT,
      threshold: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
    },
  ],
  // other params...
});

Tool calling

Tool calling with Google AI is mostly the same as tool calling with other models, but has a few restrictions on schema.

The Google AI API does not allow tool schemas to contain an object with unknown properties. For example, the following Zod schemas will throw an error:

const invalidSchema = z.object({ properties: z.record(z.unknown()) });

and

const invalidSchema2 = z.record(z.unknown());

Instead, you should explicitly define the properties of the object field. Here's an example:

import { tool } from "@langchain/core/tools";
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
import { z } from "zod";

// Define your tool
const fakeBrowserTool = tool(
  (_) => {
    return "The search result is xyz...";
  },
  {
    name: "browser_tool",
    description:
      "Useful for when you need to find something on the web or summarize a webpage.",
    schema: z.object({
      url: z.string().describe("The URL of the webpage to search."),
      query: z.string().optional().describe("An optional search query to use."),
    }),
  }
);

const llmWithTool = new ChatGoogleGenerativeAI({
  model: "gemini-pro",
}).bindTools([fakeBrowserTool]); // Bind your tools to the model

const toolRes = await llmWithTool.invoke([
  [
    "human",
    "Search the web and tell me what the weather will be like tonight in new york. use a popular weather website",
  ],
]);

console.log(toolRes.tool_calls);
[
  {
    name: 'browser_tool',
    args: {
      url: 'https://www.weather.com',
      query: 'weather tonight in new york'
    },
    type: 'tool_call'
  }
]
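
Gemini also supports structured output (see the features table above). As a minimal, illustrative sketch, you can bind a Zod schema with .withStructuredOutput() and receive a parsed object back; the schema and prompt below are assumptions for this example, not part of the original walkthrough:

import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
import { z } from "zod";

// Illustrative schema; the field names are assumptions for this sketch.
const forecastSchema = z.object({
  city: z.string().describe("The city the forecast is for."),
  forecast: z.string().describe("A one-sentence weather forecast."),
});

const structuredLlm = new ChatGoogleGenerativeAI({
  model: "gemini-1.5-pro",
}).withStructuredOutput(forecastSchema);

// The result is a plain object matching forecastSchema.
const structuredRes = await structuredLlm.invoke(
  "What will the weather be like tonight in New York?"
);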

Gemini Prompting FAQs

As of the time this doc was written (2023/12/12), Gemini has some restrictions on the types and structure of prompts it accepts. Specifically:

  1. When providing multimodal (image) inputs, you are restricted to at most one message of "human" (user) type. You cannot pass multiple human messages (though the single human message may have multiple content entries); see the sketch after this list.
  2. System messages are not natively supported, and will be merged with the first human message if present.
  3. For regular chat conversations, messages must follow the human/ai/human/ai alternating pattern. You may not provide two AI or human messages in sequence.
  4. Messages may be blocked if they violate the safety checks of the LLM. In this case, the model will return an empty response.
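
As a companion to item 1 above, here is a minimal sketch of a multimodal (image) call using the llm instance from earlier. The file path is a placeholder for this example; any base64-encoded image works:

import { HumanMessage } from "@langchain/core/messages";
import * as fs from "node:fs";

// "./example.png" is a placeholder path for this sketch.
const imageData = fs.readFileSync("./example.png").toString("base64");

const visionRes = await llm.invoke([
  new HumanMessage({
    content: [
      { type: "text", text: "What is shown in this image?" },
      {
        type: "image_url",
        image_url: `data:image/png;base64,${imageData}`,
      },
    ],
  }),
]);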

API reference

For detailed documentation of all ChatGoogleGenerativeAI features and configurations head to the API reference: https://api.js.langchain.com/classes/langchain_google_genai.ChatGoogleGenerativeAI.html

