# gemini_chat
Generate text using Google's Gemini AI models through the Gemini API.
## Overview
This step provides access to Google's Gemini family of models including Gemini Pro and Gemini Pro Vision (for multimodal tasks). Gemini offers strong performance across text generation, code, reasoning, and analysis tasks. You can customize system instructions, control generation parameters (temperature, top_p, top_k), set token limits, and configure safety settings. The step tracks usage statistics for monitoring costs. Ideal for content generation, question answering, summarization, translation, and creative applications.
Setup:

1. Create a Google Cloud project at https://console.cloud.google.com/
2. Enable the Generative Language API (Gemini API) in your project
3. Generate an API key in the Credentials section (https://makersuite.google.com/app/apikey)
4. Store your API key securely (e.g., as an environment variable: GEMINI_API_KEY)
API Key: Required. Get your API key from https://makersuite.google.com/app/apikey
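With the key stored in your environment, reference it through `${env:...}` interpolation rather than hard-coding it, as every example below does (note the examples use the lowercase name `gemini_api_key`):

```yaml
# Resolved from the environment at runtime.
api_key: ${env:gemini_api_key}
```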
## Examples
### Basic text generation

Simple prompt completion with Gemini Pro:

```yaml
type: gemini_chat
api_key: ${env:gemini_api_key}
model: gemini-pro
input_from: user_prompt
output_to: ai_response
```
### Customer service assistant

Specialized assistant with system instructions:

```yaml
type: gemini_chat
api_key: ${env:gemini_api_key}
model: gemini-pro
system: You are a helpful customer service assistant for an e-commerce platform. Be professional, empathetic, and solution-oriented.
input_from: customer_inquiry
output_to: service_response
temperature: 0.7
max_output_tokens: 500
```
### Creative content generation

High temperature for creative, diverse outputs:

```yaml
type: gemini_chat
api_key: ${env:gemini_api_key}
model: gemini-pro
system: You are a creative content writer. Generate engaging, original content.
input_from: content_brief
output_to: generated_content
temperature: 0.9
top_p: 0.95
top_k: 40
max_output_tokens: 2048
```
### Lead scoring with template prompt

Use a prompt template with variable interpolation for structured analysis:

```yaml
type: gemini_chat
api_key: ${env:gemini_api_key}
model: gemini-1.5-flash
system: "You are a lead scoring assistant. Score leads from 1-10 and return only valid JSON."
prompt: |
  Score this lead from 1-10 based on:
  - Company size: ${company_data.employees}
  - Industry: ${input.industry}
  - Budget: ${input.budget}
  Return JSON with this exact structure:
  {
    "score": <number from 1-10>,
    "reasoning": "<brief explanation>"
  }
output_to: lead_score
temperature: 0.0
max_output_tokens: 500
```
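The `${...}` paths in the template resolve against the incoming event, so a matching event might look like this (values are purely illustrative):

```yaml
# Hypothetical input event for the template above.
company_data:
  employees: 250
input:
  industry: fintech
  budget: "$50k-$100k"
```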
### Structured JSON with schema validation

Use a JSON schema for guaranteed structure (natively supported by Gemini):

```yaml
type: gemini_chat
api_key: ${env:gemini_api_key}
model: gemini-1.5-flash
system: "You are a lead scoring assistant. Analyze leads accurately."
prompt: |
  Score this lead from 1-10 based on:
  - Company size: ${company.employees}
  - Industry: ${company.industry}
  - Budget: ${company.budget}
response_format:
  type: json_schema
  json_schema:
    name: lead_score
    schema:
      type: object
      properties:
        score:
          type: number
        reasoning:
          type: string
        confidence:
          type: string
          enum: [low, medium, high]
      required: [score, reasoning, confidence]
output_to: lead_analysis
temperature: 0.0
max_output_tokens: 500
```
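With the schema enforced, the structured response lands under `lead_analysis` (depending on the runner, as a parsed object or a JSON string). An illustrative event fragment after a successful run, with hypothetical values:

```yaml
lead_analysis:
  score: 8
  reasoning: "Mid-market company with a committed budget."
  confidence: high
```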
### Precise information extraction

Low temperature for consistent, factual outputs:

```yaml
type: gemini_chat
api_key: ${env:gemini_api_key}
model: gemini-pro
system: Extract key facts and data points from the text. Be precise and factual.
input_from: document
output_to: extracted_facts
temperature: 0.1
max_output_tokens: 1000
include_usage: true
```
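### Dynamic persona with safety settings

Two parameters documented below but not shown above are 'system_key' and 'safety_settings'. A hedged sketch combining them; the `routing.persona` path is hypothetical, and the category and threshold names follow the public Gemini API enums:

```yaml
type: gemini_chat
api_key: ${env:gemini_api_key}
model: gemini-1.5-flash
system: You are a general-purpose assistant.   # fallback when system_key does not resolve
system_key: routing.persona                    # per-event override; hypothetical dot path
input_from: user_prompt
output_to: ai_response
safety_settings:
  - category: HARM_CATEGORY_HARASSMENT
    threshold: BLOCK_ONLY_HIGH
  - category: HARM_CATEGORY_DANGEROUS_CONTENT
    threshold: BLOCK_MEDIUM_AND_ABOVE
```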
## Configuration
| Parameter | Type | Required | Description |
|---|---|---|---|
| api_key | string | Yes | Google API key appended to each request as the 'key' query parameter. |
| model | string | Yes | Gemini model identifier (for example 'gemini-1.5-flash'). |
| input_from | string | No | Dot path for the user message content. When omitted, the entire event is serialized to JSON. |
| input_key | string | No | DEPRECATED: Use 'input_from' instead. Dot path for the user message content. |
| prompt | string | No | Template string for the user message with ${path.to.key} interpolation. When provided, this takes precedence over 'input_from'. |
| system | string | No | Static system instruction applied when 'system_key' does not resolve. |
| system_key | string | No | Dot path whose value overrides the static 'system' instruction when present. |
| output_to | string | No | Event key storing the primary response text. Default: "gemini" |
| output_key | string | No | DEPRECATED: Use 'output_to' instead. Event key for the response. |
| include_usage | boolean | No | When true, usage metadata is stored under '<output_to>_usage'. Default: true |
| temperature | number | No | Sampling temperature (0.0-1.0). Lower values bias toward deterministic output. |
| max_output_tokens | integer | No | Maximum number of tokens Gemini should generate (if supported by the model). |
| response_format | object | No | Response format configuration. Use {'type': 'json_schema', 'json_schema': {'name': '...', 'schema': {...}}} to request structured JSON output. Gemini supports this natively via response_schema in generation_config. |
| top_p | number | No | Nucleus sampling probability cutoff expressed as a 0-1 float. |
| top_k | integer | No | Top-K sampling cutoff controlling the number of candidate tokens considered. |
| safety_settings | array | No | List of safety settings dictionaries forwarded verbatim to the Gemini API. |
| base_url | string | No | Base endpoint URL for the Gemini API; override when routing through a proxy. Default: "https://generativelanguage.googleapis.com" |
| raw_on_error | boolean | No | When true, preserve the raw response text under '<output_to>_raw' if parsing fails. Default: true |
| swallow_on_error | boolean | No | If true, skip injecting error details and return the original event on failures. Default: false |
| extra_headers | object | No | Additional HTTP headers merged into each request alongside the defaults (Content-Type, Accept, User-Agent). |
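For completeness, a hedged sketch of the networking and error-handling parameters ('base_url', 'extra_headers', 'raw_on_error', 'swallow_on_error'); the proxy URL and header are placeholders, not values from the original docs:

```yaml
type: gemini_chat
api_key: ${env:gemini_api_key}
model: gemini-1.5-flash
input_from: user_prompt
output_to: ai_response
base_url: https://gemini-proxy.internal.example   # placeholder proxy endpoint
extra_headers:
  X-Request-Source: etl-pipeline                  # illustrative header
raw_on_error: true        # keep unparseable responses under ai_response_raw
swallow_on_error: false   # surface error details instead of silently passing the event through
```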
## Base Configuration
These configuration options are available on all steps:
| Parameter | Type | Default | Description |
|---|---|---|---|
| name | string | null | Optional name for this step (for documentation and debugging) |
| description | string | null | Optional description of what this step does |
| retries | integer | 0 | Number of retry attempts (0-10) |
| backoff_seconds | number | 0 | Backoff (seconds) applied between retry attempts |
| retry_propagate | boolean | false | If true, raise the last exception after exhausting retries; otherwise swallow it. |