# anthropic_chat
Generate text completions using Anthropic's Claude AI models via the Messages API.
## Overview
This step integrates with Anthropic's Claude family of models (including Claude 3.5 Sonnet, Claude 3 Opus, and Claude 3 Haiku) for natural language generation, analysis, and transformation. You can provide static or dynamic system prompts to guide the AI's behavior, control generation parameters like temperature and token limits, and access usage statistics for monitoring costs. Input can come from a specific field or the entire event. The AI's response is injected into the event for downstream processing. Ideal for content generation, summarization, analysis, question answering, and creative tasks.
Setup:

1. Create an Anthropic account at https://console.anthropic.com/
2. Generate an API key from the API Keys section
3. Store your API key securely (e.g., as an environment variable: `ANTHROPIC_API_KEY`)
4. Choose a Claude model from the available options (claude-3-5-sonnet-20241022 recommended for most use cases)
API Key: Required. Get your API key from https://console.anthropic.com/settings/keys
## Examples
### Simple text generation

Get a completion from Claude with minimal configuration:

```yaml
type: anthropic_chat
api_key: ${env:anthropic_api_key}
model: claude-3-5-sonnet-20241022
input_from: user_prompt
output_to: ai_response
max_tokens: 1024
```
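Under the hood, the step issues a request to Anthropic's Messages API with the headers described in the Configuration table below. A minimal Python sketch of the request it builds (`build_request` is an illustrative name, not part of the step; no network call is made here):

```python
import json

def build_request(api_key, model, user_text, max_tokens=1024,
                  anthropic_version="2023-06-01"):
    """Assemble headers and body roughly as the step does for the Messages API."""
    headers = {
        "x-api-key": api_key,                    # required auth header
        "anthropic-version": anthropic_version,  # API contract version
        "Content-Type": "application/json",
    }
    body = {
        "model": model,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": user_text}],
    }
    return headers, json.dumps(body)

headers, body = build_request("sk-test", "claude-3-5-sonnet-20241022", "Hello")
```

The exact payload the step sends may include more fields (temperature, system prompt, and so on); this only shows the required core.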
### Customer support with system prompt

Use Claude as a customer support assistant with a specific personality:

```yaml
type: anthropic_chat
api_key: ${env:anthropic_api_key}
model: claude-3-5-sonnet-20241022
system: You are a helpful customer support assistant for a SaaS company. Be friendly, professional, and concise. Always offer to escalate complex issues.
input_from: customer_message
output_to: support_response
temperature: 0.7
max_tokens: 500
```
### Dynamic system prompts from data

Customize AI behavior per event using dynamic system prompts:

```yaml
type: anthropic_chat
api_key: ${env:anthropic_api_key}
model: claude-3-5-sonnet-20241022
system_key: conversation.system_instruction
input_from: conversation.user_message
output_to: conversation.ai_reply
temperature: 0.8
max_tokens: 2048
include_usage: true
```
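Based on the parameter descriptions, `system_key` is a dot path resolved against each event, and the static `system` value is the fallback when that path yields nothing. A sketch of that resolution logic (function names are illustrative):

```python
def resolve_path(event, path):
    """Walk a dot path like 'conversation.system_instruction' through nested dicts."""
    current = event
    for part in path.split("."):
        if not isinstance(current, dict) or part not in current:
            return None
        current = current[part]
    return current

def choose_system_prompt(event, system=None, system_key=None):
    """The per-event value at system_key wins; otherwise fall back to the static system."""
    if system_key:
        value = resolve_path(event, system_key)
        if value is not None:
            return value
    return system

event = {"conversation": {"system_instruction": "Answer in French."}}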
### Lead scoring with template prompt

Use a prompt template with variable interpolation for structured analysis:

```yaml
type: anthropic_chat
api_key: ${env:anthropic_api_key}
model: claude-3-5-sonnet-20241022
system: "You are a lead scoring assistant. Score leads from 1-10 and return only valid JSON."
prompt: |
  Score this lead from 1-10 based on:
  - Company size: ${company_data.employees}
  - Industry: ${input.industry}
  - Budget: ${input.budget}
  Return JSON with this exact structure:
  {
    "score": <number from 1-10>,
    "reasoning": "<brief explanation>"
  }
output_to: lead_score
temperature: 0.0
max_tokens: 500
```
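The `${path.to.key}` placeholders in the prompt are filled from the event before the request is sent. A minimal sketch of that interpolation, assuming a dict-like event (`render_prompt` is illustrative; it does not cover the separate `${env:...}` syntax used for the API key):

```python
import re

def render_prompt(template, event):
    """Replace each ${path.to.key} with the value found at that dot path in the event."""
    def lookup(match):
        current = event
        for part in match.group(1).split("."):
            current = current[part]
        return str(current)
    return re.sub(r"\$\{([^}]+)\}", lookup, template)

event = {"company_data": {"employees": 250},
         "input": {"industry": "fintech", "budget": "50k"}}
rendered = render_prompt(
    "Company size: ${company_data.employees}, Industry: ${input.industry}", event)
```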
### Structured JSON with schema validation

Use a JSON schema to request structured output (the schema is injected into the system prompt):

```yaml
type: anthropic_chat
api_key: ${env:anthropic_api_key}
model: claude-3-5-sonnet-20241022
system: "You are a lead scoring assistant. Analyze leads accurately."
prompt: |
  Score this lead from 1-10 based on:
  - Company size: ${company.employees}
  - Industry: ${company.industry}
  - Budget: ${company.budget}
response_format:
  type: json_schema
  json_schema:
    name: lead_score
    schema:
      type: object
      properties:
        score:
          type: number
          minimum: 1
          maximum: 10
        reasoning:
          type: string
        confidence:
          type: string
          enum: [low, medium, high]
      required: [score, reasoning, confidence]
output_to: lead_analysis
temperature: 0.0
max_tokens: 500
```
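Since Anthropic has no native JSON-schema mode, the step injects the schema into the system prompt and instructs the model to follow it. The exact wording of the injected instruction is implementation-defined; this sketch only shows the general shape (`inject_schema` is an illustrative name):

```python
import json

def inject_schema(system, schema):
    """Sketch: append the JSON schema and an instruction to the system prompt."""
    return ((system or "")
            + "\n\nRespond ONLY with a JSON object matching this schema:\n"
            + json.dumps(schema))

schema = {"type": "object",
          "properties": {"score": {"type": "number"}},
          "required": ["score"]}
augmented = inject_schema("You are a lead scoring assistant.", schema)

reply = '{"score": 8}'      # hypothetical model reply
parsed = json.loads(reply)  # the step then parses the text block as JSON
```

Because the schema is only a prompt instruction, a malformed reply is still possible; see `raw_on_error` below for how parse failures are handled.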
### Precise responses with low temperature

Use low temperature for consistent, deterministic outputs:

```yaml
type: anthropic_chat
api_key: ${env:anthropic_api_key}
model: claude-3-5-sonnet-20241022
system: Extract key information and respond in JSON format.
input_from: document.text
output_to: extracted_data
temperature: 0.0
max_tokens: 1500
```
### Long-form content with Claude Opus

Use Claude 3 Opus for complex reasoning and longer outputs:

```yaml
type: anthropic_chat
api_key: ${env:anthropic_api_key}
model: claude-3-opus-20240229
system: You are an expert technical writer. Create detailed, well-structured documentation.
input_from: requirements
output_to: documentation
temperature: 0.6
max_tokens: 4096
include_usage: true
```
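With `include_usage` enabled, the response text and the usage statistics land in separate event keys. A sketch of that behavior, assuming a dict-like event (`apply_response` and the usage field names are illustrative):

```python
def apply_response(event, output_to, text, usage=None, include_usage=True):
    """Sketch: write the response text to output_to and, when include_usage is
    true, the usage statistics under '<output_to>_usage'."""
    event[output_to] = text
    if include_usage and usage is not None:
        event[output_to + "_usage"] = usage
    return event

event = apply_response({}, "documentation", "...generated text...",
                       usage={"input_tokens": 120, "output_tokens": 900})
```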
## Configuration
| Parameter | Type | Required | Description |
|---|---|---|---|
| api_key | string | Yes | Anthropic API key placed in the `x-api-key` header for every request. |
| model | string | Yes | Claude model identifier. |
| input_from | string | No | Dot path selecting event content for the user message. When omitted, the entire event is JSON-serialized and used as the prompt. |
| input_key | string | No | DEPRECATED: Use `input_from` instead. Dot path selecting event content for the user message. |
| prompt | string | No | Template string for the user message with `${path.to.key}` interpolation. When provided, this takes precedence over `input_from`. |
| system | string | No | Static system prompt string applied when `system_key` does not resolve to a value. |
| system_key | string | No | Dot path in the event whose value overrides `system` when present. |
| output_to | string | No | Event key that receives the first text block from the Claude response. Default: `"anthropic"` |
| output_key | string | No | DEPRECATED: Use `output_to` instead. Event key for the response. |
| include_usage | boolean | No | When true, usage statistics are saved under `<output_to>_usage`. Default: `true` |
| max_tokens | integer | No | Maximum number of tokens Anthropic should generate. Default: `1024` |
| response_format | object | No | Response format configuration. Use `{'type': 'json_schema', 'json_schema': {'name': '...', 'schema': {...}}}` to request structured JSON output. Note: Anthropic doesn't natively support JSON schema, so the schema is injected into the system prompt and the model is instructed to follow it. |
| temperature | number | No | Sampling temperature (0.0-1.0). Lower values produce more deterministic output. |
| top_p | number | No | Nucleus sampling probability cutoff (0-1). Lower values limit the candidate token pool. |
| top_k | integer | No | Top-K sampling cutoff defining how many candidate tokens are considered at each step. |
| stop_sequences | array | No | List of strings that immediately stop generation when encountered. |
| base_url | string | No | Base URL for the Anthropic API. Override when routing through a proxy or gateway. Default: `"https://api.anthropic.com"` |
| anthropic_version | string | No | Value for the `anthropic-version` header; controls API contract versioning. Default: `"2023-06-01"` |
| raw_on_error | boolean | No | When true, preserve the raw response text under `<output_key>_raw` if JSON parsing fails. Default: `true` |
| swallow_on_error | boolean | No | If true, leave the event unchanged on HTTP or parsing errors (no error payload injected). Default: `false` |
| extra_headers | object | No | Additional HTTP headers merged into each request without replacing the required defaults (`x-api-key`, `anthropic-version`, `Content-Type`, `Accept`, `User-Agent`). |
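Per the `extra_headers` description, user-supplied headers are added but never replace the required defaults. A sketch of that merge, assuming defaults win on conflict (`merge_headers` is an illustrative name):

```python
def merge_headers(defaults, extra):
    """Sketch: start from the extra headers, then overlay the required defaults
    so a user-supplied header can never replace them."""
    merged = dict(extra or {})
    merged.update(defaults)  # defaults win on any key collision
    return merged

defaults = {"x-api-key": "sk-test",
            "anthropic-version": "2023-06-01",
            "Content-Type": "application/json"}
merged = merge_headers(defaults, {"X-Request-Id": "abc123",
                                  "x-api-key": "should-be-ignored"})
```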
## Base Configuration
These configuration options are available on all steps:
| Parameter | Type | Default | Description |
|---|---|---|---|
| name | string | null | Optional name for this step (for documentation and debugging) |
| description | string | null | Optional description of what this step does |
retries | integer | 0 | Number of retry attempts (0-10) |
backoff_seconds | number | 0 | Backoff (seconds) applied between retry attempts |
retry_propagate | boolean | false | If True, raise last exception after exhausting retries; otherwise swallow. |
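The base retry semantics described above can be sketched as follows, assuming `step` is any zero-argument callable (the function and its signature are illustrative, not the actual implementation):

```python
import time

def run_with_retries(step, retries=0, backoff_seconds=0.0, retry_propagate=False):
    """Sketch: one initial attempt plus up to `retries` retries, sleeping
    backoff_seconds between attempts; on exhaustion either re-raise the last
    exception (retry_propagate=True) or swallow it."""
    last_exc = None
    for attempt in range(retries + 1):
        try:
            return step()
        except Exception as exc:
            last_exc = exc
            if attempt < retries and backoff_seconds:
                time.sleep(backoff_seconds)
    if retry_propagate and last_exc is not None:
        raise last_exc
    return None

calls = []
def flaky():
    """Hypothetical step that fails twice, then succeeds."""
    calls.append(1)
    if len(calls) < 3:
        raise RuntimeError("transient")
    return "ok"

result = run_with_retries(flaky, retries=3)
```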