openai_completion

Generate text using OpenAI's GPT models or compatible APIs (Azure OpenAI, local models).

Overview

This step provides access to OpenAI's powerful language models including GPT-4, GPT-4 Turbo, and GPT-3.5. It supports the chat completions API format, which is now the standard for all OpenAI models. You can customize system prompts, control generation parameters, enable JSON mode for structured outputs, and track token usage. The step also works with OpenAI-compatible APIs by changing the base_url. Perfect for text generation, code assistance, data extraction, analysis, translation, and creative tasks.

Setup:
1. Create an OpenAI account at https://platform.openai.com/
2. Generate an API key from the API Keys section (https://platform.openai.com/api-keys)
3. Store your API key securely (e.g., as an environment variable: OPENAI_API_KEY)
4. Choose a model (gpt-4, gpt-4-turbo-preview, gpt-3.5-turbo, etc.)

API Key: Required. Get your API key from https://platform.openai.com/api-keys
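
If you only need the required fields, a step can rely entirely on the documented defaults. The following is a minimal sketch, assuming the key is referenced from the environment as in the examples below; with input_from and output_to omitted, the whole event is serialized as the user message and the reply is stored under the default "openai" key.

type: openai_completion
api_key: ${env:openai_api_key}
model: gpt-4o-mini
# input_from omitted: the entire event is serialized to JSON and sent as the user message
# output_to omitted: the response is stored under the default key "openai"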

Examples

Basic GPT-4 completion

Simple text generation with GPT-4

type: openai_completion
api_key: ${env:openai_api_key}
model: gpt-4-turbo-preview
input_from: user_prompt
output_to: ai_response
max_tokens: 500

Code assistant with system prompt

Specialized assistant for generating code examples

type: openai_completion
api_key: ${env:openai_api_key}
model: gpt-4-turbo-preview
system: You are an expert Python developer. Provide clean, well-commented code with explanations.
input_from: coding_question
output_to: code_solution
temperature: 0.3
max_tokens: 1500

Structured JSON extraction

Force model to output valid JSON for data extraction

type: openai_completion
api_key: ${env:openai_api_key}
model: gpt-4-turbo-preview
system: Extract the person's name, email, and phone number from the text. Return as JSON with keys: name, email, phone.
input_from: message_text
output_to: contact_info
response_format:
  type: json_object
temperature: 0.0

Lead scoring with template prompt

Use prompt template with variable interpolation for structured analysis

type: openai_completion
api_key: ${env:openai_api_key}
model: gpt-4-turbo-preview
system: "You are a lead scoring assistant. Score leads from 1-10 and return only valid JSON."
prompt: |
  Score this lead from 1-10 based on:
  - Company size: ${company_data.employees}
  - Industry: ${input.industry}
  - Budget: ${input.budget}

  Return JSON with this exact structure:
  {
    "score": <number from 1-10>,
    "reasoning": "<brief explanation>"
  }
response_format:
  type: json_object
temperature: 0.0
output_to: lead_score

Cost-effective with GPT-3.5

Use GPT-3.5 Turbo for high-volume, lower-cost tasks

type: openai_completion
api_key: ${env:openai_api_key}
model: gpt-3.5-turbo
system: Summarize the following text in 2-3 sentences.
input_from: article.content
output_to: article.summary
temperature: 0.5
max_tokens: 200
include_usage: true
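# Per the configuration reference, usage statistics are saved under
# '<output_key>_usage'; with this output_to they would presumably
# appear at article.summary_usage.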

Structured JSON with schema validation

Use JSON schema for guaranteed structure with strict validation

type: openai_completion
api_key: ${env:openai_api_key}
model: gpt-4o-mini
system: "You are a lead scoring assistant. Analyze leads and return scores with reasoning."
prompt: |
  Score this lead from 1-10 based on:
  - Company size: ${company.employees}
  - Industry: ${company.industry}
  - Budget: ${company.budget}
  - Engagement: ${company.engagement_score}
response_format:
  type: json_schema
  json_schema:
    name: lead_score
    strict: true
    schema:
      type: object
      properties:
        score:
          type: number
          minimum: 1
          maximum: 10
        reasoning:
          type: string
        confidence:
          type: string
          enum: [low, medium, high]
      required: [score, reasoning, confidence]
      additionalProperties: false
output_to: lead_analysis
temperature: 0.0

Azure OpenAI integration

Use Azure OpenAI Service with custom endpoint

type: openai_completion
api_key: ${env:azure_openai_key}
model: gpt-4
base_url: https://your-resource.openai.azure.com/openai/deployments/gpt-4
input_from: query
output_to: response
max_tokens: 1000
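
Note that some Azure OpenAI deployments expect the key in an 'api-key' request header rather than as a Bearer token. If yours does, one option is to supply it via extra_headers; this is a hedged sketch, assuming header values are interpolated the same way as other config values:

type: openai_completion
api_key: ${env:azure_openai_key}
model: gpt-4
base_url: https://your-resource.openai.azure.com/openai/deployments/gpt-4
extra_headers:
  # assumption: this deployment authenticates via the 'api-key' header
  # and ${env:...} interpolation also applies to header values
  api-key: ${env:azure_openai_key}
input_from: query
output_to: response
max_tokens: 1000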

Configuration

api_key (string, required): OpenAI-compatible API key sent as a Bearer token.
model (string, required): Chat model identifier (for example 'gpt-4o-mini').
input_from (string, optional): Dot path selecting the user message content. When omitted, the entire event is serialized to JSON.
input_key (string, optional): Deprecated. Use input_from instead.
prompt (string, optional): Template string for the user message with ${path.to.key} interpolation. When provided, it takes precedence over input_from.
system (string, optional): Static system prompt text used when system_key does not resolve.
system_key (string, optional): Dot path in the event whose value overrides the static system prompt when present.
output_to (string, optional): Event key where the primary model response (first choice content) is stored. Default: "openai"
output_key (string, optional): Deprecated. Use output_to instead.
include_usage (boolean, optional): When true, token usage statistics are saved under '<output_key>_usage'. Default: true
temperature (string, optional): Sampling temperature (the API supports 0.0-2.0). Lower values produce more deterministic output.
max_tokens (string, optional): Maximum number of tokens the model may generate in the response.
base_url (string, optional): Base API URL for the OpenAI-compatible endpoint. Override when routing through a proxy. Default: "https://api.openai.com/v1"
raw_on_error (boolean, optional): If true, the raw response body is stored under '<output_key>_raw' when JSON parsing fails. Default: false
swallow_on_error (boolean, optional): If true, the event is left unchanged on errors (nothing is injected). Default: false
extra_headers (string, optional): Dict of additional headers merged into the request headers; Authorization is preserved unless explicitly overwritten.
response_format (string, optional): Response format configuration. Use {'type': 'json_object'} for basic JSON mode, or {'type': 'json_schema', 'json_schema': {...}} for structured output with schema validation.
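
A few of these options have no dedicated example above. The sketch below illustrates a per-event system prompt override together with the error-handling flags; 'persona' is a hypothetical event key used only for illustration.

type: openai_completion
api_key: ${env:openai_api_key}
model: gpt-4o-mini
system: You are a helpful assistant.
# if the event contains a value at 'persona', it overrides the static system prompt
system_key: persona
input_from: user_prompt
output_to: ai_response
response_format:
  type: json_object
# on JSON parse failure, keep the raw body (presumably under 'ai_response_raw')
raw_on_error: true
# on other errors, leave the event untouched instead of injecting anything
swallow_on_error: true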

Base Configuration

These configuration options are available on all steps:

name: Optional name for this step (for documentation and debugging). Default: null
description: Optional description of what this step does. Default: null
retries (integer): Number of retry attempts (0-10). Default: 0
backoff_seconds (number): Backoff in seconds applied between retry attempts. Default: 0
retry_propagate (boolean): If true, the last exception is raised after retries are exhausted; otherwise it is swallowed. Default: false
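
Because these options apply to every step, transient API errors can be retried on the completion step itself. A brief sketch, assuming the surrounding pipeline should see the final failure:

type: openai_completion
name: summarize_article
api_key: ${env:openai_api_key}
model: gpt-3.5-turbo
input_from: article.content
output_to: article.summary
# retry up to 3 times with a 2-second backoff between attempts
retries: 3
backoff_seconds: 2
# re-raise the last exception after retries are exhausted
retry_propagate: true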