Prompt Entity

The Prompt entity integrates AI/LLM services into workflows. It combines a model configuration, prompt template, and optional preScript / postScript snippets for intelligent content generation.

Purpose

Prompt entities enable:

  1. AI Content Generation — Generate text using LLM APIs
  2. Dynamic Prompts — Interpolate message data into prompts
  3. Model Flexibility — Switch between AI providers via Models
  4. Response Processing — Transform AI output via postScript (or prepare inputs in preScript)

Configuration

Available Fields

| Field | Description |
| --- | --- |
| Name | Display name (e.g., "Generate Welcome Message") |
| Description | Optional documentation |
| Model | Reference to a Model (API URL + credentials) |
| Prompt | Template text sent to the LLM |
| Pre-Script | Optional JavaScript run before arguments, condition, and prompt/model handling |
| Post-Script | Optional JavaScript run after the model call, before context capture |
| Arguments | Variables injected after the pre-script, before condition evaluation |

Model Selection

Models are configured in Settings → Models and contain:

  • API endpoint URL
  • Authentication (Token or Client Key/Secret)
  • Model name for display

Example Model Configuration:

Name: GPT-4 Turbo
URL: https://api.openai.com/v1/chat/completions
Auth: Token (sk-proj-xxx)

Prompt Template

The prompt field contains the text sent to the LLM. Use handlebars syntax ({{...}}) for dynamic values:

You are a helpful assistant for {{COMPANY_NAME}}.

A new user just signed up:
- Name: {{user.name}}
- Email: {{user.email}}

Write a personalized welcome message for this user.
Keep it friendly and under 280 characters for social media.

Handlebars Substitution:

Variables are automatically substituted using {{path.to.variable}} syntax:

| Source | Example | Description |
| --- | --- | --- |
| Message fields | {{user.name}} | Access any property from the incoming message |
| Nested message fields | {{event.player.name}} | Deep property access |
| Environment variables | {{COMPANY_NAME}} | Variables defined in Settings |
| Argument variables | {{userName}} | Arguments defined on this entity |
| Upstream variables | {{socialContent}} | Variables set by earlier entities |
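
The substitution behavior described above can be sketched as a small dot-path lookup. This is an illustrative helper, not the engine's actual implementation:

```javascript
// Hypothetical sketch of {{path.to.variable}} substitution: walk each
// dot-separated path through the data; keep unresolved variables as-is.
function renderHandlebars(template, data) {
  return template.replace(/\{\{\s*([\w.]+)\s*\}\}/g, (match, path) => {
    const value = path.split('.').reduce(
      (obj, key) => (obj == null ? undefined : obj[key]),
      data
    );
    // Undefined variables are left untouched (and logged as warnings)
    return value === undefined ? match : String(value);
  });
}

const rendered = renderHandlebars(
  'Welcome {{user.name}} to {{COMPANY_NAME}}! ({{missing.var}})',
  { user: { name: 'Ada' }, COMPANY_NAME: 'Acme Corp' }
);
// → "Welcome Ada to Acme Corp! ({{missing.var}})"
```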

How it works (unified pipeline):

  1. Message is injected; preScript runs if present
  2. Arguments are injected and evaluated
  3. Condition is evaluated (if any); ___result___ is set
  4. Prompt/model: handlebars on the prompt, then executePromptWithModel() when a model is configured
  5. postScript runs if present (e.g. inspect latestPromptResponse())
  6. Context is captured; the workflow continues to children (no branching modes on Prompt entities)
Tip: Unresolved handlebars (undefined variables) are kept as-is and logged as warnings. This helps with debugging.

Script Execution

  • preScript — Runs before arguments injection and condition evaluation. Use it to set variables the condition or handlebars need.
  • postScript — Runs after the model call (when configured). Use it to log or transform latestPromptResponse().
// preScript: values available before condition / handlebars
var userName = user.displayName || user.email.split('@')[0];
var companyName = COMPANY_NAME || 'Acme Corp';

// postScript: after executePromptWithModel()
print('AI said:', latestPromptResponse());

Auto-Execution

When a Model is associated, the Prompt entity automatically calls:

await executePromptWithModel();

This function:

  1. Reads the prompt field
  2. Uses the entityModel credentials
  3. Calls the model's API
  4. Stores the response for latestPromptResponse()
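
As a rough sketch of what this involves (an assumed request shape; the real implementation and payload format may differ), the call can be pictured as assembling an HTTP request from the entityModel fields:

```javascript
// Hypothetical sketch only: build a request from entityModel. The field
// names (modelUrl, token) come from the script context; the body format
// is an assumption for illustration.
function buildModelRequest(entityModel, prompt) {
  const headers = { 'Content-Type': 'application/json' };
  if (entityModel.token) {
    // Token auth is sent as a Bearer header
    headers['Authorization'] = 'Bearer ' + entityModel.token;
  }
  return {
    url: entityModel.modelUrl,
    method: 'POST',
    headers: headers,
    body: JSON.stringify({ prompt: prompt }),
  };
}

const req = buildModelRequest(
  { modelUrl: 'https://api.openai.com/v1/chat/completions', token: 'sk-proj-xxx' },
  'Say hello'
);
```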

Available in Script Context

| Variable | Description |
| --- | --- |
| (top-level fields) | Incoming message properties (type, user, content, …) are available directly in script scope |
| prompt | The prompt template text |
| entityModel | Model configuration object |
| entityModel.modelUrl | API endpoint |
| entityModel.token | Bearer token (if configured) |
| entityModel.clientKey | Client key (if configured) |
| entityModel.secretKey | Secret key (if configured) |
| Arguments | Each argument as a top-level variable |

Built-in Functions

// Call prompt with model credentials
await executePromptWithModel();

// Get the latest response
const response = latestPromptResponse();

// Get a specific response by index
const firstResponse = getPromptResponse(0);

// Manual prompt call with token auth
const tokenResult = await promptCallToken(token, promptText, modelUrl);

// Manual prompt call with key/secret auth
const keyResult = await promptCallKeys(clientKey, secretKey, promptText, modelUrl);

Execution Flow

Prompt entities use UnifiedEntityEvaluator, same as Events and Actions. For a typical Prompt, the pipeline runs through pre-script → arguments → condition (if any) → inject prompt/model → handlebars → executePromptWithModel() → post-script → capture context → continue to children.

┌─────────────────────────────────────────────────────────────┐
│ UnifiedEntityEvaluator (Prompt node)                         │
├─────────────────────────────────────────────────────────────┤
│ 1. Inject message                                            │
│ 2. Run preScript (if any)                                    │
│ 3. Inject arguments (if any)                                 │
│ 4. Evaluate condition (if any)                               │
│ 5. Inject prompt/model, handlebars, optional vision prep     │
│ 6. executePromptWithModel() (if model configured)            │
│ 7. Run postScript (if any)                                   │
│ 8. Capture context → process_children                        │
└─────────────────────────────────────────────────────────────┘

Examples

Example 1: Simple Welcome Message

Generate a welcome message for new users:

Configuration:

{
  "name": "Generate Welcome Message",
  "modelId": "gpt4-model-uuid",
  "prompt": "Write a friendly welcome message for a new user named {{userName}}. Keep it under 200 characters.",
  "arguments": [
    {
      "argumentName": "userName",
      "argumentValue": "{{user.name}}",
      "argumentDescription": "User's display name"
    }
  ]
}

Pre-Script:

// Fallback if name is missing
var userName = user.name || 'friend';

Example 2: Content Summarization

Summarize long-form content:

Configuration:

{
  "name": "Summarize Article",
  "modelId": "claude-model-uuid",
  "prompt": "Summarize the following article in 3 bullet points:\n\n{{articleContent}}"
}

Pre-Script:

// Extract content from message
var articleContent = content.body || content.text;

// Truncate if too long
if (articleContent.length > 10000) {
  articleContent = articleContent.substring(0, 10000) + '...';
}

Example 3: Sentiment-Aware Response

Generate response based on detected sentiment:

Configuration:

{
  "name": "Generate Support Response",
  "modelId": "gpt4-model-uuid",
  "prompt": "A customer sent this message:\n\n{{customerMessage}}\n\nTheir detected sentiment is: {{sentiment}}\n\n{{additionalContext}}\n\nWrite an appropriate support response."
}

Pre-Script:

var customerMessage = content;
var sentiment = metadata.sentiment || 'neutral';

// Add context for the AI (interpolated as {{additionalContext}} above)
var additionalContext = '';
if (sentiment === 'negative') {
  additionalContext = 'The customer seems frustrated. Be extra empathetic.';
} else if (sentiment === 'positive') {
  additionalContext = 'The customer is happy. Match their enthusiasm.';
}

Example 4: Multi-Model Workflow

Use different models for different tasks:

Entity 1: Fast Classification

{
  "name": "Classify Intent",
  "modelId": "fast-model-uuid",
  "prompt": "Classify this message into one category: support, sales, or general.\n\nMessage: {{content}}\n\nRespond with just the category name."
}

Entity 2: Detailed Response

{
  "name": "Generate Detailed Response",
  "modelId": "quality-model-uuid",
  "prompt": "Write a detailed response for this {{intent}} inquiry:\n\n{{content}}"
}
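
For Entity 2's {{intent}} to resolve, Entity 1 needs to publish its classification as a variable. One way is a post-script on Entity 1 that normalizes the raw model text; normalizeIntent here is a hypothetical helper, not a built-in:

```javascript
// Hypothetical post-script helper: coerce the classifier's raw output
// into one of the expected categories before downstream interpolation.
function normalizeIntent(raw) {
  const allowed = ['support', 'sales', 'general'];
  const intent = String(raw).trim().toLowerCase();
  // Fall back to 'general' if the model returned anything unexpected
  return allowed.includes(intent) ? intent : 'general';
}

// In Entity 1's post-script:
// var intent = normalizeIntent(latestPromptResponse());
```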

Example 5: Manual API Calls

For advanced use cases, call APIs directly:

// Use environment variables for credentials
const apiKey = CUSTOM_API_KEY;
const modelUrl = CUSTOM_MODEL_URL;

// Build custom prompt
const customPrompt = `
System: You are a JSON generator.
User: Generate product data for: ${productName}
Output: JSON only, no markdown
`;

// Call with token auth
const response = await promptCallToken(apiKey, customPrompt, modelUrl);

// Parse response
var productData = null;
try {
  productData = JSON.parse(response);
} catch (e) {
  print('Failed to parse response:', response);
}

Response Handling

Accessing Responses

After executePromptWithModel() completes:

// In the same entity or downstream entities
const aiResponse = latestPromptResponse();

// The response is typically a string
print('AI said:', aiResponse);

// Parse if JSON
try {
  const structured = JSON.parse(aiResponse);
} catch (e) {
  // Handle plain text response
}

Response Storage

Responses are stored in the execution context:

{
  "____prompt_responses____": [
    "Welcome to Acme Corp! We're thrilled to have you..."
  ],
  "____latest_prompt_response____": "Welcome to Acme Corp! We're thrilled..."
}

This data is saved with the processed message in S3.

Best Practices

Prompt Engineering

  1. Be specific — Clear instructions produce better results
  2. Provide context — Include relevant message data
  3. Set constraints — Character limits, format requirements
  4. Use examples — Show desired output format

Error Handling

Model and script failures are handled by the consumer (may exit the workflow path). In postScript, guard optional work:

try {
  const response = latestPromptResponse();
  if (!response) {
    print('Warning: Empty response from model');
    var fallbackContent = 'Default message';
  }
} catch (error) {
  print('Post-processing failed:', error.message);
}

Cost Optimization

  1. Choose appropriate models — Faster models for simple tasks
  2. Truncate long inputs — Limit token usage
  3. Cache responses — Store in variables for reuse
  4. Batch when possible — Combine related prompts

Security

  1. Never expose credentials — Use Model configuration
  2. Sanitize user input — Prevent prompt injection
  3. Validate responses — Don't trust AI output blindly
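
For point 2, a minimal pre-script sketch of input sanitization (an assumed approach, and a partial mitigation at best): strip handlebar delimiters and cap length before user text is interpolated into a prompt.

```javascript
// Illustrative mitigation only — it reduces, but does not eliminate,
// prompt-injection risk.
function sanitizeForPrompt(text, maxLength) {
  return String(text)
    .replace(/\{\{|\}\}/g, '') // strip {{ }} so input can't add template variables
    .replace(/[\r\n]+/g, ' ')  // collapse newlines that could fake new instructions
    .slice(0, maxLength)
    .trim();
}

// Usage in a pre-script:
// var customerMessage = sanitizeForPrompt(content, 2000);
```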