Prompt Entity

The Prompt entity integrates AI/LLM services into workflows. It combines a model configuration, a prompt template, and an optional script for intelligent content generation.

Purpose

Prompt entities enable:

  1. AI Content Generation — Generate text using LLM APIs
  2. Dynamic Prompts — Interpolate message data into prompts
  3. Model Flexibility — Switch between AI providers via Models
  4. Response Processing — Transform AI output via scripts

Configuration

Available Fields

Field         Description
-----         -----------
Name          Display name (e.g., "Generate Welcome Message")
Description   Optional documentation
Model         Reference to a Model (API URL + credentials)
Prompt        Template text sent to the LLM
Script        JavaScript code for pre/post processing
Arguments     Variables injected before execution

Model Selection

Models are configured in Settings → Models and contain:

  • API endpoint URL
  • Authentication (Token or Client Key/Secret)
  • Model name for display

Example Model Configuration:

Name: GPT-4 Turbo
URL: https://api.openai.com/v1/chat/completions
Auth: Token (sk-proj-xxx)

Prompt Template

The prompt field contains the text sent to the LLM. Use handlebar syntax {{...}} for dynamic values:

You are a helpful assistant for {{COMPANY_NAME}}.

A new user just signed up:
- Name: {{message.user.name}}
- Email: {{message.user.email}}

Write a personalized welcome message for this user.
Keep it friendly and under 280 characters for social media.

Handlebar Substitution:

Variables are automatically substituted using {{path.to.variable}} syntax:

Source                 Example                        Description
------                 -------                        -----------
Message fields         {{message.user.name}}          Access any property from the incoming message
Nested message fields  {{message.event.player.name}}  Deep property access
Environment variables  {{COMPANY_NAME}}               Variables defined in Settings
Argument variables     {{userName}}                   Arguments defined on this entity
Upstream variables     {{socialContent}}              Variables set by earlier entities
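
For example, with the welcome-message template above and COMPANY_NAME set to "Acme Corp", an incoming message like this (illustrative payload):

{
  "user": {
    "name": "Ada Lovelace",
    "email": "ada@example.com"
  }
}

resolves the template to:

You are a helpful assistant for Acme Corp.

A new user just signed up:
- Name: Ada Lovelace
- Email: ada@example.com

Write a personalized welcome message for this user.
Keep it friendly and under 280 characters for social media.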

How it works:

  1. Message is injected into context
  2. Arguments are injected and evaluated
  3. Handlebar substitution runs — all {{...}} patterns are replaced with actual values
  4. User script runs (can further modify prompt variable)
  5. executePromptWithModel() sends the final prompt to the LLM

Tip: Unresolved handlebars (undefined variables) are kept as-is and logged as warnings. This helps with debugging.

Script Execution

Prompt scripts can prepare data before the prompt is sent and process the response after it returns:

// Pre-processing: Prepare data for the prompt
var userName = message.user.displayName || message.user.email.split('@')[0];
var userEmail = message.user.email;
var companyName = COMPANY_NAME || 'Acme Corp';

// The prompt template will use these variables

// Post-processing happens after executePromptWithModel()
// Access the response via latestPromptResponse()
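
A minimal post-processing sketch (the trimmedResponse variable name is illustrative, not part of the API):

// Post-processing: clean up the model output for downstream entities
var aiText = latestPromptResponse();
var trimmedResponse = (aiText || '').trim();

if (!trimmedResponse) {
  print('Warning: empty model response');
}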

Auto-Execution

When a Model is associated, the Prompt entity automatically calls:

await executePromptWithModel();

This function:

  1. Reads the prompt field
  2. Uses the entityModel credentials
  3. Calls the model's API
  4. Stores the response for latestPromptResponse()
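
Conceptually, this is equivalent to the following manual call built from the documented helpers (a sketch; the actual internals may differ):

// Rough equivalent of executePromptWithModel() (sketch)
var result;
if (entityModel.token) {
  result = await promptCallToken(entityModel.token, prompt, entityModel.modelUrl);
} else {
  result = await promptCallKeys(entityModel.clientKey, entityModel.secretKey, prompt, entityModel.modelUrl);
}
// The built-in version also stores the result so latestPromptResponse() can return it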

Available in Script Context

Variable               Description
--------               -----------
message                The incoming message payload
prompt                 The prompt template text
entityModel            Model configuration object
entityModel.modelUrl   API endpoint
entityModel.token      Bearer token (if configured)
entityModel.clientKey  Client key (if configured)
entityModel.secretKey  Secret key (if configured)
Arguments              Each argument as a top-level variable
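
A quick sanity check of what the script sees (print output is illustrative):

// Inspect the script context
print('Model URL:', entityModel.modelUrl);
print('Prompt length:', prompt.length);
print('Has token auth:', Boolean(entityModel.token));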

Built-in Functions

// Call prompt with model credentials
await executePromptWithModel();

// Get the latest response
const response = latestPromptResponse();

// Get a specific response by index
const firstResponse = getPromptResponse(0);

// Manual prompt call with token auth
const tokenResult = await promptCallToken(token, promptText, modelUrl);

// Manual prompt call with key/secret auth
const keysResult = await promptCallKeys(clientKey, secretKey, promptText, modelUrl);

Execution Flow

The PromptEvaluator processes Prompt entities:

┌─────────────────────────────────────────────────────────────┐
│ PromptEvaluator.evaluate()                                   │
├─────────────────────────────────────────────────────────────┤
│ 1. Inject message into V8 context                            │
│ 2. Inject prompt, model, and arguments                       │
│ 3. Process handlebar substitution in prompt                  │
│    {{message.x}} → actual values                             │
│ 4. Run pre-processing script (if any)                        │
│ 5. Auto-execute: executePromptWithModel()                    │
│ 6. Capture context variables                                 │
│ 7. Return { action: 'process_children', children }          │
└─────────────────────────────────────────────────────────────┘

Examples

Example 1: Simple Welcome Message

Generate a welcome message for new users:

Configuration:

{
  "name": "Generate Welcome Message",
  "modelId": "gpt4-model-uuid",
  "prompt": "Write a friendly welcome message for a new user named {{userName}}. Keep it under 200 characters.",
  "arguments": [
    {
      "argumentName": "userName",
      "argumentValue": "{{message.user.name}}",
      "argumentDescription": "User's display name"
    }
  ]
}

Script:

// Fallback if name is missing
var userName = message.user.name || 'friend';

Example 2: Content Summarization

Summarize long-form content:

Configuration:

{
  "name": "Summarize Article",
  "modelId": "claude-model-uuid",
  "prompt": "Summarize the following article in 3 bullet points:\n\n{{articleContent}}"
}

Script:

// Extract content from message
var articleContent = message.content.body || message.content.text;

// Truncate if too long
if (articleContent.length > 10000) {
  articleContent = articleContent.substring(0, 10000) + '...';
}

Example 3: Sentiment-Aware Response

Generate response based on detected sentiment:

Configuration:

{
  "name": "Generate Support Response",
  "modelId": "gpt4-model-uuid",
  "prompt": "A customer sent this message:\n\n{{customerMessage}}\n\nTheir detected sentiment is: {{sentiment}}\n\nWrite an appropriate support response."
}

Script:

var customerMessage = message.content;
var sentiment = message.metadata.sentiment || 'neutral';

// Add context for the AI by appending it to the prompt variable
// (scripts can modify the prompt variable before it is sent)
var additionalContext = '';
if (sentiment === 'negative') {
  additionalContext = 'The customer seems frustrated. Be extra empathetic.';
} else if (sentiment === 'positive') {
  additionalContext = 'The customer is happy. Match their enthusiasm.';
}

if (additionalContext) {
  prompt = prompt + '\n\n' + additionalContext;
}

Example 4: Multi-Model Workflow

Use different models for different tasks:

Entity 1: Fast Classification

{
  "name": "Classify Intent",
  "modelId": "fast-model-uuid",
  "prompt": "Classify this message into one category: support, sales, or general.\n\nMessage: {{message.content}}\n\nRespond with just the category name."
}

Entity 2: Detailed Response

{
  "name": "Generate Detailed Response",
  "modelId": "quality-model-uuid",
  "prompt": "Write a detailed response for this {{intent}} inquiry:\n\n{{message.content}}"
}
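
For {{intent}} to resolve in Entity 2, Entity 1 must expose its classification as a variable for downstream entities (see "Upstream variables" above). A sketch of Entity 1's post-processing script (the category list mirrors the prompt; anything unexpected falls back to "general"):

// Entity 1 post-processing: expose the classification downstream as intent
var intent = (latestPromptResponse() || 'general').trim().toLowerCase();

// Guard against unexpected model output
if (['support', 'sales', 'general'].indexOf(intent) === -1) {
  intent = 'general';
}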

Example 5: Manual API Calls

For advanced use cases, call APIs directly:

// Use environment variables for credentials
const apiKey = CUSTOM_API_KEY;
const modelUrl = CUSTOM_MODEL_URL;

// Build custom prompt
const customPrompt = `
System: You are a JSON generator.
User: Generate product data for: ${message.productName}
Output: JSON only, no markdown
`;

// Call with token auth
const response = await promptCallToken(apiKey, customPrompt, modelUrl);

// Parse response
var productData = null;
try {
  productData = JSON.parse(response);
} catch (e) {
  print('Failed to parse response:', response);
}

Response Handling

Accessing Responses

After executePromptWithModel() completes:

// In the same entity or downstream entities
const aiResponse = latestPromptResponse();

// The response is typically a string
print('AI said:', aiResponse);

// Parse if JSON
try {
  const structured = JSON.parse(aiResponse);
} catch (e) {
  // Handle plain-text response
}

Response Storage

Responses are stored in the execution context:

{
  "____prompt_responses____": [
    "Welcome to Acme Corp! We're thrilled to have you..."
  ],
  "____latest_prompt_response____": "Welcome to Acme Corp! We're thrilled..."
}

This data is saved with the processed message in S3.

Best Practices

Prompt Engineering

  1. Be specific — Clear instructions produce better results
  2. Provide context — Include relevant message data
  3. Set constraints — Character limits, format requirements
  4. Use examples — Show desired output format
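
A template applying all four practices (the content is illustrative):

You are a support assistant for {{COMPANY_NAME}}.

Classify the following ticket and draft a reply.

Ticket: {{message.content}}

Requirements:
- Reply in under 100 words
- Return JSON only: {"category": "...", "reply": "..."}

Example output:
{"category": "billing", "reply": "Thanks for reaching out! I've reviewed your invoice and..."}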

Error Handling

try {
  await executePromptWithModel();
  const response = latestPromptResponse();

  if (!response) {
    print('Warning: Empty response from model');
    var fallbackContent = 'Default message';
  }
} catch (error) {
  print('Model call failed:', error.message);
  // Workflow continues to children
}

Cost Optimization

  1. Choose appropriate models — Faster models for simple tasks
  2. Truncate long inputs — Limit token usage (sketched below)
  3. Cache responses — Store in variables for reuse (sketched below)
  4. Batch when possible — Combine related prompts
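
A sketch of practices 2 and 3 (the 8,000-character threshold and the cachedSummary name are illustrative):

// 2. Truncate long inputs before they reach the prompt
var body = String(message.content.body || '');
if (body.length > 8000) {
  body = body.substring(0, 8000) + '...';
}

// 3. After executePromptWithModel(), keep the response in a variable
//    so downstream entities reuse it instead of re-calling the model
var cachedSummary = latestPromptResponse();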

Security

  1. Never expose credentials — Use Model configuration
  2. Sanitize user input — Prevent prompt injection (see the sketch after this list)
  3. Validate responses — Don't trust AI output blindly
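
A sanitization sketch for practice 2 (the stripped patterns are illustrative, not an exhaustive defense):

// Strip template syntax and obvious injection phrasing from user input
// before it is interpolated into a prompt (illustrative, not exhaustive)
var raw = String(message.content || '');
var customerMessage = raw
  .replace(/\{\{[\s\S]*?\}\}/g, '')                               // drop handlebar patterns
  .replace(/ignore (all |previous )?instructions/gi, '[removed]') // crude injection filter
  .substring(0, 4000);                                            // cap length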