Prompt Entity
The Prompt entity integrates AI/LLM services into workflows. It combines a model configuration, prompt template, and optional script for intelligent content generation.
Purpose
Prompt entities enable:
- AI Content Generation — Generate text using LLM APIs
- Dynamic Prompts — Interpolate message data into prompts
- Model Flexibility — Switch between AI providers via Models
- Response Processing — Transform AI output via scripts
Configuration
Available Fields
| Field | Description |
|---|---|
| Name | Display name (e.g., "Generate Welcome Message") |
| Description | Optional documentation |
| Model | Reference to a Model (API URL + credentials) |
| Prompt | Template text sent to the LLM |
| Script | JavaScript code for pre/post processing |
| Arguments | Variables injected before execution |
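Putting these fields together, a minimal configuration sketch (using the same JSON shape as the examples later on this page; the modelId value is a placeholder) might look like:
{
  "name": "Generate Welcome Message",
  "description": "Writes a short greeting for new signups",
  "modelId": "gpt4-model-uuid",
  "prompt": "Write a one-line welcome for {{message.user.name}}."
}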
Model Selection
Models are configured in Settings → Models and contain:
- API endpoint URL
- Authentication (Token or Client Key/Secret)
- Model name for display
Example Model Configuration:
Name: GPT-4 Turbo
URL: https://api.openai.com/v1/chat/completions
Auth: Token (sk-proj-xxx)
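For providers configured with Client Key/Secret authentication instead of a token, a hypothetical configuration might look like:
Name: Custom Generation Endpoint
URL: https://api.example.com/v1/generate
Auth: Client Key/Secret (key-xxx / secret-xxx)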
Prompt Template
The prompt field contains the text sent to the LLM. Use handlebar syntax {{...}} for dynamic values:
You are a helpful assistant for {{COMPANY_NAME}}.
A new user just signed up:
- Name: {{message.user.name}}
- Email: {{message.user.email}}
Write a personalized welcome message for this user.
Keep it friendly and under 280 characters for social media.
Handlebar Substitution:
Variables are automatically substituted using {{path.to.variable}} syntax:
| Source | Example | Description |
|---|---|---|
| Message fields | {{message.user.name}} | Access any property from the incoming message |
| Message nested | {{message.event.player.name}} | Deep property access |
| Environment variables | {{COMPANY_NAME}} | Variables defined in Settings |
| Argument variables | {{userName}} | Arguments defined on this entity |
| Upstream variables | {{socialContent}} | Variables set by earlier entities |
How it works:
- Message is injected into context
- Arguments are injected and evaluated
- Handlebar substitution runs: all {{...}} patterns are replaced with actual values
- User script runs (can further modify the prompt variable)
- executePromptWithModel() sends the final prompt to the LLM
Unresolved handlebars (undefined variables) are kept as-is and logged as warnings. This helps with debugging.
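For example, given a hypothetical incoming message { "user": { "name": "Ada", "email": "ada@example.com" } }, the template line "- Name: {{message.user.name}}" resolves to "- Name: Ada" before the prompt is sent.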
Script Execution
Prompt scripts can prepare data before the prompt executes and process the model's response afterward:
// Pre-processing: Prepare data for the prompt
var userName = message.user.displayName || message.user.email.split('@')[0];
var userEmail = message.user.email;
var companyName = COMPANY_NAME || 'Acme Corp';
// The prompt template will use these variables
// Post-processing happens after executePromptWithModel()
// Access the response via latestPromptResponse()
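A post-processing sketch built only from the functions documented below; socialContent is a hypothetical variable for a downstream entity:
// Send the prompt, then shape the response for later entities
await executePromptWithModel();
var welcomeText = latestPromptResponse();
// Hypothetical downstream variable: enforce the 280-character limit from the template
var socialContent = (welcomeText || '').trim().substring(0, 280);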
Auto-Execution
When a Model is associated, the Prompt entity automatically calls:
await executePromptWithModel();
This function:
- Reads the prompt field
- Uses the entityModel credentials
- Calls the model's API
- Stores the response for latestPromptResponse()
Available in Script Context
| Variable | Description |
|---|---|
| message | The incoming message payload |
| prompt | The prompt template text |
| entityModel | Model configuration object |
| entityModel.modelUrl | API endpoint |
| entityModel.token | Bearer token (if configured) |
| entityModel.clientKey | Client key (if configured) |
| entityModel.secretKey | Secret key (if configured) |
| Arguments | Each argument as a top-level variable |
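A short sketch that inspects these context variables from a script; the log messages are illustrative:
// Log which endpoint and auth style this entity will use
print('Calling model at:', entityModel.modelUrl);
if (entityModel.token) {
  print('Auth: bearer token');
} else if (entityModel.clientKey) {
  print('Auth: client key/secret');
}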
Built-in Functions
// Call prompt with model credentials
await executePromptWithModel();
// Get the latest response
const response = latestPromptResponse();
// Get a specific response by index
const firstResponse = getPromptResponse(0);
// Manual prompt call with token auth
const result = await promptCallToken(token, promptText, modelUrl);
// Manual prompt call with key/secret auth
const result = await promptCallKeys(clientKey, secretKey, promptText, modelUrl);
Execution Flow
The PromptEvaluator processes Prompt entities:
┌─────────────────────────────────────────────────────────────┐
│ PromptEvaluator.evaluate() │
├─────────────────────────────────────────────────────────────┤
│ 1. Inject message into V8 context │
│ 2. Inject prompt, model, and arguments │
│ 3. Process handlebar substitution in prompt │
│ {{message.x}} → actual values │
│ 4. Run pre-processing script (if any) │
│ 5. Auto-execute: executePromptWithModel() │
│ 6. Capture context variables │
│ 7. Return { action: 'process_children', children } │
└─────────────────────────────────────────────────────────────┘
Examples
Example 1: Simple Welcome Message
Generate a welcome message for new users:
Configuration:
{
"name": "Generate Welcome Message",
"modelId": "gpt4-model-uuid",
"prompt": "Write a friendly welcome message for a new user named {{userName}}. Keep it under 200 characters.",
"arguments": [
{
"argumentName": "userName",
"argumentValue": "{{message.user.name}}",
"argumentDescription": "User's display name"
}
]
}
Script:
// Fallback if name is missing
var userName = message.user.name || 'friend';
Example 2: Content Summarization
Summarize long-form content:
Configuration:
{
"name": "Summarize Article",
"modelId": "claude-model-uuid",
"prompt": "Summarize the following article in 3 bullet points:\n\n{{articleContent}}"
}
Script:
// Extract content from message
var articleContent = message.content.body || message.content.text;
// Truncate if too long
if (articleContent.length > 10000) {
articleContent = articleContent.substring(0, 10000) + '...';
}
Example 3: Sentiment-Aware Response
Generate response based on detected sentiment:
Configuration:
{
"name": "Generate Support Response",
"modelId": "gpt4-model-uuid",
"prompt": "A customer sent this message:\n\n{{customerMessage}}\n\nTheir detected sentiment is: {{sentiment}}\n\nWrite an appropriate support response."
}
Script:
var customerMessage = message.content;
var sentiment = message.metadata.sentiment || 'neutral';
// Add context for the AI
if (sentiment === 'negative') {
var additionalContext = 'The customer seems frustrated. Be extra empathetic.';
} else if (sentiment === 'positive') {
var additionalContext = 'The customer is happy. Match their enthusiasm.';
}
Example 4: Multi-Model Workflow
Use different models for different tasks:
Entity 1: Fast Classification
{
"name": "Classify Intent",
"modelId": "fast-model-uuid",
"prompt": "Classify this message into one category: support, sales, or general.\n\nMessage: {{message.content}}\n\nRespond with just the category name."
}
Entity 2: Detailed Response
{
"name": "Generate Detailed Response",
"modelId": "quality-model-uuid",
"prompt": "Write a detailed response for this {{intent}} inquiry:\n\n{{message.content}}"
}
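For the {{intent}} placeholder in Entity 2 to resolve, its script can derive the value from Entity 1's stored output; a minimal sketch, assuming the classification flows downstream via latestPromptResponse():
Entity 2 Script:
// Read Entity 1's classification and normalize it for the prompt
var intent = (latestPromptResponse() || 'general').trim().toLowerCase();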
Example 5: Manual API Calls
For advanced use cases, call APIs directly:
// Use environment variables for credentials
const apiKey = CUSTOM_API_KEY;
const modelUrl = CUSTOM_MODEL_URL;
// Build custom prompt
const customPrompt = `
System: You are a JSON generator.
User: Generate product data for: ${message.productName}
Output: JSON only, no markdown
`;
// Call with token auth
const response = await promptCallToken(apiKey, customPrompt, modelUrl);
// Parse response
try {
var productData = JSON.parse(response);
} catch (e) {
print('Failed to parse response:', response);
var productData = null;
}
Response Handling
Accessing Responses
After executePromptWithModel() completes:
// In the same entity or downstream entities
const aiResponse = latestPromptResponse();
// The response is typically a string
print('AI said:', aiResponse);
// Parse if JSON
try {
const structured = JSON.parse(aiResponse);
} catch (e) {
// Handle plain text response
}
Response Storage
Responses are stored in the execution context:
{
"____prompt_responses____": [
"Welcome to Acme Corp! We're thrilled to have you..."
],
"____latest_prompt_response____": "Welcome to Acme Corp! We're thrilled..."
}
This data is saved with the processed message in S3.
Best Practices
Prompt Engineering
- Be specific — Clear instructions produce better results
- Provide context — Include relevant message data
- Set constraints — Character limits, format requirements
- Use examples — Show desired output format
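A template sketch that applies these practices; the placeholder names follow the conventions above:
You are a support assistant for {{COMPANY_NAME}}.
Task: Answer the customer's question below.
Constraints:
- Plain text only, no markdown
- Maximum 3 sentences
Example output:
Thanks for reaching out! Your order ships within 2 business days. Reply here if you need anything else.
Question: {{message.content}}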
Error Handling
try {
await executePromptWithModel();
const response = latestPromptResponse();
if (!response) {
print('Warning: Empty response from model');
var fallbackContent = 'Default message';
}
} catch (error) {
print('Model call failed:', error.message);
// Workflow continues to children
}
Cost Optimization
- Choose appropriate models — Faster models for simple tasks
- Truncate long inputs — Limit token usage
- Cache responses — Store in variables for reuse
- Batch when possible — Combine related prompts
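A small sketch combining truncation with response reuse; whether a prior response is available depends on what ran earlier in the workflow:
// Cap token usage by truncating the input before interpolation
var articleContent = String(message.content || '').substring(0, 4000);
// Reuse an earlier response instead of calling the model again
var cached = latestPromptResponse();
if (!cached) {
  await executePromptWithModel();
  cached = latestPromptResponse();
}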
Security
- Never expose credentials — Use Model configuration
- Sanitize user input — Prevent prompt injection
- Validate responses — Don't trust AI output blindly
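A lightweight sanitization sketch for the injection point; the regex and variable name are illustrative, not a complete defense:
// Strip handlebar-like sequences so user content cannot inject new placeholders
var safeContent = String(message.content || '').replace(/{{|}}/g, '');
// Reference {{safeContent}} in the prompt instead of raw message.content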
Related Topics
- Settings - Models — Configure AI models
- Prompt Scripts — Built-in prompt functions
- Consumer Evaluator — PromptEvaluator implementation