Prompt Entity
The Prompt entity integrates AI/LLM services into workflows. It combines a model configuration, prompt template, and optional preScript / postScript snippets for intelligent content generation.
Purpose
Prompt entities enable:
- AI Content Generation — Generate text using LLM APIs
- Dynamic Prompts — Interpolate message data into prompts
- Model Flexibility — Switch between AI providers via Models
- Response Processing — Transform AI output via postScript (or prepare inputs in preScript)
Configuration
Available Fields
| Field | Description |
|---|---|
| Name | Display name (e.g., "Generate Welcome Message") |
| Description | Optional documentation |
| Model | Reference to a Model (API URL + credentials) |
| Prompt | Template text sent to the LLM |
| Pre-Script | Optional JavaScript before arguments, condition, and prompt/model handling |
| Post-Script | Optional JavaScript after the model call, before context capture |
| Arguments | Variables injected after pre-script, before condition evaluation |
Model Selection
Models are configured in Settings → Models and contain:
- API endpoint URL
- Authentication (Token or Client Key/Secret)
- Model name for display
Example Model Configuration:
Name: GPT-4 Turbo
URL: https://api.openai.com/v1/chat/completions
Auth: Token (sk-proj-xxx)
Prompt Template
The prompt field contains the text sent to the LLM. Use handlebar syntax {{...}} for dynamic values:
You are a helpful assistant for {{COMPANY_NAME}}.
A new user just signed up:
- Name: {{user.name}}
- Email: {{user.email}}
Write a personalized welcome message for this user.
Keep it friendly and under 280 characters for social media.
Handlebar Substitution:
Variables are automatically substituted using {{path.to.variable}} syntax:
| Source | Example | Description |
|---|---|---|
| Message fields | {{user.name}} | Access any property from the incoming message |
| Message nested | {{event.player.name}} | Deep property access |
| Environment variables | {{COMPANY_NAME}} | Variables defined in Settings |
| Argument variables | {{userName}} | Arguments defined on this entity |
| Upstream variables | {{socialContent}} | Variables set by earlier entities |
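The substitution behavior above can be pictured with a small sketch (a simplified illustration, not the platform's actual implementation): dot-path lookup against the script scope, with unresolved variables kept as-is and logged as warnings.

```javascript
// Simplified sketch of {{path.to.variable}} substitution.
// Unresolved variables are left in place and logged, mirroring the documented behavior.
function renderTemplate(template, scope) {
  return template.replace(/\{\{([\w.]+)\}\}/g, (match, path) => {
    const value = path.split('.').reduce(
      (obj, key) => (obj == null ? undefined : obj[key]),
      scope
    );
    if (value === undefined) {
      console.warn('Unresolved handlebar:', match); // logged as a warning
      return match;                                 // kept as-is for debugging
    }
    return String(value);
  });
}
```

For example, `renderTemplate('Hello {{user.name}}', { user: { name: 'Ada' } })` yields `'Hello Ada'`, while an undefined path leaves the `{{...}}` marker untouched.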
How it works (unified pipeline):
- Message is injected; preScript runs if present
- Arguments are injected and evaluated
- Condition is evaluated (if any); ___result___ is set
- Prompt/model: handlebars are applied to the prompt, then executePromptWithModel() runs when a model is configured
- postScript runs if present (e.g. inspect latestPromptResponse())
- Context is captured; the workflow continues to children (no branching modes on Prompt entities)
Unresolved handlebars (undefined variables) are kept as-is and logged as warnings. This helps with debugging.
Script Execution
- preScript — Runs before argument injection and condition evaluation. Use it to set variables the condition or handlebars need.
- postScript — Runs after the model call (when configured). Use it to log or transform latestPromptResponse().
// preScript: values available before condition / handlebars
var userName = user.displayName || user.email.split('@')[0];
var companyName = COMPANY_NAME || 'Acme Corp';
// postScript: after executePromptWithModel()
print('AI said:', latestPromptResponse());
Auto-Execution
When a Model is associated, the Prompt entity automatically calls:
await executePromptWithModel();
This function:
- Reads the prompt field
- Uses the entityModel credentials
- Calls the model's API
- Stores the response for latestPromptResponse()
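The credential step can be imagined roughly as follows. This is a hypothetical sketch: the entityModel field names (token, clientKey, secretKey) match the script-context fields documented in the next section, but the header names used for key/secret auth are assumptions, not the platform's actual wire format.

```javascript
// Sketch of turning entityModel credentials into HTTP request headers.
// The key/secret header names are assumptions for illustration only.
function buildAuthHeaders(entityModel) {
  const headers = { 'Content-Type': 'application/json' };
  if (entityModel.token) {
    headers['Authorization'] = 'Bearer ' + entityModel.token; // token auth
  } else if (entityModel.clientKey && entityModel.secretKey) {
    headers['X-Client-Key'] = entityModel.clientKey;  // assumed header name
    headers['X-Secret-Key'] = entityModel.secretKey;  // assumed header name
  }
  return headers;
}
```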
Available in Script Context
| Variable | Description |
|---|---|
| (top-level fields) | Incoming message properties (type, user, content, …) are available directly in script scope |
| prompt | The prompt template text |
| entityModel | Model configuration object |
| entityModel.modelUrl | API endpoint |
| entityModel.token | Bearer token (if configured) |
| entityModel.clientKey | Client key (if configured) |
| entityModel.secretKey | Secret key (if configured) |
| (arguments) | Each argument as a top-level variable |
Built-in Functions
// Call prompt with model credentials
await executePromptWithModel();
// Get the latest response
const response = latestPromptResponse();
// Get a specific response by index
const firstResponse = getPromptResponse(0);
// Manual prompt call with token auth
const result = await promptCallToken(token, promptText, modelUrl);
// Manual prompt call with key/secret auth
const result = await promptCallKeys(clientKey, secretKey, promptText, modelUrl);
Execution Flow
Prompt entities use UnifiedEntityEvaluator, same as Events and Actions. For a typical Prompt, the pipeline runs through pre-script → arguments → condition (if any) → inject prompt/model → handlebars → executePromptWithModel() → post-script → capture context → continue to children.
┌─────────────────────────────────────────────────────────────┐
│ UnifiedEntityEvaluator (Prompt node) │
├─────────────────────────────────────────────────────────────┤
│ 1. Inject message │
│ 2. Run preScript (if any) │
│ 3. Inject arguments (if any) │
│ 4. Evaluate condition (if any) │
│ 5. Inject prompt/model, handlebars, optional vision prep │
│ 6. executePromptWithModel() (if model configured) │
│ 7. Run postScript (if any) │
│ 8. Capture context → process_children │
└─────────────────────────────────────────────────────────────┘
Examples
Example 1: Simple Welcome Message
Generate a welcome message for new users:
Configuration:
{
"name": "Generate Welcome Message",
"modelId": "gpt4-model-uuid",
"prompt": "Write a friendly welcome message for a new user named {{userName}}. Keep it under 200 characters.",
"arguments": [
{
"argumentName": "userName",
"argumentValue": "{{user.name}}",
"argumentDescription": "User's display name"
}
]
}
Pre-Script:
// Fallback if name is missing
var userName = user.name || 'friend';
Example 2: Content Summarization
Summarize long-form content:
Configuration:
{
"name": "Summarize Article",
"modelId": "claude-model-uuid",
"prompt": "Summarize the following article in 3 bullet points:\n\n{{articleContent}}"
}
Pre-Script:
// Extract content from message, with a safe fallback
var articleContent = content.body || content.text || '';
// Truncate if too long
if (articleContent.length > 10000) {
  articleContent = articleContent.substring(0, 10000) + '...';
}
Example 3: Sentiment-Aware Response
Generate response based on detected sentiment:
Configuration:
{
"name": "Generate Support Response",
"modelId": "gpt4-model-uuid",
"prompt": "A customer sent this message:\n\n{{customerMessage}}\n\nTheir detected sentiment is: {{sentiment}}\n\n{{additionalContext}}\n\nWrite an appropriate support response."
}
Pre-Script:
var customerMessage = content;
var sentiment = metadata.sentiment || 'neutral';
// Add context for the AI (empty for neutral sentiment)
var additionalContext = '';
if (sentiment === 'negative') {
  additionalContext = 'The customer seems frustrated. Be extra empathetic.';
} else if (sentiment === 'positive') {
  additionalContext = 'The customer is happy. Match their enthusiasm.';
}
Example 4: Multi-Model Workflow
Use different models for different tasks:
Entity 1: Fast Classification
{
"name": "Classify Intent",
"modelId": "fast-model-uuid",
"prompt": "Classify this message into one category: support, sales, or general.\n\nMessage: {{content}}\n\nRespond with just the category name."
}
Entity 2: Detailed Response
{
"name": "Generate Detailed Response",
"modelId": "quality-model-uuid",
"prompt": "Write a detailed response for this {{intent}} inquiry:\n\n{{content}}"
}
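For Entity 2's {{intent}} handlebar to resolve, the classification produced by Entity 1 must be exposed as a variable. A minimal preScript on Entity 2 could do this, assuming latestPromptResponse() still returns Entity 1's output at that point:

```javascript
// preScript on Entity 2 (sketch): normalize the upstream classification.
// latestPromptResponse is a platform built-in; stubbed here so the sketch runs standalone.
function latestPromptResponse() { return ' Support '; } // stub: Entity 1's raw output
var intent = (latestPromptResponse() || 'general').trim().toLowerCase();
// intent now resolves the {{intent}} handlebar in Entity 2's prompt
```

Trimming and lowercasing guards against models that add whitespace or vary capitalization, and the 'general' fallback covers an empty response.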
Example 5: Manual API Calls
For advanced use cases, call APIs directly:
// Use environment variables for credentials
const apiKey = CUSTOM_API_KEY;
const modelUrl = CUSTOM_MODEL_URL;
// Build custom prompt
const customPrompt = `
System: You are a JSON generator.
User: Generate product data for: ${productName}
Output: JSON only, no markdown
`;
// Call with token auth
const response = await promptCallToken(apiKey, customPrompt, modelUrl);
// Parse response
try {
var productData = JSON.parse(response);
} catch (e) {
print('Failed to parse response:', response);
var productData = null;
}
Response Handling
Accessing Responses
After executePromptWithModel() completes:
// In the same entity or downstream entities
const aiResponse = latestPromptResponse();
// The response is typically a string
print('AI said:', aiResponse);
// Parse if JSON
try {
const structured = JSON.parse(aiResponse);
} catch (e) {
// Handle plain text response
}
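Models often wrap JSON output in markdown code fences even when asked not to. A defensive parser (a sketch, not a built-in function) can handle both fenced and bare JSON, falling back to null for plain text:

```javascript
// Parse a model response that may be wrapped in triple-backtick fences.
// Returns the parsed object, or null if the response is not valid JSON.
function parseModelJson(response) {
  const fenced = response.match(/`{3}(?:json)?\s*([\s\S]*?)`{3}/);
  const raw = fenced ? fenced[1] : response;
  try {
    return JSON.parse(raw.trim());
  } catch (e) {
    return null; // treat as plain text upstream
  }
}
```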
Response Storage
Responses are stored in the execution context:
{
"____prompt_responses____": [
"Welcome to Acme Corp! We're thrilled to have you..."
],
"____latest_prompt_response____": "Welcome to Acme Corp! We're thrilled..."
}
This data is saved with the processed message in S3.
Best Practices
Prompt Engineering
- Be specific — Clear instructions produce better results
- Provide context — Include relevant message data
- Set constraints — Character limits, format requirements
- Use examples — Show desired output format
Error Handling
Model and script failures are handled by the consumer (may exit the workflow path). In postScript, guard optional work:
try {
const response = latestPromptResponse();
if (!response) {
print('Warning: Empty response from model');
var fallbackContent = 'Default message';
}
} catch (error) {
print('Post-processing failed:', error.message);
}
Cost Optimization
- Choose appropriate models — Faster models for simple tasks
- Truncate long inputs — Limit token usage
- Cache responses — Store in variables for reuse
- Batch when possible — Combine related prompts
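The "cache responses" point can be illustrated with a tiny in-memory memo (a sketch only; durable reuse across messages would go through workflow variables or external storage):

```javascript
// Tiny memo keyed by prompt text, illustrating response reuse within a run.
function makePromptCache() {
  const cache = {};
  return async function cachedPrompt(promptText, callFn) {
    if (!(promptText in cache)) {
      cache[promptText] = await callFn(promptText); // only call the model once
    }
    return cache[promptText];
  };
}
```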
Security
- Never expose credentials — Use Model configuration
- Sanitize user input — Prevent prompt injection
- Validate responses — Don't trust AI output blindly
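As one illustration of the sanitization point, a preScript could filter user text before it is interpolated into a prompt. This is a deliberately simplistic sketch; real prompt-injection defenses need more than phrase filtering.

```javascript
// Naive sanitizer: strip a common injection phrase, remove handlebar braces,
// and cap length before interpolating user input into a prompt. Illustrative only.
function sanitizeForPrompt(input, maxLength) {
  return String(input)
    .replace(/ignore (all )?previous instructions/gi, '[removed]')
    .replace(/[{}]/g, '')               // avoid accidental handlebar syntax
    .substring(0, maxLength || 2000);   // limit token usage
}
```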
Related Topics
- Settings - Universal Model System — Configure AI models
- Prompt Scripts — Built-in prompt functions
- Consumer Evaluator — UnifiedEntityEvaluator implementation