Scripts are JavaScript functions that are automatically injected into the V8 execution context when workflows run. They provide powerful capabilities for interacting with external services, processing data, and debugging.
## How Scripts Work
When a workflow executes, the RocketWave Pulse Consumer:

- Creates an isolated V8 context using `isolated-vm`
- Injects all available script functions into the global scope
- Runs each workflow entity through a unified pipeline, executing `preScript` and `postScript` (when set) along with conditions, arguments, and optional model calls (see Workflow Entities Overview)
- Cleans up the context after execution
This isolation ensures that:
- User code cannot access the Node.js process or file system
- Each execution is independent and sandboxed
- External calls are safely proxied through the host environment
## Available Functions Reference
Scripts are organized by category. Each function is injected into the V8 execution context and available as a global.
### Debugging

| Function | Signature | Returns | Description |
|---|---|---|---|
| print | print(...args) | void | Output debug messages to application logs |
### Condition Evaluation

| Function | Signature | Returns | Description |
|---|---|---|---|
| evaluateCondition | evaluateCondition(tree, data) | boolean | Evaluate condition tree with AND/OR logic |
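The exact condition-tree shape is not specified here, so the sketch below assumes a simple structure of AND/OR nodes over field/value leaves, and stubs `evaluateCondition` so it runs outside the sandbox:

```javascript
// Stub standing in for the sandbox-injected evaluateCondition.
// The tree shape (operator/children/field/value) is an assumption for illustration.
function evaluateCondition(tree, data) {
  if (tree.operator === 'AND') return tree.children.every((c) => evaluateCondition(c, data));
  if (tree.operator === 'OR') return tree.children.some((c) => evaluateCondition(c, data));
  return data[tree.field] === tree.value; // leaf: simple equality check
}

// Hypothetical tree: type is 'order.created' AND status is 'paid' OR 'pending'
const tree = {
  operator: 'AND',
  children: [
    { field: 'type', value: 'order.created' },
    {
      operator: 'OR',
      children: [
        { field: 'status', value: 'paid' },
        { field: 'status', value: 'pending' },
      ],
    },
  ],
};

const matches = evaluateCondition(tree, { type: 'order.created', status: 'paid' });
```

Nested AND/OR nodes let a single tree express arbitrarily grouped boolean logic without flattening it into one long expression.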
### AI/LLM Services

| Function | Signature | Returns | Description |
|---|---|---|---|
| promptCallToken | await promptCallToken(token, promptText, modelUrl) | string | Call AI with Bearer token (or Bedrock) |
| promptCallKeys | await promptCallKeys(clientKey, secretKey, promptText, modelUrl) | string | Call AI with key/secret auth |
| latestPromptResponse | await latestPromptResponse() | string or undefined | Get the most recent AI response |
| latestPromptResponseJson | await latestPromptResponseJson() | object or null | Get the most recent AI response parsed as JSON |
| getPromptResponse | await getPromptResponse(index) | string or undefined | Get a specific AI response by 1-based index |
| createEmbedding | await createEmbedding(text) | number[] | Generate embedding using AWS Bedrock Titan |
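A common pattern is to call the model and then read the result back as JSON. The sketch below stubs the injected helpers with a canned response so it runs outside Pulse; the prompt text and response shape are illustrative assumptions:

```javascript
// Stubs standing in for the sandbox-injected AI helpers.
const responses = [];
async function promptCallToken(token, promptText, modelUrl) {
  const reply = '{"sentiment": "positive"}'; // canned reply for illustration
  responses.push(reply);
  return reply;
}
async function latestPromptResponseJson() {
  const last = responses[responses.length - 1];
  try { return JSON.parse(last); } catch (e) { return null; }
}

async function classify() {
  await promptCallToken(
    'OPENAI_API_KEY', // inside the sandbox, the OPENAI_API_KEY global holds the real key
    'Classify the sentiment of this review. Reply as JSON: {"sentiment": ...}',
    'https://api.openai.com/v1/chat/completions'
  );
  const parsed = await latestPromptResponseJson(); // null if the model returned non-JSON
  return parsed ? parsed.sentiment : 'unknown';
}
```

Using `latestPromptResponseJson` rather than parsing the raw string yourself gives you the null-on-bad-JSON fallback for free.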
### Vector Database (Pinecone)

| Function | Signature | Returns | Description |
|---|---|---|---|
| pineconeUpsert | await pineconeUpsert(vectors, namespace?) | {upsertedCount} | Insert/update vectors |
| pineconeQuery | await pineconeQuery(vector, topK?, namespace?, filter?) | {matches} | Query similar vectors |
| pineconeFetch | await pineconeFetch(ids, namespace?) | {vectors} | Fetch vectors by ID |
| pineconeDelete | await pineconeDelete(ids, namespace?) | {} | Delete vectors by ID |
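Embedding and vector functions are typically chained: embed text, upsert it, then query with another embedding. The sketch below uses in-memory stubs (real calls hit Bedrock and Pinecone); the namespace and metadata are hypothetical:

```javascript
// In-memory stubs standing in for the injected Bedrock/Pinecone helpers.
const index = new Map();
async function createEmbedding(text) {
  // Toy 3-dimensional "embedding"; Titan returns a much larger vector.
  return [text.length % 7, text.length % 5, text.length % 3];
}
async function pineconeUpsert(vectors, namespace) {
  vectors.forEach((v) => index.set(v.id, v));
  return { upsertedCount: vectors.length };
}
async function pineconeQuery(vector, topK, namespace, filter) {
  return { matches: [...index.values()].slice(0, topK || 3).map((v) => ({ id: v.id, score: 1 })) };
}

async function indexAndSearch() {
  const embedding = await createEmbedding('refund policy for damaged goods');
  const { upsertedCount } = await pineconeUpsert(
    [{ id: 'doc-1', values: embedding, metadata: { source: 'faq' } }],
    'support' // hypothetical namespace
  );
  const { matches } = await pineconeQuery(embedding, 3, 'support');
  return { upsertedCount, topId: matches[0] && matches[0].id };
}
```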
### Short-Term Memory (S3)

| Function | Signature | Returns | Description |
|---|---|---|---|
| stmStore | await stmStore(metadata[], value, filename) | {success, key} | Store value to S3 |
| stmRetrieve | await stmRetrieve(metadata[], filename) | any or null | Retrieve value from S3 |
| stmRetrieveAll | await stmRetrieveAll(metadata[]) | array | Retrieve all values under prefix |
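A round-trip store/retrieve looks like the sketch below. It assumes the `metadata[]` segments plus the filename form the S3 key (the table implies a prefix scheme but doesn't specify it), and uses an in-memory stub in place of S3:

```javascript
// In-memory stub of the S3-backed STM helpers.
// Assumption: metadata[] segments joined with the filename form the object key.
const bucket = new Map();
const keyFor = (metadata, filename) => metadata.concat(filename).join('/');
async function stmStore(metadata, value, filename) {
  const key = keyFor(metadata, filename);
  bucket.set(key, value);
  return { success: true, key };
}
async function stmRetrieve(metadata, filename) {
  const key = keyFor(metadata, filename);
  return bucket.has(key) ? bucket.get(key) : null;
}

async function rememberLastOrder() {
  const scope = ['org-abc', 'env-1']; // e.g. [organizationId, environmentId] in the sandbox
  await stmStore(scope, { orderId: 42, total: 99.5 }, 'last-order.json');
  const cached = await stmRetrieve(scope, 'last-order.json'); // null when nothing is stored
  return cached;
}
```

Scoping keys by organization and environment keeps one workflow's memory from colliding with another's.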
### Social Media (Mastodon)

| Function | Signature | Returns | Description |
|---|---|---|---|
| postToMastodon | await postToMastodon(url, token, status) | object | Post status to Mastodon |
| postLatestPromptToMastodon | await postLatestPromptToMastodon() | object | Post latest AI response to Mastodon |
### Email (SendGrid)

| Function | Signature | Returns | Description |
|---|---|---|---|
| sendEmailViaSendgrid | await sendEmailViaSendgrid(to, subject, content, contentType?) | {status, body} | Send email via SendGrid API |
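A minimal send-and-check sketch, with the injected helper stubbed. The recipient, HTML content, and the assumption that SendGrid answers 202 on success are illustrative:

```javascript
// Stub standing in for the injected SendGrid helper; the real call posts to SendGrid.
async function sendEmailViaSendgrid(to, subject, content, contentType) {
  if (!to.includes('@')) return { status: 400, body: 'invalid recipient' };
  return { status: 202, body: '' }; // assumption: 202 Accepted on success
}

async function notify(summary) {
  const { status, body } = await sendEmailViaSendgrid(
    'ops@example.com',    // hypothetical recipient
    'Workflow summary',
    '<p>' + summary + '</p>',
    'text/html'           // optional contentType parameter
  );
  if (status >= 400) print('SendGrid rejected the email:', body);
  return status;
}

// Stub for the injected print global.
function print() { console.log.apply(console, arguments); }
```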
### Templating (Jinja2/Nunjucks)

| Function | Signature | Returns | Description |
|---|---|---|---|
| renderTemplate | await renderTemplate(template, data) | string | Render a Jinja2/Nunjucks template with data |
| renderTemplateFromContext | await renderTemplateFromContext(template) | string | Render using all available context variables |
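Templates use the familiar `{{ variable }}` syntax. The stub below implements only plain variable substitution (the real renderer supports full Nunjucks features such as filters and loops):

```javascript
// Minimal stub of the injected renderer: {{ var }} substitution only.
async function renderTemplate(template, data) {
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, function (_, name) {
    return name in data ? String(data[name]) : '';
  });
}

async function buildGreeting() {
  // renderTemplateFromContext(template) would pull these values from the V8 globals instead.
  return renderTemplate('Hello {{ name }}, your order {{ orderId }} shipped.', {
    name: 'Ada',
    orderId: 42,
  });
}
```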
### Timing

| Function | Signature | Returns | Description |
|---|---|---|---|
| sleep | await sleep(ms) | void | Async delay (max 30s) |
| delay | await delay(ms) | void | Alias for sleep |
| wait | await wait(ms) | void | Alias for sleep |
### Messaging (PubSub)

| Function | Signature | Returns | Description |
|---|---|---|---|
| pubsubPublish | await pubsubPublish(channel, data) | void | Publish data to a Redis PubSub channel |
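Publishing is fire-and-forget from the script's point of view. A sketch with the helper stubbed; the channel name and payload shape are hypothetical, and how the host serializes the data is an assumption:

```javascript
// Stub of the injected Redis PubSub publisher; the real call is proxied through the host.
const published = [];
async function pubsubPublish(channel, data) {
  published.push({ channel, data });
}

async function announce() {
  // Hypothetical channel and payload.
  await pubsubPublish('workflow-events', { type: 'order.created', orderId: 42 });
  return published.length;
}
```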
### Workflow Orchestration

| Function | Signature | Returns | Description |
|---|---|---|---|
| triggerWorkflow | await triggerWorkflow(workflowId) | {success, workflowId, sequenceNumber} | Trigger a sub-workflow by dispatching a Kinesis message with the full current execution state |
| getWorkflowByName | await getWorkflowByName(name) | {id, name} | Look up a workflow by name within the current org/env |
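The two functions compose naturally: resolve a workflow by name, then trigger it by id. Stubs and the workflow name below are hypothetical; the sequence number is an illustrative placeholder:

```javascript
// Stubs for the injected orchestration helpers (real calls hit the workflow store and Kinesis).
async function getWorkflowByName(name) {
  const known = { 'daily-digest': { id: 'wf-123', name: 'daily-digest' } };
  return known[name];
}
async function triggerWorkflow(workflowId) {
  return { success: true, workflowId: workflowId, sequenceNumber: '49590338271' };
}

async function fanOut() {
  const wf = await getWorkflowByName('daily-digest'); // hypothetical workflow name
  const result = await triggerWorkflow(wf.id);        // sub-workflow receives the current state
  return result.success && result.workflowId === 'wf-123';
}
```

Because the trigger carries the full current execution state, the sub-workflow sees the same message fields and accumulated variables the parent had at dispatch time.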
### Sports Data (SportRadar)

| Function | Signature | Returns | Description |
|---|---|---|---|
| nflGetTeams | await nflGetTeams() | Team[] | Get all 32 NFL teams |
| nflGetTeamProfile | await nflGetTeamProfile(teamId) | object | Get team profile with roster |
| nflGetTeamRoster | await nflGetTeamRoster(teamId) | Player[] | Get team roster (simplified) |
| nflGetPlayerProfile | await nflGetPlayerProfile(playerId) | object | Get player profile with stats |
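These chain from teams down to players. The stubs below use canned data, and the team/player field names (`id`, `name`, `position`) are assumptions about the simplified shapes:

```javascript
// Stubs for the injected SportRadar helpers with canned data (assumed field names).
async function nflGetTeams() {
  return [{ id: 'team-ne', name: 'Patriots', market: 'New England' }];
}
async function nflGetTeamRoster(teamId) {
  return teamId === 'team-ne' ? [{ id: 'p-1', name: 'QB One', position: 'QB' }] : [];
}

async function listQuarterbacks() {
  const teams = await nflGetTeams();
  const roster = await nflGetTeamRoster(teams[0].id);
  return roster.filter((p) => p.position === 'QB').map((p) => p.name);
}
```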
### Assistant Research (Internal)

| Function | Signature | Returns | Description |
|---|---|---|---|
| executeResearchPlan | await executeResearchPlan() | object | Execute AI assistant research plan |
| mcpCall | await mcpCall(method, path, params) | object | Call admin MCP API |
| vectorSearch | await vectorSearch(query, limit?) | array | Search OpenAI vector store |
## Usage Example

Here's a complete example showing multiple scripts working together:

```javascript
// Message fields (type, organizationId, payload) and org env vars (OPENAI_API_KEY)
// are available as globals in the V8 context.
print('Processing event:', type);
print('Organization:', organizationId);

// Ask the model to summarize the incoming payload.
const aiResponse = await promptCallToken(
  OPENAI_API_KEY,
  `Summarize this event: ${JSON.stringify(payload)}`,
  'https://api.openai.com/v1/chat/completions'
);
print('AI generated:', aiResponse);

// Post the latest prompt response using MASTODON_URL / MASTODON_ACCESS_TOKEN.
await postLatestPromptToMastodon();
print('Posted to Mastodon successfully!');
```
## Environment Variables

Many scripts use environment variables for configuration. These are set in the Admin Console at the organization level:

| Variable | Used By | Description |
|---|---|---|
| MASTODON_URL | Mastodon scripts | Your Mastodon instance URL |
| MASTODON_ACCESS_TOKEN | Mastodon scripts | OAuth access token |
| OPENAI_API_KEY | Prompt scripts | OpenAI API key |
| AWS_REGION | Bedrock, STM | AWS region (defaults to us-east-2) |
| EMBEDDING_MODEL_ID | createEmbedding | Bedrock embedding model ID |
| S3_STM_BUCKET | STM scripts | S3 bucket for short-term memory |
| PINECONE_API_KEY | Pinecone scripts | Pinecone API key |
| PINECONE_INDEX_HOST | Pinecone scripts | Pinecone index host URL |
| SPORTRADAR_API_KEY | SportRadar scripts | SportRadar API key |
| SPORTRADAR_API_TYPE | SportRadar scripts | API type: production or trial |
| SENDGRID_API_KEY | SendGrid scripts | SendGrid API key |
| SENDGRID_FROM_EMAIL | SendGrid scripts | Verified sender email address |
Environment variables are securely injected into the V8 context as global variables.
## Script Runtime
For detailed information about how scripts execute, including capabilities, limitations, and best practices, see Script Runtime Environment.
## Error Handling

All async script functions may throw errors. It's recommended to use try/catch blocks:

```javascript
try {
  await postToMastodon(MASTODON_URL, MASTODON_ACCESS_TOKEN, status);
  print('Posted successfully');
} catch (error) {
  print('Failed to post:', error.message);
}
```
## Execution Context

When a workflow executes, all message fields are spread directly onto the V8 global scope. There is no `message` wrapper object; you access fields by name.

| Global | Type | Description |
|---|---|---|
| organizationId | String | The organization UUID from the incoming message |
| environmentId | String | The environment UUID from the incoming message |
| type | String | The message type (e.g., "user.signup", "workflow_trigger") |
| payload | Object | The message payload data |
| body | Object | The message body (if present) |
| workflow | Object | The current workflow configuration |
| entity | Object | The workflow entity definition |
| (other message fields) | any | All fields from the incoming message are available as top-level globals |
| (environment vars) | String | Any org-level environment variables (e.g., OPENAI_API_KEY) |
For example, if the incoming message contains `{ "organizationId": "abc", "type": "order.created", "payload": { "orderId": 42 } }`, then `organizationId`, `type`, and `payload` are all directly accessible as globals in your scripts.
Script variables and prompt responses also accumulate on the global scope as the workflow progresses. The message is injected once at the start of each workflow, and all subsequent entity evaluations share the same V8 context.
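In practice this means a value set in an early entity's script is visible to a later entity's script in the same run. A minimal sketch (variable names are hypothetical; both snippets are shown in one block here so it runs standalone, but in Pulse they would live in two different entities):

```javascript
// Entity 1 preScript: derive a value and leave it on the shared global scope.
var normalizedType = 'order.created'.toUpperCase();

// Entity 2 postScript, later in the same workflow run: the earlier value is still visible
// because all entity evaluations share the same V8 context.
const label = 'handled:' + normalizedType;
```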