Scripts Overview

Scripts are JavaScript functions that are automatically injected into the V8 execution context when workflows run. They provide powerful capabilities for interacting with external services, processing data, and debugging.

How Scripts Work

When a workflow executes, the RocketWave Pulse Consumer:

  1. Creates an isolated V8 context using isolated-vm
  2. Injects all available script functions into the global scope
  3. Runs each workflow entity through a unified pipeline, executing preScript and postScript (when set) along with conditions, arguments, and optional model calls — see Workflow Entities Overview
  4. Cleans up the context after execution

This isolation ensures that:

  • User code cannot access the Node.js process or file system
  • Each execution is independent and sandboxed
  • External calls are safely proxied through the host environment

Available Functions Reference

Scripts are organized by category. Each function is injected into the V8 execution context and available as a global.

Debugging

| Function | Signature | Returns | Description |
| --- | --- | --- | --- |
| print | print(...args) | void | Output debug messages to application logs |

Condition Evaluation

| Function | Signature | Returns | Description |
| --- | --- | --- | --- |
| evaluateCondition | evaluateCondition(tree, data) | boolean | Evaluate condition tree with AND/OR logic |
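A sketch of how nested AND/OR conditions might be evaluated. The node shape (operator, conditions, field, op, value) is an assumption for illustration, and the evaluateCondition below is a minimal stand-in for the injected global, not the real implementation:

```javascript
// Hypothetical condition tree: status must be 'active' AND
// (score > 90 OR tier is 'gold').
const tree = {
  operator: 'AND',
  conditions: [
    { field: 'status', op: 'eq', value: 'active' },
    {
      operator: 'OR',
      conditions: [
        { field: 'score', op: 'gt', value: 90 },
        { field: 'tier', op: 'eq', value: 'gold' },
      ],
    },
  ],
};

// Minimal stand-in for the injected evaluateCondition global,
// so this sketch runs outside the sandbox.
function evaluateCondition(node, data) {
  if (node.operator === 'AND') return node.conditions.every(c => evaluateCondition(c, data));
  if (node.operator === 'OR') return node.conditions.some(c => evaluateCondition(c, data));
  if (node.op === 'eq') return data[node.field] === node.value;
  if (node.op === 'gt') return data[node.field] > node.value;
  return false;
}

const ok = evaluateCondition(tree, { status: 'active', score: 95, tier: 'silver' });
```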

AI/LLM Services

| Function | Signature | Returns | Description |
| --- | --- | --- | --- |
| promptCallToken | await promptCallToken(token, promptText, modelUrl) | string | Call AI with Bearer token (or Bedrock) |
| promptCallKeys | await promptCallKeys(clientKey, secretKey, promptText, modelUrl) | string | Call AI with key/secret auth |
| latestPromptResponse | await latestPromptResponse() | string \| undefined | Get the most recent AI response |
| latestPromptResponseJson | await latestPromptResponseJson() | object \| null | Get the most recent AI response parsed as JSON |
| getPromptResponse | await getPromptResponse(index) | string \| undefined | Get a specific AI response by 1-based index |
| createEmbedding | await createEmbedding(text) | number[] | Generate embedding using AWS Bedrock Titan |
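A common pattern is to request JSON from the model and read it back with latestPromptResponseJson. The stubs below stand in for the injected globals so the sketch runs outside the sandbox; the canned reply and token placeholder are assumptions for illustration:

```javascript
// Stand-ins for the injected globals. In a real workflow these are
// provided by the consumer and make actual HTTP calls to the model.
const responses = [];
async function promptCallToken(token, promptText, modelUrl) {
  const reply = '{"sentiment": "positive"}'; // canned model reply for this sketch
  responses.push(reply);
  return reply;
}
async function latestPromptResponseJson() {
  try { return JSON.parse(responses[responses.length - 1]); }
  catch { return null; }
}

// Typical pattern: ask the model for JSON, then read it back parsed.
async function run() {
  await promptCallToken(
    'sk-...',  // placeholder — normally the OPENAI_API_KEY global
    'Classify the sentiment of this review, answering as JSON: {"sentiment": ...}',
    'https://api.openai.com/v1/chat/completions'
  );
  const parsed = await latestPromptResponseJson();
  return parsed.sentiment;
}
```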

Vector Database (Pinecone)

| Function | Signature | Returns | Description |
| --- | --- | --- | --- |
| pineconeUpsert | await pineconeUpsert(vectors, namespace?) | {upsertedCount} | Insert/update vectors |
| pineconeQuery | await pineconeQuery(vector, topK?, namespace?, filter?) | {matches} | Query similar vectors |
| pineconeFetch | await pineconeFetch(ids, namespace?) | {vectors} | Fetch vectors by ID |
| pineconeDelete | await pineconeDelete(ids, namespace?) | {} | Delete vectors by ID |
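A sketch of the upsert-then-query pattern. The in-memory stand-ins below mimic the shapes in the table above (vectors with id/values/metadata, results as {matches}); the real functions proxy to the configured Pinecone index, and the similarity metric here is a simplification:

```javascript
// In-memory stand-ins for the injected Pinecone globals.
const store = new Map();

async function pineconeUpsert(vectors, namespace = '') {
  for (const v of vectors) store.set(`${namespace}/${v.id}`, v);
  return { upsertedCount: vectors.length };
}

async function pineconeQuery(vector, topK = 3, namespace = '', filter) {
  // Rank stored vectors by dot product (real Pinecone uses the
  // index's configured metric, e.g. cosine).
  const score = v => v.values.reduce((s, x, i) => s + x * vector[i], 0);
  const matches = [...store.entries()]
    .filter(([key]) => key.startsWith(`${namespace}/`))
    .map(([, v]) => ({ id: v.id, score: score(v) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK);
  return { matches };
}

// Typical pattern: upsert embeddings, then query for nearest neighbours.
async function run() {
  await pineconeUpsert([
    { id: 'doc-1', values: [1, 0], metadata: { title: 'intro' } },
    { id: 'doc-2', values: [0, 1], metadata: { title: 'other' } },
  ], 'articles');
  const { matches } = await pineconeQuery([0.9, 0.1], 1, 'articles');
  return matches[0].id;
}
```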

Short-Term Memory (S3)

| Function | Signature | Returns | Description |
| --- | --- | --- | --- |
| stmStore | await stmStore(metadata[], value, filename) | {success, key} | Store value to S3 |
| stmRetrieve | await stmRetrieve(metadata[], filename) | any \| null | Retrieve value from S3 |
| stmRetrieveAll | await stmRetrieveAll(metadata[]) | array | Retrieve all values under prefix |
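A sketch of the store/retrieve round trip. The in-memory map stands in for S3, and the key layout (metadata parts joined with the filename) is an assumption for illustration:

```javascript
// In-memory stand-in for the injected S3 short-term-memory globals.
const bucket = new Map();
const keyOf = (metadata, filename) => [...metadata, filename].join('/');

async function stmStore(metadata, value, filename) {
  const key = keyOf(metadata, filename);
  bucket.set(key, JSON.stringify(value)); // real version writes to S3_STM_BUCKET
  return { success: true, key };
}

async function stmRetrieve(metadata, filename) {
  const raw = bucket.get(keyOf(metadata, filename));
  return raw === undefined ? null : JSON.parse(raw);
}

// Typical pattern: persist small state between workflow runs.
async function run() {
  await stmStore(['org-abc', 'session-1'], { step: 3 }, 'progress.json');
  const state = await stmRetrieve(['org-abc', 'session-1'], 'progress.json');
  return state.step;
}
```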

Social Media

| Function | Signature | Returns | Description |
| --- | --- | --- | --- |
| postToMastodon | await postToMastodon(url, token, status) | object | Post status to Mastodon |
| postLatestPromptToMastodon | await postLatestPromptToMastodon() | object | Post latest AI response to Mastodon |

Email (SendGrid)

| Function | Signature | Returns | Description |
| --- | --- | --- | --- |
| sendEmailViaSendgrid | await sendEmailViaSendgrid(to, subject, content, contentType?) | {status, body} | Send email via SendGrid API |

Templating (Jinja2/Nunjucks)

| Function | Signature | Returns | Description |
| --- | --- | --- | --- |
| renderTemplate | await renderTemplate(template, data) | string | Render a Jinja2/Nunjucks template with data |
| renderTemplateFromContext | await renderTemplateFromContext(template) | string | Render using all available context variables |
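A sketch of the templating call. The stand-in below handles only simple {{ name }} substitution, whereas the real Nunjucks engine also supports filters, loops, and conditionals:

```javascript
// Minimal stand-in for the injected renderTemplate global:
// supports only {{ name }} placeholders, for illustration.
async function renderTemplate(template, data) {
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, (_, name) =>
    data[name] !== undefined ? String(data[name]) : '');
}

async function run() {
  return renderTemplate(
    'Order {{ orderId }} for {{ customer }}',
    { orderId: 42, customer: 'Ada' }
  );
}
```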

Timing

| Function | Signature | Returns | Description |
| --- | --- | --- | --- |
| sleep | await sleep(ms) | void | Async delay (max 30s) |
| delay | await delay(ms) | void | Alias for sleep |
| wait | await wait(ms) | void | Alias for sleep |

Messaging (PubSub)

| Function | Signature | Returns | Description |
| --- | --- | --- | --- |
| pubsubPublish | await pubsubPublish(channel, data) | void | Publish data to a Redis PubSub channel |

Workflow Orchestration

| Function | Signature | Returns | Description |
| --- | --- | --- | --- |
| triggerWorkflow | await triggerWorkflow(workflowId) | {success, workflowId, sequenceNumber} | Trigger a sub-workflow by dispatching a Kinesis message with the full current execution state |
| getWorkflowByName | await getWorkflowByName(name) | {id, name} | Look up a workflow by name within the current org/env |
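A sketch of the look-up-then-trigger pattern. The stand-ins below mimic the return shapes in the table; the workflow name and id are made up for illustration, and the real triggerWorkflow publishes to Kinesis rather than returning locally:

```javascript
// Stand-ins for the injected orchestration globals; the real versions
// resolve workflows in the current org/env and dispatch to Kinesis.
const workflows = [{ id: 'wf-123', name: 'send-welcome-email' }];

async function getWorkflowByName(name) {
  const wf = workflows.find(w => w.name === name);
  if (!wf) throw new Error(`workflow not found: ${name}`);
  return { id: wf.id, name: wf.name };
}

let seq = 0;
async function triggerWorkflow(workflowId) {
  // Real implementation dispatches a Kinesis message carrying the
  // full current execution state to the sub-workflow.
  return { success: true, workflowId, sequenceNumber: String(++seq) };
}

// Typical pattern: resolve a sub-workflow by name, then trigger it.
async function run() {
  const wf = await getWorkflowByName('send-welcome-email');
  return triggerWorkflow(wf.id);
}
```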

Sports Data (SportRadar)

| Function | Signature | Returns | Description |
| --- | --- | --- | --- |
| nflGetTeams | await nflGetTeams() | Team[] | Get all 32 NFL teams |
| nflGetTeamProfile | await nflGetTeamProfile(teamId) | object | Get team profile with roster |
| nflGetTeamRoster | await nflGetTeamRoster(teamId) | Player[] | Get team roster (simplified) |
| nflGetPlayerProfile | await nflGetPlayerProfile(playerId) | object | Get player profile with stats |

Assistant Research (Internal)

| Function | Signature | Returns | Description |
| --- | --- | --- | --- |
| executeResearchPlan | await executeResearchPlan() | object | Execute AI assistant research plan |
| mcpCall | await mcpCall(method, path, params) | object | Call admin MCP API |
| vectorSearch | await vectorSearch(query, limit?) | array | Search OpenAI vector store |

Usage Example

Here's a complete example showing multiple scripts working together:

// Debug the incoming message — fields are top-level globals
print('Processing event:', type);
print('Organization:', organizationId);

// Generate content using AI
const aiResponse = await promptCallToken(
  OPENAI_API_KEY,
  `Summarize this event: ${JSON.stringify(payload)}`,
  'https://api.openai.com/v1/chat/completions'
);

print('AI generated:', aiResponse);

// Post to Mastodon
await postLatestPromptToMastodon();

print('Posted to Mastodon successfully!');

Environment Variables

Many scripts use environment variables for configuration. These are set in the Admin Console at the organization level:

| Variable | Used By | Description |
| --- | --- | --- |
| MASTODON_URL | Mastodon scripts | Your Mastodon instance URL |
| MASTODON_ACCESS_TOKEN | Mastodon scripts | OAuth access token |
| OPENAI_API_KEY | Prompt scripts | OpenAI API key |
| AWS_REGION | Bedrock, STM | AWS region (defaults to us-east-2) |
| EMBEDDING_MODEL_ID | createEmbedding | Bedrock embedding model ID |
| S3_STM_BUCKET | STM scripts | S3 bucket for short-term memory |
| PINECONE_API_KEY | Pinecone scripts | Pinecone API key |
| PINECONE_INDEX_HOST | Pinecone scripts | Pinecone index host URL |
| SPORTRADAR_API_KEY | SportRadar scripts | SportRadar API key |
| SPORTRADAR_API_TYPE | SportRadar scripts | API type: production or trial |
| SENDGRID_API_KEY | SendGrid scripts | SendGrid API key |
| SENDGRID_FROM_EMAIL | SendGrid scripts | Verified sender email address |

Environment variables are securely injected into the V8 context as global variables.

Script Runtime

For detailed information about how scripts execute, including capabilities, limitations, and best practices, see Script Runtime Environment.

Error Handling

All async script functions may throw errors, so wrap calls in try/catch:

try {
  await postToMastodon(MASTODON_URL, MASTODON_ACCESS_TOKEN, status);
  print('Posted successfully');
} catch (error) {
  print('Failed to post:', error.message);
}

Execution Context

When a workflow executes, all message fields are spread directly onto the V8 global scope. There is no message wrapper object — you access fields by name.

| Global | Type | Description |
| --- | --- | --- |
| organizationId | String | The organization UUID from the incoming message |
| environmentId | String | The environment UUID from the incoming message |
| type | String | The message type (e.g., "user.signup", "workflow_trigger") |
| payload | Object | The message payload data |
| body | Object | The message body (if present) |
| workflow | Object | The current workflow configuration |
| entity | Object | The workflow entity definition |
| (other message fields) | any | All fields from the incoming message are available as top-level globals |
| Environment vars | String | Any org-level environment variables (e.g., OPENAI_API_KEY) |

For example, if the incoming message contains { "organizationId": "abc", "type": "order.created", "payload": { "orderId": 42 } }, then organizationId, type, and payload are all directly accessible as globals in your scripts.

Script variables and prompt responses also accumulate on the global scope as the workflow progresses. The message is injected once at the start of each workflow, and all subsequent entity evaluations share the same V8 context.