# Scripts Overview
Scripts are JavaScript functions that are automatically injected into the V8 execution context when workflows run. They provide powerful capabilities for interacting with external services, processing data, and debugging.
## How Scripts Work

When a workflow executes, the RocketWave Pulse Consumer:

- Creates an isolated V8 context using `isolated-vm`
- Injects all available script functions into the global scope
- Executes your workflow code with access to these functions
- Cleans up the context after execution
This isolation ensures that:
- User code cannot access the Node.js process or file system
- Each execution is independent and sandboxed
- External calls are safely proxied through the host environment
## Available Functions Reference
### Debugging

| Function | Signature | Returns | Description |
|---|---|---|---|
| `print` | `print(...args)` | `void` | Output debug messages to application logs |
### Condition Evaluation

| Function | Signature | Returns | Description |
|---|---|---|---|
| `evaluateCondition` | `evaluateCondition(tree, data)` | `boolean` | Evaluate condition tree with AND/OR logic |
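The exact condition-tree schema is defined by the workflow builder and is not reproduced here; the sketch below assumes a nested `{ op, children }` / `{ field, equals }` shape to show the general idea. The local `evaluateCondition` is an illustrative stand-in for the injected function, not its actual implementation:

```javascript
// Illustrative stand-in for the injected evaluateCondition, assuming a
// nested { op, children } / { field, equals } tree shape (an assumption;
// the real schema may differ).
function evaluateCondition(tree, data) {
  if (tree.op === 'AND') return tree.children.every(c => evaluateCondition(c, data));
  if (tree.op === 'OR') return tree.children.some(c => evaluateCondition(c, data));
  return data[tree.field] === tree.equals; // leaf node: simple equality check
}

const conditionTree = {
  op: 'AND',
  children: [
    { field: 'type', equals: 'order.created' },
    {
      op: 'OR',
      children: [
        { field: 'region', equals: 'us' },
        { field: 'region', equals: 'eu' },
      ],
    },
  ],
};

evaluateCondition(conditionTree, { type: 'order.created', region: 'eu' }); // true
```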
### AI/LLM Services

| Function | Signature | Returns | Description |
|---|---|---|---|
| `promptCallToken` | `await promptCallToken(token, promptText, modelUrl)` | `string` | Call AI with Bearer token (or Bedrock) |
| `promptCallKeys` | `await promptCallKeys(clientKey, secretKey, promptText, modelUrl)` | `string` | Call AI with key/secret auth |
| `latestPromptResponse` | `await latestPromptResponse()` | `string \| undefined` | Get the most recent AI response |
| `getPromptResponse` | `await getPromptResponse(index)` | `string \| undefined` | Get a specific AI response by 1-based index |
| `createEmbedding` | `await createEmbedding(text)` | `number[]` | Generate embedding using AWS Bedrock Titan |
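The sketch below models the documented response-history behaviour with local stand-ins: each call's reply is appended to a history, `latestPromptResponse` returns the newest entry, and `getPromptResponse` looks up by 1-based index. The stub internals and the example URL are assumptions, not the real implementation:

```javascript
// Illustrative stand-ins for the injected prompt functions (assumed
// internals; a real call would hit the model at modelUrl).
const responses = [];

async function promptCallToken(token, promptText, modelUrl) {
  const reply = `reply to: ${promptText}`; // placeholder for the model's reply
  responses.push(reply);
  return reply;
}

async function latestPromptResponse() {
  return responses[responses.length - 1]; // newest reply, or undefined if none
}

async function getPromptResponse(index) {
  return responses[index - 1]; // 1-based: getPromptResponse(1) is the first reply
}

// In a workflow script these calls can use top-level await directly;
// here they are wrapped in a function so the snippet is self-contained.
async function demo() {
  await promptCallToken('TOKEN', 'first prompt', 'https://example.invalid/v1');
  await promptCallToken('TOKEN', 'second prompt', 'https://example.invalid/v1');
  return { first: await getPromptResponse(1), latest: await latestPromptResponse() };
}
```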
### Vector Database (Pinecone)

| Function | Signature | Returns | Description |
|---|---|---|---|
| `pineconeUpsert` | `await pineconeUpsert(vectors, namespace?)` | `{upsertedCount}` | Insert/update vectors |
| `pineconeQuery` | `await pineconeQuery(vector, topK?, namespace?, filter?)` | `{matches}` | Query similar vectors |
| `pineconeFetch` | `await pineconeFetch(ids, namespace?)` | `{vectors}` | Fetch vectors by ID |
| `pineconeDelete` | `await pineconeDelete(ids, namespace?)` | `{}` | Delete vectors by ID |
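A typical pattern is to upsert vectors into a namespace and then query for similar ones. The stand-ins below use a tiny in-memory map to mimic the documented request/response shapes; the `{ id, values, metadata }` vector shape is an assumption based on common Pinecone conventions:

```javascript
// Illustrative stand-ins for the injected Pinecone functions, backed by
// an in-memory map rather than a real index (shapes are assumptions).
const index = new Map();

async function pineconeUpsert(vectors, namespace = '') {
  for (const v of vectors) index.set(`${namespace}/${v.id}`, v);
  return { upsertedCount: vectors.length };
}

async function pineconeQuery(vector, topK = 3, namespace = '') {
  // A real query ranks by vector similarity; this stub just returns
  // whatever is stored in the namespace, up to topK entries.
  const matches = [...index.entries()]
    .filter(([key]) => key.startsWith(`${namespace}/`))
    .slice(0, topK)
    .map(([, v]) => ({ id: v.id, score: 1.0, metadata: v.metadata }));
  return { matches };
}

// In a workflow script these calls can use top-level await directly.
async function demo() {
  await pineconeUpsert(
    [{ id: 'doc-1', values: [0.1, 0.2, 0.3], metadata: { source: 'faq' } }],
    'docs'
  );
  const { matches } = await pineconeQuery([0.1, 0.2, 0.3], 1, 'docs');
  return matches[0].id;
}
```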
### Short-Term Memory (S3)

| Function | Signature | Returns | Description |
|---|---|---|---|
| `stmStore` | `await stmStore(metadata[], value, filename)` | `{success, key}` | Store value to S3 |
| `stmRetrieve` | `await stmRetrieve(metadata[], filename)` | `any \| null` | Retrieve value from S3 |
| `stmRetrieveAll` | `await stmRetrieveAll(metadata[])` | `array` | Retrieve all values under prefix |
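The sketch below illustrates the store/retrieve round trip. It assumes the metadata array forms the S3 key prefix and that values are serialized as JSON; the real key scheme and storage format may differ:

```javascript
// Illustrative stand-ins for the injected STM functions, backed by an
// in-memory map instead of S3 (key scheme and serialization are assumptions).
const bucket = new Map();
const keyFor = (metadata, filename) => [...metadata, filename].join('/');

async function stmStore(metadata, value, filename) {
  const key = keyFor(metadata, filename);
  bucket.set(key, JSON.stringify(value));
  return { success: true, key };
}

async function stmRetrieve(metadata, filename) {
  const raw = bucket.get(keyFor(metadata, filename));
  return raw === undefined ? null : JSON.parse(raw); // null when the key is absent
}

// In a workflow script these calls can use top-level await directly.
async function demo() {
  await stmStore(['workflow-42', 'session-7'], { step: 3 }, 'state.json');
  return stmRetrieve(['workflow-42', 'session-7'], 'state.json');
}
```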
### Social Media

| Function | Signature | Returns | Description |
|---|---|---|---|
| `postToMastodon` | `await postToMastodon(url, token, status)` | `object` | Post status to Mastodon |
| `postLatestPromptToMastodon` | `await postLatestPromptToMastodon()` | `object` | Post latest AI response to Mastodon |
### Sports Data (SportRadar)

| Function | Signature | Returns | Description |
|---|---|---|---|
| `nflGetTeams` | `await nflGetTeams()` | `Team[]` | Get all 32 NFL teams |
| `nflGetTeamProfile` | `await nflGetTeamProfile(teamId)` | `object` | Get team profile with roster |
| `nflGetTeamRoster` | `await nflGetTeamRoster(teamId)` | `Player[]` | Get team roster (simplified) |
| `nflGetPlayerProfile` | `await nflGetPlayerProfile(playerId)` | `object` | Get player profile with stats |
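A common pattern is to list teams, pick one, and then fetch its roster. The stand-ins below return canned data; the field names (`id`, `market`, `name`, `position`) are assumptions based on typical SportRadar response shapes, not the actual API contract:

```javascript
// Illustrative stand-ins for the injected SportRadar functions with
// canned responses (field names are assumptions).
async function nflGetTeams() {
  return [{ id: 'team-ne', market: 'New England', name: 'Patriots' }];
}

async function nflGetTeamRoster(teamId) {
  return teamId === 'team-ne'
    ? [{ id: 'p-1', name: 'Example Player', position: 'QB' }]
    : [];
}

// In a workflow script these calls can use top-level await directly:
// look a team up by name, then fetch its roster.
async function demo() {
  const teams = await nflGetTeams();
  const team = teams.find(t => t.name === 'Patriots');
  return nflGetTeamRoster(team.id);
}
```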
## Usage Example

Here's a complete example showing multiple scripts working together:

```javascript
// Debug the incoming message
print('Processing event:', message.type);

// Check if this event matches our conditions
// (conditionTree is defined elsewhere in the workflow)
const shouldProcess = evaluateCondition(conditionTree, message);

if (shouldProcess) {
  // Generate content using AI
  const aiResponse = await promptCallToken(
    OPENAI_API_KEY,
    `Summarize this event: ${JSON.stringify(message)}`,
    'https://api.openai.com/v1/chat/completions'
  );
  print('AI generated:', aiResponse);

  // Post to Mastodon
  await postLatestPromptToMastodon();
  print('Posted to Mastodon successfully!');
}
```
## Environment Variables

Many scripts use environment variables for configuration. These are set in the Admin Console at the organization level:

| Variable | Used By | Description |
|---|---|---|
| `MASTODON_URL` | Mastodon scripts | Your Mastodon instance URL |
| `MASTODON_ACCESS_TOKEN` | Mastodon scripts | OAuth access token |
| `OPENAI_API_KEY` | Prompt scripts | OpenAI API key |
| `AWS_REGION` | Bedrock, STM | AWS region (defaults to `us-east-2`) |
| `EMBEDDING_MODEL_ID` | `createEmbedding` | Bedrock embedding model ID |
| `S3_STM_BUCKET` | STM scripts | S3 bucket for short-term memory |
| `PINECONE_API_KEY` | Pinecone scripts | Pinecone API key |
| `PINECONE_INDEX_HOST` | Pinecone scripts | Pinecone index host URL |
| `SPORTRADAR_API_KEY` | SportRadar scripts | SportRadar API key |
| `SPORTRADAR_API_TYPE` | SportRadar scripts | API type: `production` or `trial` |
Environment variables are securely injected into the V8 context as global variables.
## Script Runtime
For detailed information about how scripts execute, including capabilities, limitations, and best practices, see Script Runtime Environment.
## Error Handling

All async script functions can throw (for example, on network failures or invalid credentials), so wrap calls in try/catch blocks:

```javascript
try {
  await postToMastodon(MASTODON_URL, MASTODON_ACCESS_TOKEN, status);
  print('Posted successfully');
} catch (error) {
  print('Failed to post:', error.message);
}
```
## Execution Context

The following globals are available in your workflow scripts:

| Global | Type | Description |
|---|---|---|
| `message` | Object | The event data being processed |
| `workflow` | Object | The current workflow configuration |
| `entity` | Object | The workflow entity definition |
| (environment vars) | String | Any org-level environment variables, injected by name |