
prompt

Call AI/LLM services from workflow scripts.

The prompt module provides functions for calling AI language model services with different authentication methods. Responses are automatically stored and can be retrieved for use with other integrations.


AWS Bedrock Support

The prompt module provides native support for AWS Bedrock models. When a model URL starts with bedrock://, the system automatically:

  1. Uses IAM role-based authentication (no tokens/keys needed)
  2. Calls the appropriate Bedrock API with the correct request format
  3. Parses responses based on model type (Claude vs Nova)
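
Conceptually, the URL scheme acts as a dispatch switch. The sketch below is illustrative only; invokeBedrockModel and postWithAuth are hypothetical helpers, not part of the module's API:

// Illustrative dispatch on the URL scheme (not the module's actual source)
let response;
if (modelUrl.startsWith('bedrock://')) {
  const modelId = modelUrl.slice('bedrock://'.length);
  response = await invokeBedrockModel(modelId, promptText); // IAM-signed; token/key arguments ignored
} else {
  response = await postWithAuth(modelUrl, token, promptText); // plain HTTPS POST with the supplied credentials
}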

Bedrock URL Format

bedrock://<model-id>

Examples:

bedrock://anthropic.claude-3-sonnet-20240229-v1:0
bedrock://amazon.nova-pro-v1:0
bedrock://amazon.titan-text-express-v1

Model Support

Model Family | Format | Response Extraction
Anthropic Claude | Messages API with anthropic_version | content[0].text
Amazon Nova | Messages API with content array | output.message.content[0].text
Amazon Titan | Standard invoke format | output
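
For orientation, the per-family request bodies follow the public Bedrock model APIs and look roughly like the sketch below. Treat the exact fields (e.g. max_tokens) as assumptions about sensible defaults, not the module's literal payloads:

// Claude: Messages API, requires anthropic_version
const claudeBody = {
  anthropic_version: 'bedrock-2023-05-31',
  max_tokens: 1024, // assumed default, not documented by the module
  messages: [{ role: 'user', content: promptText }]
};
// response text at content[0].text

// Nova: Messages API with a content array
const novaBody = {
  messages: [{ role: 'user', content: [{ text: promptText }] }]
};
// response text at output.message.content[0].text

// Titan: standard invoke format
const titanBody = { inputText: promptText };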

Environment Requirements

For Bedrock models to work, ensure:

  • AWS_REGION environment variable is set (defaults to us-east-2)
  • The ECS task role or IAM credentials have bedrock:InvokeModel permission (a sample policy follows)
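
A minimal IAM policy statement granting that permission might look like the following; in production, scope Resource to the specific model ARNs you use rather than a wildcard:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "bedrock:InvokeModel",
      "Resource": "arn:aws:bedrock:*::foundation-model/*"
    }
  ]
}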

promptCallToken

Call an AI service using Bearer token authentication.

Signature

await promptCallToken(token, promptText, modelUrl)

Description

Makes an HTTP POST request to an AI/LLM service endpoint using Bearer token authentication. The response is automatically stored and can be accessed via latestPromptResponse() or getPromptResponse(index).

For Bedrock models (bedrock:// URLs), the token parameter is ignored and IAM authentication is used automatically.

Parameters

Parameter | Type | Description
token | string | Bearer token for authentication (ignored for Bedrock)
promptText | string | The prompt text to send to the AI model
modelUrl | string | The API endpoint URL or bedrock://<model-id>

Returns

Promise<string | object> — The AI model's response content.

Example

// OpenAI Example
const openAiResponse = await promptCallToken(
  OPENAI_API_KEY,
  'Summarize this touchdown play: ' + message.description,
  'https://api.openai.com/v1/chat/completions'
);

print('AI says:', openAiResponse);

// AWS Bedrock Example (token ignored, uses IAM)
const bedrockResponse = await promptCallToken(
  '', // Not used for Bedrock
  'Summarize this touchdown play: ' + message.description,
  'bedrock://anthropic.claude-3-sonnet-20240229-v1:0'
);

print('Claude says:', bedrockResponse);

Supported API Formats

The function automatically extracts the response content from common AI API formats:

  • OpenAI format: response.choices[0].message.content
  • Anthropic format: response.content[0].text
  • AWS Bedrock Claude: content[0].text
  • AWS Bedrock Nova: output.message.content[0].text
  • Generic format: response.response
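
In effect, the extraction behaves like the following sketch; this is a simplified illustration of the documented lookup order, not the module's actual source:

// Simplified sketch of response-content extraction
function extractContent(response) {
  return response?.choices?.[0]?.message?.content          // OpenAI
      ?? response?.content?.[0]?.text                      // Anthropic / Bedrock Claude
      ?? response?.output?.message?.content?.[0]?.text     // Bedrock Nova
      ?? response?.response                                // Generic
      ?? response;                                         // assumed fallback: raw body
}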

promptCallKeys

Call an AI service using client key/secret authentication.

Signature

await promptCallKeys(clientKey, secretKey, promptText, modelUrl)

Description

Makes an HTTP POST request to an AI/LLM service endpoint using key/secret header authentication. This is useful for services that require separate client and secret keys.

For Bedrock models (bedrock:// URLs), the key parameters are ignored and IAM authentication is used automatically.

Parameters

Parameter | Type | Description
clientKey | string | Client key for authentication (ignored for Bedrock)
secretKey | string | Secret key for authentication (ignored for Bedrock)
promptText | string | The prompt text to send to the AI model
modelUrl | string | The API endpoint URL or bedrock://<model-id>

Returns

Promise<string | object> — The AI model's response content.

Example

const response = await promptCallKeys(
  MY_CLIENT_KEY,
  MY_SECRET_KEY,
  'Generate a social media post about: ' + message.summary,
  'https://api.example.com/v1/completions'
);

print('Generated post:', response);

Request Headers

For non-Bedrock URLs, the function sets the following headers:

  • Content-Type: application/json
  • X-Client-Key: <clientKey>
  • X-Secret-Key: <secretKey>
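
The resulting request is roughly equivalent to the fetch call below, assuming the documented default request body; the endpoint, key names, and promptText are placeholders:

// Approximate equivalent of promptCallKeys for a non-Bedrock URL
const res = await fetch('https://api.example.com/v1/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'X-Client-Key': MY_CLIENT_KEY,
    'X-Secret-Key': MY_SECRET_KEY
  },
  body: JSON.stringify({
    model: 'gpt-4o-mini',
    messages: [{ role: 'user', content: promptText }]
  })
});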

latestPromptResponse

Get the most recent AI response.

Signature

await latestPromptResponse()

Description

Returns the response from the most recent promptCallToken or promptCallKeys call. This is useful for chaining operations, such as posting the AI response to social media.

Parameters

None.

Returns

Promise<string | object | undefined> — The latest response content, or undefined if no prompts have been called.

Example

// Generate content
await promptCallToken(
  OPENAI_API_KEY,
  'Write a tweet about this play: ' + message.description,
  'https://api.openai.com/v1/chat/completions'
);

// Get the response
const tweet = await latestPromptResponse();
print('Generated tweet:', tweet);

// Use it for posting
await postToMastodon(MASTODON_URL, MASTODON_ACCESS_TOKEN, tweet);

getPromptResponse

Get a specific AI response by its index.

Signature

await getPromptResponse(index)

Description

Returns a specific prompt response by its 1-based index. This is useful when multiple prompts are called in a single workflow execution and you need to access a specific response.

Parameters

Parameter | Type | Description
index | number | 1-based index of the prompt response

Returns

Promise<string | object | undefined> — The response at the specified index, or undefined if the index is out of range.

Example

// Make multiple AI calls
const apiUrl = 'https://api.openai.com/v1/chat/completions';
await promptCallToken(OPENAI_API_KEY, 'Summarize: ' + message.play, apiUrl);
await promptCallToken(OPENAI_API_KEY, 'Generate hashtags for: ' + message.play, apiUrl);
await promptCallToken(OPENAI_API_KEY, 'Rate excitement 1-10: ' + message.play, apiUrl);

// Access specific responses
const summary = await getPromptResponse(1); // First response
const hashtags = await getPromptResponse(2); // Second response
const rating = await getPromptResponse(3); // Third response

print('Summary:', summary);
print('Hashtags:', hashtags);
print('Excitement:', rating);

Complete Example

Here's a full example showing the prompt functions in a real workflow:

// Debug incoming event
print('Processing play:', message.type);

// Build a detailed prompt
const prompt = `
You are a sports commentator. Generate an exciting social media post about this NFL play.

Play details:
- Type: ${message.type}
- Team: ${message.team}
- Player: ${message.player}
- Description: ${message.description}

Keep it under 280 characters. Include relevant emojis.
`;

try {
  // Call OpenAI
  const response = await promptCallToken(
    OPENAI_API_KEY,
    prompt,
    'https://api.openai.com/v1/chat/completions'
  );

  print('Generated content:', response);

  // Post to Mastodon using the latest response
  await postLatestPromptToMastodon();

  print('Posted successfully!');

} catch (error) {
  print('Error:', error.message);
}

Configuration

Default Model

The prompt functions use gpt-4o-mini as the default model. The request body is structured as:

{
  "model": "gpt-4o-mini",
  "messages": [
    { "role": "user", "content": "<promptText>" }
  ]
}

Environment Variables

Set up these environment variables in the Admin Console for your organization:

Variable | Description
OPENAI_API_KEY | Your OpenAI API key
ANTHROPIC_API_KEY | Your Anthropic API key (if using Claude)

createEmbedding

Generate text embeddings using AWS Bedrock Titan.

Signature

await createEmbedding(text)

Description

Generates a vector embedding for the given text using AWS Bedrock's Titan embedding model. This is useful for semantic search, similarity comparisons, and RAG (Retrieval Augmented Generation) applications.

Parameters

Parameter | Type | Description
text | string | The text to generate an embedding for

Returns

Promise<number[]> — A float array representing the embedding vector (1,536 dimensions for Titan v1; 1,024 by default for Titan v2).

Example

// Generate embedding for semantic search
const embedding = await createEmbedding(message.content);
print('Embedding dimensions:', embedding.length);

// Store in vector database for later retrieval
await storeInPinecone(message.id, embedding, {
  content: message.content,
  timestamp: message.timestamp
});
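
Because the embedding is a plain float array, similarity comparisons need no extra libraries. A helper like this (not part of the module) is enough for quick checks:

// Cosine similarity between two embedding vectors
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

const a = await createEmbedding('touchdown pass');
const b = await createEmbedding('scoring throw');
print('Similarity:', cosineSimilarity(a, b)); // closer to 1 means more similar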

Configuration

The embedding model can be configured in three ways (in order of precedence):

  1. Entity argument: Set embeddingModelId as an argument on the entity
  2. Environment variable: Set EMBEDDING_MODEL_ID in the environment
  3. Default: Uses amazon.titan-embed-text-v2:0

// In entity arguments:
//   argumentName: embeddingModelId
//   argumentValue: amazon.titan-embed-text-v1

const embedding = await createEmbedding('Sample text');
// Uses the model specified in the argument
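
Conceptually, the precedence reduces to a nullish-coalescing chain like this sketch; entityArgs and env are illustrative names, not real globals exposed to scripts:

// Illustrative only: how the three sources are consulted in order
const embeddingModelId =
  entityArgs.embeddingModelId           // 1. entity argument
  ?? env.EMBEDDING_MODEL_ID             // 2. environment variable
  ?? 'amazon.titan-embed-text-v2:0';    // 3. built-in default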

Environment Requirements

  • AWS_REGION must be set (defaults to us-east-2)
  • The ECS task role must have bedrock:InvokeModel permission for the embedding model

Error Handling

The prompt functions throw errors in these cases:

  • Missing URL: "Model URL is required"
  • HTTP errors: "HTTP <status>: <error message>"
  • Network errors: Connection failures
  • Bedrock errors: IAM permission issues, model not available

Always wrap prompt calls in try/catch blocks:

try {
  const response = await promptCallToken(token, prompt, url);
  // Handle success
} catch (error) {
  print('AI call failed:', error.message);
  // Handle failure gracefully
}

Bedrock-Specific Errors

try {
  const response = await promptCallToken(
    '',
    'Generate content',
    'bedrock://anthropic.claude-3-sonnet-20240229-v1:0'
  );
} catch (error) {
  // Common Bedrock errors:
  // - "AccessDeniedException" - Missing IAM permissions
  // - "ValidationException" - Invalid model ID
  // - "ModelNotReadyException" - Model not provisioned
  print('Bedrock error:', error.message);
}