Introduction

Welcome to RocketWave Pulse — a real-time event stream processing platform that enables powerful workflow automation with AI/LLM integration, multi-provider model support, and transaction billing.

Overview

RocketWave Pulse consists of four services and a shared library that work together to process events, execute workflows, manage schedules, and send notifications.

Admin Console

The Admin Console is a Next.js web application for managing your stream processing infrastructure:

  • Visual workflow builder with drag-and-drop canvas
  • Universal model system supporting OpenAI, Anthropic, AWS Bedrock, Google, and xAI
  • Prompt Builder for testing and analyzing AI prompts with sentiment analysis
  • Environment variable management with write-only security
  • Organization and team management with role-based access control
  • Transaction usage monitoring and billing enforcement
  • AI Assistant with RAG-powered entity creation
  • Image recognition support for vision-capable models

Stream Consumer

The Consumer is a Node.js application that processes events from AWS Kinesis streams. It executes user-defined workflows in isolated V8 sandboxes, providing:

  • High throughput event processing with configurable worker pools
  • Secure isolation via isolated-vm with memory and timeout limits
  • Dynamic script execution with injected helper functions
  • Multi-provider AI/LLM integration via universal model adapters (OpenAI, Anthropic, Bedrock, Google)
  • Image recognition for vision-capable models
  • Transaction billing enforcement with per-organization limits and real-time plan change notifications via Redis pub/sub
  • Social media publishing, vector database, templating, and email capabilities
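As a rough illustration of sandboxed script execution, the sketch below uses Node's built-in vm module with a timeout. The real Consumer uses isolated-vm, which adds hard memory limits that vm cannot enforce; the message shape and the `emit` helper here are illustrative, not the platform's actual injected API.

```typescript
// Sketch only: Node's built-in vm module stands in for isolated-vm so the
// example is self-contained. `emit` is an illustrative injected helper.
import * as vm from "node:vm";

function runWorkflowScript(source: string, message: object): unknown[] {
  const results: unknown[] = [];
  const sandbox = {
    message,                                        // the incoming event payload
    emit: (value: unknown) => results.push(value),  // illustrative helper
  };
  const context = vm.createContext(sandbox);
  // `timeout` aborts runaway scripts, mirroring the Consumer's execution limits
  vm.runInContext(source, context, { timeout: 100 });
  return results;
}

const out = runWorkflowScript(
  "emit({ ok: message['type'] === 'order.created' })",
  { type: "order.created" }
);
```

The sandbox only sees what is explicitly placed on it, which is the same principle the Consumer relies on when injecting helper functions.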

Scheduler

The Scheduler is a long-running worker that executes workflow triggers on a schedule:

  • Cron schedules for recurring workflow execution (e.g., every 5 minutes, daily at 9am)
  • One-time datetime schedules for future execution
  • Timezone-aware scheduling with IANA timezone support
  • Fires triggers through the receiver into Kinesis for consumer processing
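For the recurring case, the next fire time of a fixed-interval schedule can be sketched as below. This is illustrative arithmetic only, not the Scheduler's actual cron implementation, which supports full cron expressions and IANA timezones.

```typescript
// Minimal sketch of interval-style schedule math (illustrative only).
function nextRun(lastRunMs: number, intervalMinutes: number): number {
  const interval = intervalMinutes * 60_000;
  // Align to the next interval boundary after the last run
  return Math.floor(lastRunMs / interval) * interval + interval;
}

const last = Date.UTC(2024, 0, 1, 9, 3); // 2024-01-01T09:03:00Z
const next = nextRun(last, 5);           // next 5-minute boundary
```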

Notifications

The Notifications service is a YAML-driven worker that sends alerts to organization administrators:

  • Usage warning emails when transaction usage reaches 70% of the plan limit
  • Usage exceeded alerts when organizations hit 100% of their limit
  • S3-based cooldown to prevent duplicate notifications
  • Powered by AWS SES for email delivery
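The threshold-and-cooldown behavior above can be sketched as follows, with an in-memory map standing in for the S3 cooldown store and alert names returned instead of SES email being sent; all names are illustrative.

```typescript
// Sketch of the 70%/100% alert thresholds with duplicate suppression.
type Alert = "usage_warning" | "usage_exceeded" | null;

const sent = new Map<string, number>();   // "orgId:alert" -> last sent (ms)
const COOLDOWN_MS = 24 * 60 * 60 * 1000;  // illustrative cooldown window

function checkUsage(orgId: string, used: number, limit: number, nowMs: number): Alert {
  const alert: Alert =
    used >= limit ? "usage_exceeded" : used >= 0.7 * limit ? "usage_warning" : null;
  if (!alert) return null;
  const key = `${orgId}:${alert}`;
  const last = sent.get(key);
  if (last !== undefined && nowMs - last < COOLDOWN_MS) return null; // deduped
  sent.set(key, nowMs);
  return alert; // the real service would send an SES email here
}
```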

Shared Library

The @rocketwave/stream-shared package provides shared TypeScript utilities used across services:

  • Universal multi-provider LLM adapters (OpenAI, Anthropic, Bedrock, Google)
  • V8 script modules (prompt, STM, Pinecone, Mastodon, SendGrid, templates, and more)
  • Workflow evaluator engine with entity evaluators
  • Transaction billing utilities for Redis-based usage counting and plan change pub/sub
  • Handlebars-style variable parsing and prompt assembly
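A minimal sketch of Handlebars-style variable substitution, as used in prompt assembly, might look like this; the shared library's actual API and edge-case handling may differ.

```typescript
// Replaces {{name}} placeholders with values; unknown variables are left intact.
function renderTemplate(tpl: string, vars: Record<string, string>): string {
  return tpl.replace(/\{\{\s*([\w.]+)\s*\}\}/g, (match: string, name: string) =>
    name in vars ? vars[name] : match
  );
}

const prompt = renderTemplate(
  "Summarize {{topic}} for {{audience}}.",
  { topic: "Kinesis sharding", audience: "operators" }
);
```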

Architecture

┌──────────────────┐    ┌──────────────────┐    ┌──────────────────┐
│  Event Sources   │───▶│    Receiver /    │───▶│   AWS Kinesis    │
│ (APIs, webhooks) │    │   API Gateway    │    │      Stream      │
└──────────────────┘    └──────────────────┘    └────────┬─────────┘
                                                         │
┌──────────────────┐                                     │
│    Scheduler     │──── POST trigger ──▶ Receiver ──────┘
│ (cron/datetime)  │                                     │
└──────────────────┘                                     ▼
                                                ┌──────────────────┐
┌──────────────────┐                            │     Consumer     │
│  Admin Console   │◀── Postgres/Redis ────────▶│   (V8 Isolate)   │
│    (Next.js)     │                            │   • Evaluator    │
└──────────────────┘                            │   • Worker Pool  │
                                                └────────┬─────────┘
         ┌───────────────────────────────────────────────┤
         │                                               │
┌────────▼─────────┐                            ┌────────▼─────────┐
│  Notifications   │                            │   Outputs (S3)   │
│   (SES email)    │                            │   • success/     │
│  • usage alerts  │◀── Redis counters          │   • fail/        │
└──────────────────┘                            │   • ignore/      │
                                                └──────────────────┘

Data Flow

  1. Event sources send JSON messages to the Receiver endpoint
  2. The Receiver publishes messages to an AWS Kinesis stream
  3. The Scheduler fires cron/datetime triggers through the same Receiver
  4. The Consumer reads from Kinesis using a KCL-style coordinator with DynamoDB leases
  5. Each message is matched to workflows by organization and environment
  6. The Evaluator traverses the workflow tree, executing entities in V8 isolates
  7. Results are written to S3 (success, fail, or ignore)
  8. Redis tracks per-organization transaction counts for billing enforcement
  9. When plans or transaction limits change, the Admin publishes to a Redis pub/sub channel so the Consumer re-evaluates immediately
  10. The Notifications service monitors Redis counters and sends email alerts via SES
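Step 5 above, matching a message to workflows, can be sketched as a simple filter; the field names are illustrative, not the platform's actual schema.

```typescript
// Each message is routed only to workflows in the same org and environment.
interface Workflow { id: string; orgId: string; env: string }

function matchWorkflows(
  msg: { orgId: string; env: string },
  workflows: Workflow[]
): Workflow[] {
  return workflows.filter((w) => w.orgId === msg.orgId && w.env === msg.env);
}

const matched = matchWorkflows(
  { orgId: "org-1", env: "prod" },
  [
    { id: "wf-a", orgId: "org-1", env: "prod" },
    { id: "wf-b", orgId: "org-1", env: "dev" },
    { id: "wf-c", orgId: "org-2", env: "prod" },
  ]
);
```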

Key Concepts

Workflows

Workflows define how events are processed. Each workflow is a directed tree of entities:

  • Events — Entry points that evaluate conditions against incoming messages
  • Prompts — AI/LLM execution steps using the universal model system
  • Actions — Custom JavaScript code for side effects (API calls, publishing, storage)
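The directed-tree shape can be sketched as below; the entity kinds come from the list above, while the condition and trace mechanics are simplified illustrations of how an evaluator might prune branches.

```typescript
// A workflow node: its condition gates whether its subtree executes.
type Entity = {
  kind: "event" | "prompt" | "action";
  run: (msg: Record<string, unknown>) => boolean; // true = continue down
  children: Entity[];
};

function evaluate(node: Entity, msg: Record<string, unknown>, trace: string[]): void {
  if (!node.run(msg)) return;  // condition failed: prune this branch
  trace.push(node.kind);
  for (const child of node.children) evaluate(child, msg, trace);
}

const tree: Entity = {
  kind: "event",
  run: (m) => m["type"] === "signup",       // entry-point condition
  children: [
    { kind: "prompt", run: () => true, children: [] },
    { kind: "action", run: () => true, children: [] },
  ],
};
const trace: string[] = [];
evaluate(tree, { type: "signup" }, trace);
```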

Universal Model System

The platform supports multiple AI providers through a unified configuration system:

Component              Purpose
─────────────────────  ──────────────────────────────────────────────────────────────────
Model Providers        Supported AI platforms (OpenAI, Anthropic, Bedrock, Google, xAI)
Model Definitions      Specific models within a provider (e.g., GPT-4o, Claude 3.5 Sonnet)
Model Credentials      API keys and authentication (system-wide or per-organization)
Model Configurations   Ready-to-use configs combining a definition + credential + defaults
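How these components compose into a ready-to-use configuration might be sketched like this; every field name here is an assumption for illustration, not the platform's actual schema.

```typescript
// Illustrative shapes for the three inputs to a configuration.
interface ModelDefinition { provider: string; model: string }
interface ModelCredential { apiKeyRef: string }              // stored write-only
interface ModelDefaults { temperature: number; maxTokens: number }

// A configuration is the merge of a definition, a credential, and defaults.
function buildConfiguration(
  def: ModelDefinition,
  cred: ModelCredential,
  defaults: ModelDefaults
) {
  return { ...def, ...cred, ...defaults };
}

const config = buildConfiguration(
  { provider: "openai", model: "gpt-4o" },
  { apiKeyRef: "org-123/openai" },
  { temperature: 0.2, maxTokens: 1024 }
);
```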

Scripts

Scripts are JavaScript functions injected into the V8 execution context. They provide capabilities like:

  • Calling AI/LLM services (multi-provider)
  • Image recognition with vision models
  • Posting to social media (Mastodon)
  • Sending emails (SendGrid)
  • Vector database operations (Pinecone)
  • Short-term memory (S3)
  • Jinja2/Nunjucks templating
  • Redis pub/sub messaging
  • Sports data (SportRadar NFL)

See the Scripts Reference for complete documentation.

Transaction Billing

Each organization has a transaction limit and overage policy stored directly on its subscription record. Defaults are set from PlanConfiguration when a plan is assigned and can be overridden per-organization by system administrators.

The Consumer enforces limits in real time using Redis counters. When limits or plans change, the Admin Console publishes a notification via Redis pub/sub so the Consumer re-evaluates immediately rather than waiting for the next cache refresh. Messages exceeding a hard_limit are routed to the ignore queue; a warn policy allows processing with a logged warning. The Notifications service sends email alerts at the 70% and 100% thresholds.
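The enforcement decision can be sketched as follows, with an in-memory counter standing in for Redis and illustrative names throughout.

```typescript
// hard_limit routes over-limit messages to ignore; warn logs and allows them.
type Policy = "hard_limit" | "warn";

const counters = new Map<string, number>(); // orgId -> transactions used

function recordTransaction(
  orgId: string,
  limit: number,
  policy: Policy
): "process" | "ignore" {
  const used = (counters.get(orgId) ?? 0) + 1;
  counters.set(orgId, used);
  if (used > limit) {
    return policy === "hard_limit" ? "ignore" : "process"; // warn: log + allow
  }
  return "process";
}
```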

Environment Variables

Sensitive data like API keys are stored as environment variables at the organization level and securely injected into workflow execution contexts. Values are write-only and never exposed through the API or UI.
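Write-only semantics can be sketched as below: a value can be set and injected into an execution context, but any read through the API surface returns only a mask. This is a minimal illustration, not the actual implementation.

```typescript
const store = new Map<string, string>();

function setVariable(name: string, value: string): void {
  store.set(name, value);
}

// The API/UI never returns the stored value, only a mask.
function listVariables(): { name: string; value: string }[] {
  return Array.from(store.keys()).map((name) => ({ name, value: "••••••" }));
}

// The full value is only materialized inside the execution context.
function injectInto(env: Record<string, string>): void {
  store.forEach((value, name) => { env[name] = value; });
}
```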

Getting Started

  1. Set up the Admin Console to create your organization and configure environments
  2. Configure model credentials for your preferred AI provider (OpenAI, Anthropic, etc.)
  3. Create model configurations combining a model definition with your credentials
  4. Build workflows using the visual canvas editor
  5. Add entities (events, prompts, actions) with conditions and scripts
  6. Test with the Prompt Builder to validate AI prompts before deployment
  7. Set up schedules for automated recurring workflow execution
  8. Monitor results in the Processed Messages screen

Next Steps