Assistant Research
The Assistant Research script module executes AI assistant research plans by making MCP API calls and vector searches. This is used internally by the platform's AI Assistant workflow to gather context before generating responses.
Functions
executeResearchPlan
Execute a complete research plan from the upstream prompt entity's JSON output.
```js
const results = await executeResearchPlan();
```
Returns: An object containing:
| Field | Type | Description |
|---|---|---|
| `researchResults` | object | Map of call ID to `{ success, data }` or `{ success, error }` |
| `intent` | string | The detected user intent from the research plan |
| `navigationSuggestion` | object | Suggested UI navigation action (if any) |
| `reasoning` | string | The planner's reasoning about the request |
Side effects: Sets the following V8 globals for downstream entities:
- `researchResults` — The full results map
- `intent` — The detected intent string
- `navigationSuggestion` — Navigation suggestion object
- `reasoning` — The planner's reasoning
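A downstream entity can consume the returned object as sketched below. `executeResearchPlan` is stubbed here with a plausible payload so the shapes (not the values) are what matter; the real function performs MCP calls and vector searches before returning:

```javascript
// Sketch: consuming executeResearchPlan()'s return value.
// The stub below mimics the documented shape for illustration only.
async function executeResearchPlan() {
  return {
    researchResults: {
      workflows: { success: true, data: [{ id: "wf-1", name: "Demo" }] },
      docs: { success: false, error: "vector store unavailable" },
    },
    intent: "list_workflows",
    navigationSuggestion: { page: "workflows" },
    reasoning: "User wants to see their workflows",
  };
}

async function main() {
  const { researchResults, intent } = await executeResearchPlan();
  console.log("intent:", intent);
  for (const [id, result] of Object.entries(researchResults)) {
    // Each entry is { success: true, data } or { success: false, error }.
    console.log(id, result.success ? "ok" : `failed: ${result.error}`);
  }
}
main();
```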
validateAssistantContext
Validate that the current workflow execution is running inside the assistant system org/env.
```js
const { clientOrganizationId, clientEnvironmentId } = await validateAssistantContext();
```
Throws an error if the workflow's org/env does not match ASSISTANT_WORKFLOW_ORG_ID / ASSISTANT_WORKFLOW_ENV_ID.
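A minimal sketch of this guard, assuming the workflow's own org/env IDs are passed in as parameters (the real function reads them from the execution context, which is not shown here):

```javascript
// Hypothetical guard: reject execution outside the assistant system org/env.
// workflowOrgId / workflowEnvId are assumed parameters for illustration.
function validateAssistantContext(workflowOrgId, workflowEnvId, env = process.env) {
  if (
    workflowOrgId !== env.ASSISTANT_WORKFLOW_ORG_ID ||
    workflowEnvId !== env.ASSISTANT_WORKFLOW_ENV_ID
  ) {
    throw new Error("executeResearchPlan may only run in the assistant system org/env");
  }
  // The real function also resolves the client org/env the research is
  // scoped to; this sketch simply echoes the validated IDs.
  return { clientOrganizationId: workflowOrgId, clientEnvironmentId: workflowEnvId };
}
```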
mcpCall
Make an authenticated API call to the admin service.
```js
const data = await mcpCall(method, path, params);
```
| Parameter | Type | Description |
|---|---|---|
| `method` | string | HTTP method (`"GET"` or `"POST"`) |
| `path` | string | API path (e.g., `"/api/environments"`) |
| `params` | object | Query params (GET) or request body (POST) |
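As a sketch of how such a call could be assembled (assuming a standard `fetch`-based client; this is not the module's actual implementation), the request-building logic can be separated out so it is visible without a network:

```javascript
// Hypothetical sketch of mcpCall built on fetch. buildMcpRequest is split
// out so the URL/header handling can be shown and tested in isolation.
function buildMcpRequest(method, path, params = {}, env = process.env) {
  const url = new URL(path, env.ADMIN_API_URL);
  const options = {
    method,
    headers: {
      Authorization: `Bearer ${env.ADMIN_SERVICE_TOKEN}`,
      "Content-Type": "application/json",
    },
  };
  if (method === "GET") {
    // GET: params become query-string parameters...
    for (const [key, value] of Object.entries(params)) {
      url.searchParams.set(key, String(value));
    }
  } else {
    // ...POST: params become the JSON request body.
    options.body = JSON.stringify(params);
  }
  return { url: url.toString(), options };
}

async function mcpCall(method, path, params) {
  const { url, options } = buildMcpRequest(method, path, params);
  const res = await fetch(url, options);
  if (!res.ok) throw new Error(`mcpCall ${method} ${path} failed: ${res.status}`);
  return res.json();
}
```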
vectorSearch
Search the OpenAI vector store for documentation.
```js
const results = await vectorSearch(query, limit?);
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| `query` | string | — | Natural-language search query |
| `limit` | number | 5 | Maximum results to return |
Returns: Array of { title, content, similarity }
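For illustration, here is a stub with the documented signature and return shape; the real function queries the OpenAI vector store rather than a fixed list:

```javascript
// Stubbed vectorSearch showing the documented shape only.
async function vectorSearch(query, limit = 5) {
  const stubbed = [
    { title: "Creating workflows", content: "Open the Workflows page and choose New.", similarity: 0.91 },
    { title: "Workflow triggers", content: "A trigger starts a workflow on an event.", similarity: 0.84 },
    { title: "Prompt entities", content: "Prompt entities call an LLM with context.", similarity: 0.62 },
  ];
  return stubbed.slice(0, limit);
}

vectorSearch("how to create workflows", 2).then((results) => {
  // results: [{ title, content, similarity }, ...] with at most `limit` items
  for (const r of results) console.log(r.similarity.toFixed(2), r.title);
});
```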
Environment Variables
These are read from process.env on the Consumer (not from V8 context variables):
| Variable | Required | Description |
|---|---|---|
| `ASSISTANT_WORKFLOW_ORG_ID` | Yes | Organization ID for the assistant system workflow |
| `ASSISTANT_WORKFLOW_ENV_ID` | Yes | Environment ID for the assistant system workflow |
| `ADMIN_API_URL` | Yes | Base URL for the admin API (e.g., https://console.rocketwavelabs.io) |
| `ADMIN_SERVICE_TOKEN` | Yes | Bearer token for authenticating MCP calls |
| `OPENAI_VECTOR_API_KEY` | Yes | OpenAI API key for vector store search |
| `OPENAI_VECTOR_STORE_ID` | Yes | OpenAI vector store ID |
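Since all six variables are required, a Consumer could fail fast at startup with a check like this sketch (variable names from the table above; the check itself is illustrative, not part of the module):

```javascript
// Illustrative startup check: verify every required variable is present
// on the Consumer before research plans run.
const REQUIRED_ENV = [
  "ASSISTANT_WORKFLOW_ORG_ID",
  "ASSISTANT_WORKFLOW_ENV_ID",
  "ADMIN_API_URL",
  "ADMIN_SERVICE_TOKEN",
  "OPENAI_VECTOR_API_KEY",
  "OPENAI_VECTOR_STORE_ID",
];

function assertAssistantEnv(env = process.env) {
  const missing = REQUIRED_ENV.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(", ")}`);
  }
}
```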
How It Works
- The upstream Prompt entity generates a JSON research plan using `latestPromptResponseJson()`
- `executeResearchPlan()` reads the plan and:
  - Validates the assistant context (org/env guard)
  - Executes all `mcpCalls` in the plan (scoped to the client org/env)
  - Executes all `vectorQueries` in the plan
- Results are stored as V8 globals for downstream entities to use
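The steps above can be sketched as follows, with `mcpCall` and `vectorSearch` injected so the control flow is visible without the platform. This is a simplification, not the actual implementation; note that errors are captured per call so one failure does not abort the whole plan:

```javascript
// Simplified sketch of executeResearchPlan's control flow.
async function runResearchPlan(plan, { mcpCall, vectorSearch }) {
  const researchResults = {};

  for (const call of plan.mcpCalls ?? []) {
    try {
      const data = await mcpCall(call.method, call.path, call.params);
      researchResults[call.id] = { success: true, data };
    } catch (err) {
      researchResults[call.id] = { success: false, error: String(err) };
    }
  }

  for (const q of plan.vectorQueries ?? []) {
    try {
      const data = await vectorSearch(q.query, q.limit);
      researchResults[q.id] = { success: true, data };
    } catch (err) {
      researchResults[q.id] = { success: false, error: String(err) };
    }
  }

  // In the platform these become V8 globals for downstream entities.
  return {
    researchResults,
    intent: plan.intent,
    navigationSuggestion: plan.navigationSuggestion,
    reasoning: plan.reasoning,
  };
}
```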
Research Plan Format
The upstream prompt generates a plan like:
```json
{
  "intent": "list_workflows",
  "reasoning": "User wants to see their workflows",
  "navigationSuggestion": { "page": "workflows" },
  "mcpCalls": [
    {
      "id": "workflows",
      "method": "GET",
      "path": "/api/workflows",
      "params": { "limit": "10" }
    }
  ],
  "vectorQueries": [
    {
      "id": "docs",
      "query": "how to create workflows",
      "limit": 3
    }
  ]
}
```
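A consumer might sanity-check a plan of this shape before executing it. The validator below is illustrative, not part of the module, and only checks the fields shown in the example above:

```javascript
// Illustrative validator for the research plan shape shown above.
function validatePlan(plan) {
  const errors = [];
  if (typeof plan.intent !== "string") errors.push("intent must be a string");
  for (const call of plan.mcpCalls ?? []) {
    if (!["GET", "POST"].includes(call.method)) {
      errors.push(`mcpCall ${call.id}: method must be "GET" or "POST"`);
    }
    if (typeof call.path !== "string" || !call.path.startsWith("/")) {
      errors.push(`mcpCall ${call.id}: path must start with /`);
    }
  }
  for (const q of plan.vectorQueries ?? []) {
    if (typeof q.query !== "string" || q.query.length === 0) {
      errors.push(`vectorQuery ${q.id}: query is required`);
    }
  }
  return errors; // empty array means the plan looks well-formed
}
```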
Related Topics
- AI Assistant — Assistant overview and usage
- MCP API — MCP tool and resource documentation
- Scripts Overview — All available script functions