Model Entity

The Model entity defines AI/LLM service configurations for use in Prompt entities. Models store authentication credentials and endpoint URLs for calling language model services.

Overview

Models are reusable AI service configurations that can be attached to Prompt entities. They support:

  • Bearer token authentication (OpenAI, Anthropic direct)
  • Client key/secret authentication (custom APIs)
  • AWS Bedrock via special bedrock:// URL format (IAM auth)

Model Types

User Models

Organization-specific models created by users:

| Field | Required | Description |
| --- | --- | --- |
| organizationId | Yes | Owner organization UUID |
| name | Yes | Display name |
| modelUrl | Yes | API endpoint or bedrock://model-id |
| token | Conditional | Bearer token (OR clientKey/secretKey) |
| clientKey | Conditional | Client key (requires secretKey) |
| secretKey | Conditional | Secret key (requires clientKey) |
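The conditional credential rule above can be sketched as a validator. This is a hypothetical helper for illustration, not part of the API:

```python
def validate_credentials(model: dict) -> None:
    """Enforce the conditional credential rule: supply either a bearer
    token OR a clientKey/secretKey pair (never a partial pair)."""
    has_token = bool(model.get("token"))
    has_client_key = bool(model.get("clientKey"))
    has_secret_key = bool(model.get("secretKey"))

    # clientKey and secretKey are only valid together
    if has_client_key != has_secret_key:
        raise ValueError("clientKey and secretKey must be provided together")
    # at least one complete credential must be present
    if not has_token and not has_client_key:
        raise ValueError("provide either token or clientKey/secretKey")
```

A model may carry both a token and a key pair; in that case the runtime picks one based on the URL, as described under Model Execution below.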

System Models

Platform-wide models managed by admins:

| Field | Required | Description |
| --- | --- | --- |
| organizationId | No | NULL for system models |
| modelType | Yes | Set to "system" |
| isVisible | Yes | Show in org model selectors |
| name | Yes | Display name |
| modelUrl | Yes | API endpoint or bedrock://model-id |

API Endpoints

| Operation | Method | Endpoint | Permission |
| --- | --- | --- | --- |
| List Models | GET | /api/models | models:read |
| Create Model | POST | /api/models | models:create |
| Get Model | GET | /api/models/{id} | models:read |
| Update Model | PUT | /api/models/{id} | models:update |
| Delete Model | DELETE | /api/models/{id} | models:delete |

Create User Model

POST /api/models
Content-Type: application/json

{
  "organizationId": "550e8400-e29b-41d4-a716-446655440000",
  "name": "OpenAI GPT-4o",
  "description": "Production OpenAI model",
  "modelUrl": "https://api.openai.com/v1/chat/completions",
  "token": "sk-proj-xxxxxxxxxxxxx"
}

Response:

{
  "id": "770e8400-e29b-41d4-a716-446655440001",
  "organizationId": "550e8400-e29b-41d4-a716-446655440000",
  "name": "OpenAI GPT-4o",
  "description": "Production OpenAI model",
  "modelUrl": "https://api.openai.com/v1/chat/completions",
  "modelType": "user",
  "isVisible": true,
  "hasToken": true,
  "hasClientKey": false,
  "createdAt": "2025-12-15T10:00:00.000Z",
  "updatedAt": "2025-12-15T10:00:00.000Z"
}

Security

The token, clientKey, and secretKey values are never returned in API responses. Only the hasToken and hasClientKey boolean flags are returned.


Create Bedrock Model

AWS Bedrock models use the special bedrock:// URL format and IAM authentication:

POST /api/models
Content-Type: application/json

{
  "organizationId": "550e8400-e29b-41d4-a716-446655440000",
  "name": "Claude 3 Sonnet",
  "description": "AWS Bedrock Claude 3 Sonnet",
  "modelUrl": "bedrock://anthropic.claude-3-sonnet-20240229-v1:0",
  "token": "iam"
}

Bedrock URL Format:

bedrock://<model-id>
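Routing on this URL format can be sketched as follows (a hypothetical helper, shown only to illustrate the convention):

```python
BEDROCK_PREFIX = "bedrock://"

def parse_model_url(model_url: str) -> tuple:
    """Split a modelUrl into (provider, target): the Bedrock model ID
    for bedrock:// URLs, otherwise the HTTPS endpoint unchanged."""
    if model_url.startswith(BEDROCK_PREFIX):
        return ("bedrock", model_url[len(BEDROCK_PREFIX):])
    return ("https", model_url)
```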

Supported Bedrock Models:

| Provider | Model ID |
| --- | --- |
| Anthropic Claude 3 Sonnet | anthropic.claude-3-sonnet-20240229-v1:0 |
| Anthropic Claude 3 Haiku | anthropic.claude-3-haiku-20240307-v1:0 |
| Amazon Nova Pro | amazon.nova-pro-v1:0 |
| Amazon Nova Lite | amazon.nova-lite-v1:0 |
| Amazon Titan Text | amazon.titan-text-express-v1 |

IAM Authentication

For Bedrock models, the token field value is ignored. Authentication uses IAM roles configured on the ECS task. Set token to any non-empty value (e.g., "iam") to satisfy validation.


Create System Model

System models are available to all organizations:

POST /api/models
Content-Type: application/json

{
  "name": "Platform Claude 3",
  "description": "Shared Bedrock model for all orgs",
  "modelUrl": "bedrock://anthropic.claude-3-sonnet-20240229-v1:0",
  "token": "iam",
  "modelType": "system",
  "isVisible": true
}

List Models

Get Organization Models

GET /api/models?organizationId=550e8400-e29b-41d4-a716-446655440000

Get Organization + System Models

Use for model selection dropdowns:

GET /api/models?organizationId=550e8400-e29b-41d4-a716-446655440000&includeSystem=true

Get Only System Models

For admin management:

GET /api/models?modelType=system
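The three query variants above can be built with a small helper. This is illustrative only; the parameter names match the endpoints shown:

```python
from urllib.parse import urlencode

def models_query(organization_id=None, include_system=False, model_type=None):
    """Build the path and query string for GET /api/models."""
    params = {}
    if organization_id:
        params["organizationId"] = organization_id
    if include_system:
        params["includeSystem"] = "true"
    if model_type:
        params["modelType"] = model_type
    return "/api/models" + ("?" + urlencode(params) if params else "")
```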

Update Model

PUT /api/models/{id}
Content-Type: application/json

{
  "name": "Updated Model Name",
  "description": "Updated description"
}

To update credentials, include the new token or keys:

PUT /api/models/{id}
Content-Type: application/json

{
  "token": "new-api-key-here"
}

Using Models with Prompts

Attach Model to Prompt Entity

When creating a Prompt entity, reference a model by ID:

POST /api/workflow-entities
Content-Type: application/json

{
  "organizationId": "550e8400-e29b-41d4-a716-446655440000",
  "environmentId": "660e8400-e29b-41d4-a716-446655440001",
  "name": "Generate Tweet",
  "workflowEntityTypeId": "<prompt-type-uuid>",
  "modelId": "770e8400-e29b-41d4-a716-446655440001",
  "prompt": "Generate a tweet about: {{message.content}}",
  "tfCondition": "Single Path"
}

Model Execution at Runtime

When a Prompt entity with a model is executed:

  1. Consumer retrieves model credentials from cache
  2. Model URL determines authentication method:
    • bedrock:// → Uses IAM authentication
    • https:// + token → Bearer token auth
    • https:// + clientKey/secretKey → Custom header auth
  3. Prompt is sent to model endpoint
  4. Response stored via latestPromptResponse()
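The dispatch in step 2 might be sketched as follows; the header names for clientKey/secretKey auth are assumptions for illustration, since the source does not specify them:

```python
def auth_headers(model: dict) -> dict:
    """Choose request headers from the model's URL and stored credentials.
    Bedrock requests carry no static header: they are signed via the
    ECS task's IAM role instead."""
    if model["modelUrl"].startswith("bedrock://"):
        return {}
    if model.get("token"):
        return {"Authorization": "Bearer " + model["token"]}
    # Hypothetical header names for client key/secret auth:
    return {"X-Client-Key": model["clientKey"], "X-Secret-Key": model["secretKey"]}
```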

Best Practices

Security

  1. Never share tokens — Each org should have its own API keys
  2. Use Bedrock for production — IAM roles are more secure than static tokens
  3. Rotate credentials — Update tokens periodically

Organization

  1. Descriptive names — Include provider and capability: "OpenAI GPT-4o for Tweets"
  2. Use system models — For shared platform capabilities
  3. Hide deprecated models — Set isVisible: false on models you retire

Cost Management

  1. Choose appropriate model — Use cheaper models for simple tasks
  2. Monitor usage — Track which models are called most
  3. Set limits — Use environment variables to cap requests