Model Entity
The Model entity defines AI/LLM service configurations for use in Prompt entities. Models store authentication credentials and endpoint URLs for calling language model services.
Overview
Models are reusable AI service configurations that can be attached to Prompt entities. They support:
- Bearer token authentication (OpenAI, Anthropic direct)
- Client key/secret authentication (custom APIs)
- AWS Bedrock via the special bedrock:// URL format (IAM auth)
Model Types
User Models
Organization-specific models created by users:
| Field | Required | Description |
|---|---|---|
| organizationId | Yes | Owner organization UUID |
| name | Yes | Display name |
| modelUrl | Yes | API endpoint or bedrock://model-id |
| token | Conditional | Bearer token (OR clientKey/secretKey) |
| clientKey | Conditional | Client key (requires secretKey) |
| secretKey | Conditional | Secret key (requires clientKey) |
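The conditional credential rules above (a bearer token, or a clientKey/secretKey pair) can be sketched as a small validation helper. This is a hypothetical check for illustration, not the platform's actual validator:

```python
def validate_credentials(model: dict) -> list[str]:
    """Check the token-OR-key-pair rule for a user model payload."""
    errors = []
    has_token = bool(model.get("token"))
    has_client = bool(model.get("clientKey"))
    has_secret = bool(model.get("secretKey"))
    # Either a bearer token, or a complete clientKey/secretKey pair, is required.
    if not has_token and not (has_client and has_secret):
        errors.append("provide either token, or both clientKey and secretKey")
    # clientKey and secretKey are only valid together.
    if has_client != has_secret:
        errors.append("clientKey and secretKey must be provided together")
    return errors
```

An empty list means the payload satisfies the credential rules; otherwise the list describes what is missing.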
System Models
Platform-wide models managed by admins:
| Field | Required | Description |
|---|---|---|
| organizationId | No | NULL for system models |
| modelType | Yes | Set to "system" |
| isVisible | Yes | Show in org model selectors |
| name | Yes | Display name |
| modelUrl | Yes | API endpoint or bedrock://model-id |
API Endpoints
| Operation | Method | Endpoint | Permission |
|---|---|---|---|
| List Models | GET | /api/models | models:read |
| Create Model | POST | /api/models | models:create |
| Get Model | GET | /api/models/{id} | models:read |
| Update Model | PUT | /api/models/{id} | models:update |
| Delete Model | DELETE | /api/models/{id} | models:delete |
Create User Model
POST /api/models
Content-Type: application/json
{
"organizationId": "550e8400-e29b-41d4-a716-446655440000",
"name": "OpenAI GPT-4o",
"description": "Production OpenAI model",
"modelUrl": "https://api.openai.com/v1/chat/completions",
"token": "sk-proj-xxxxxxxxxxxxx"
}
Response:
{
"id": "770e8400-e29b-41d4-a716-446655440001",
"organizationId": "550e8400-e29b-41d4-a716-446655440000",
"name": "OpenAI GPT-4o",
"description": "Production OpenAI model",
"modelUrl": "https://api.openai.com/v1/chat/completions",
"modelType": "user",
"isVisible": true,
"hasToken": true,
"hasClientKey": false,
"createdAt": "2025-12-15T10:00:00.000Z",
"updatedAt": "2025-12-15T10:00:00.000Z"
}
The token, clientKey, and secretKey values are never returned in API responses; only the hasToken and hasClientKey boolean flags indicate whether credentials are stored.
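The credential redaction described above can be sketched as a serialization step that strips secrets and exposes only presence flags. A minimal sketch, assuming the stored record keeps credentials as plain fields; the real serializer is not shown here:

```python
SECRET_FIELDS = ("token", "clientKey", "secretKey")

def to_api_response(record: dict) -> dict:
    """Strip stored credentials; expose only boolean presence flags."""
    out = {k: v for k, v in record.items() if k not in SECRET_FIELDS}
    out["hasToken"] = bool(record.get("token"))
    out["hasClientKey"] = bool(record.get("clientKey"))
    return out
```

Every read path (list, get, create, update responses) would pass records through a step like this, so raw credentials never leave the server.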
Create Bedrock Model
AWS Bedrock models use the special bedrock:// URL format and IAM authentication:
POST /api/models
Content-Type: application/json
{
"organizationId": "550e8400-e29b-41d4-a716-446655440000",
"name": "Claude 3 Sonnet",
"description": "AWS Bedrock Claude 3 Sonnet",
"modelUrl": "bedrock://anthropic.claude-3-sonnet-20240229-v1:0",
"token": "iam"
}
Bedrock URL Format:
bedrock://<model-id>
Supported Bedrock Models:
| Provider | Model ID |
|---|---|
| Anthropic Claude 3 Sonnet | anthropic.claude-3-sonnet-20240229-v1:0 |
| Anthropic Claude 3 Haiku | anthropic.claude-3-haiku-20240307-v1:0 |
| Amazon Nova Pro | amazon.nova-pro-v1:0 |
| Amazon Nova Lite | amazon.nova-lite-v1:0 |
| Amazon Titan Text | amazon.titan-text-express-v1 |
For Bedrock models, the token field value is ignored. Authentication uses IAM roles configured on the ECS task. Set token to any non-empty value (e.g., "iam") to satisfy validation.
Create System Model
System models are available to all organizations:
POST /api/models
Content-Type: application/json
{
"name": "Platform Claude 3",
"description": "Shared Bedrock model for all orgs",
"modelUrl": "bedrock://anthropic.claude-3-sonnet-20240229-v1:0",
"token": "iam",
"modelType": "system",
"isVisible": true
}
List Models
Get Organization Models
GET /api/models?organizationId=550e8400-e29b-41d4-a716-446655440000
Get Organization + System Models
Use for model selection dropdowns:
GET /api/models?organizationId=550e8400-e29b-41d4-a716-446655440000&includeSystem=true
Get Only System Models
For admin management:
GET /api/models?modelType=system
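The three query variants above can be sketched as one filter over stored models. This is a hypothetical reading of the query semantics (modelType takes precedence; includeSystem widens an organization query), not the actual server implementation:

```python
def filter_models(models, organization_id=None, include_system=False, model_type=None):
    """Sketch of the list-endpoint query semantics."""
    result = []
    for m in models:
        if model_type is not None:
            # ?modelType=... filters purely by type (admin view).
            if m.get("modelType") == model_type:
                result.append(m)
            continue
        if m.get("organizationId") == organization_id:
            result.append(m)
        elif include_system and m.get("modelType") == "system":
            # ?includeSystem=true adds platform-wide models.
            result.append(m)
    return result
```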
Update Model
PUT /api/models/{id}
Content-Type: application/json
{
"name": "Updated Model Name",
"description": "Updated description"
}
To update credentials, include the new token or keys:
PUT /api/models/{id}
Content-Type: application/json
{
"token": "new-api-key-here"
}
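The update behavior shown above — omitted fields keep their old values, included fields (credentials among them) are replaced — can be sketched as a merge. The field allowlist is an assumption for illustration:

```python
# Assumed set of fields a PUT payload may change.
UPDATABLE = {"name", "description", "modelUrl", "token", "clientKey", "secretKey", "isVisible"}

def apply_update(record: dict, payload: dict) -> dict:
    """Merge a PUT payload into a stored record; omitted fields keep old values."""
    updated = dict(record)  # do not mutate the original record
    for key, value in payload.items():
        if key in UPDATABLE:
            updated[key] = value
    return updated
```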
Using Models with Prompts
Attach Model to Prompt Entity
When creating a Prompt entity, reference a model by ID:
POST /api/workflow-entities
Content-Type: application/json
{
"organizationId": "550e8400-e29b-41d4-a716-446655440000",
"environmentId": "660e8400-e29b-41d4-a716-446655440001",
"name": "Generate Tweet",
"workflowEntityTypeId": "<prompt-type-uuid>",
"modelId": "770e8400-e29b-41d4-a716-446655440001",
"prompt": "Generate a tweet about: {{message.content}}",
"tfCondition": "Single Path"
}
Model Execution at Runtime
When a Prompt entity with a model is executed:
- Consumer retrieves model credentials from cache
- Model URL determines the authentication method:
  - bedrock:// → IAM authentication
  - https:// with token → Bearer token auth
  - https:// with clientKey/secretKey → custom header auth
- Prompt is sent to the model endpoint
- Response is stored via latestPromptResponse()
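The URL-based authentication dispatch above can be sketched as follows. The custom-auth header names (X-Client-Key, X-Secret-Key) are hypothetical placeholders; the actual headers are not specified in this document:

```python
def auth_for(model: dict) -> dict:
    """Pick an auth strategy from the model URL and stored credentials."""
    url = model["modelUrl"]
    if url.startswith("bedrock://"):
        return {"method": "iam"}  # token value is ignored for Bedrock
    if model.get("token"):
        return {"method": "bearer",
                "headers": {"Authorization": f"Bearer {model['token']}"}}
    # Header names below are illustrative, not the platform's real scheme.
    return {"method": "custom",
            "headers": {"X-Client-Key": model["clientKey"],
                        "X-Secret-Key": model["secretKey"]}}
```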
Best Practices
Security
- Never share tokens — Each org should have its own API keys
- Use Bedrock for production — IAM roles more secure than tokens
- Rotate credentials — Update tokens periodically
Organization
- Descriptive names — Include provider and capability: "OpenAI GPT-4o for Tweets"
- Use system models — For shared platform capabilities
- Hide deprecated — Set isVisible: false on old models
Cost Management
- Choose appropriate model — Use cheaper models for simple tasks
- Monitor usage — Track which models are called most
- Set limits — Use environment variables to cap requests
Related Topics
- Prompt Entity — Uses models for AI execution
- Prompt Scripts — Runtime model execution functions
- AWS Bedrock Support — Bedrock-specific details